Regression/Model Diagnostics.ipynb | ###Markdown
Model Diagnostics in Python

In this notebook, you will be trying out some of the model diagnostics you saw from Sebastian, but in your case there will only be two cases - either admitted or not admitted.

First, let's read in the necessary libraries and the dataset.
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score
from sklearn.model_selection import train_test_split
np.random.seed(42)
df = pd.read_csv('./admissions.csv')
df.head()
###Output
_____no_output_____
###Markdown
`1.` Change prestige to dummy variable columns that are added to `df`. Then divide your data into training and test data. Create your test set as 20% of the data, and use a random state of 0. Your response should be the `admit` column. [Here](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) are the docs, which you can also find with a quick Google search if you get stuck.
###Code
df[['prest_1', 'prest_2', 'prest_3', 'prest_4']] = pd.get_dummies(df['prestige'])
X = df.drop(['admit', 'prestige', 'prest_1'] , axis=1)
y = df['admit']
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.20, random_state=0)
###Output
_____no_output_____
###Markdown
`2.` Now use [sklearn's Logistic Regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) to fit a logistic model using `gre`, `gpa`, and 3 of your `prestige` dummy variables. For now, fit the logistic regression model without changing any of the hyperparameters. The usual steps are:
* Instantiate
* Fit (on train)
* Predict (on test)
* Score (compare predict to test)

As a first score, obtain the [confusion matrix](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html). Then answer the first question below about how well your model performed on the test data.
###Code
log_mod = LogisticRegression()
log_mod.fit(X_train, y_train)
preds = log_mod.predict(X_test)
confusion_matrix(y_test, preds)
###Output
_____no_output_____
###Markdown
`3.` Now, try out a few additional metrics: [precision](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html), [recall](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html), and [accuracy](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) are all popular metrics, which you saw with Sebastian. You could compute these directly from the confusion matrix, but you can also use these built-in functions in sklearn.

Another very popular set of metrics is [ROC curves and AUC](http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#sphx-glr-auto-examples-model-selection-plot-roc-py). These actually use the probability output of the logistic regression model, not just the label. [This](http://blog.yhat.com/posts/roc-curves.html) is also a great resource for understanding ROC curves and AUC.

Try out these metrics to answer the second quiz question below. I also provided the ROC plot below. The ideal case is for this to shoot all the way to the upper left-hand corner. Again, these are discussed in more detail in the Machine Learning Udacity program.
###Code
precision_score(y_test, preds)
recall_score(y_test, preds)
accuracy_score(y_test, preds)
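# Added sketch (not in the original notebook): the same three metrics can be
# recovered directly from the confusion matrix entries. Assumes sklearn's 2x2
# layout, where rows are true labels and columns are predicted labels.
tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
print('precision from matrix: {:.3f}'.format(tp / (tp + fp)))
print('recall from matrix: {:.3f}'.format(tp / (tp + fn)))
print('accuracy from matrix: {:.3f}'.format((tp + tn) / (tp + tn + fp + fn)))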
### Unless you install the ggplot library in the workspace, you will
### get an error when running this code!
from ggplot import *
from sklearn.metrics import roc_curve, auc
%matplotlib inline
preds = log_mod.predict_proba(X_test)[:,1]
fpr, tpr, _ = roc_curve(y_test, preds)
df = pd.DataFrame(dict(fpr=fpr, tpr=tpr))
ggplot(df, aes(x='fpr', y='tpr')) +\
geom_line() +\
geom_abline(linetype='dashed')
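# Added alternative (assumption: matplotlib is available, unlike ggplot, which the
# note above warns may not be installed). Kept commented out so the original cell
# is unchanged; note this cell also rebinds `df`, shadowing the admissions dataframe.
# import matplotlib.pyplot as plt
# plt.plot(fpr, tpr, label='ROC (AUC = {:.3f})'.format(auc(fpr, tpr)))
# plt.plot([0, 1], [0, 1], linestyle='--', color='gray')  # chance diagonal
# plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate'); plt.legend()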
###Output
_____no_output_____ |
1_processing/2_create_compendia.ipynb | ###Markdown
Create PAO1 and PA14 compendia

This notebook uses the observation from the [exploratory notebook](../0_explore_data/cluster_by_accessory_gene.ipynb) to bin samples into PAO1 or PA14 compendia.

A sample is considered PAO1 if the median gene expression of PA14 accessory genes is 0 and that of PAO1 accessory genes is > 0. Similarly, a sample is considered PA14 if the median gene expression of PA14 accessory genes is > 0 and that of PAO1 accessory genes is 0.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import pandas as pd
import seaborn as sns
from textwrap import fill
import matplotlib.pyplot as plt
from scripts import paths, utils
# User param
# same_threshold: if median accessory expression of PAO1 samples > same_threshold then this sample is binned as PAO1
# 25 threshold based on comparing expression of PAO1 SRA-labeled samples vs non-PAO1 samples
same_threshold = 25
# opp_threshold: if median accessory expression of PA14 samples < opp_threshold then this sample is binned as PAO1
# 25 threshold based on previous plot (eye-balling trying to avoid samples
# on the diagonal of explore_data/cluster_by_accessory_gene.ipynb plot)
opp_threshold = 25
###Output
_____no_output_____
###Markdown
Load data

The expression data being used is described in the [paper](link TBD) with source code [here](https://github.com/hoganlab-dartmouth/pa-seq-compendia)
###Code
# Expression data files
pao1_expression_filename = paths.PAO1_GE
pa14_expression_filename = paths.PA14_GE
# File containing table to map sample id to strain name
sample_to_strain_filename = paths.SAMPLE_TO_STRAIN
# Load expression data
pao1_expression = pd.read_csv(pao1_expression_filename, sep="\t", index_col=0, header=0)
pa14_expression = pd.read_csv(pa14_expression_filename, sep="\t", index_col=0, header=0)
# Load metadata
# Set index to experiment id, which is what we will use to map to expression data
sample_to_strain_table_full = pd.read_csv(sample_to_strain_filename, index_col=2)
###Output
_____no_output_____
###Markdown
Get core and accessory annotations
###Code
# Annotations are from BACTOME
# Gene ids from PAO1 are annotated with the homologous PA14 gene id and vice versa
pao1_annot_filename = paths.GENE_PAO1_ANNOT
pa14_annot_filename = paths.GENE_PA14_ANNOT
core_acc_dict = utils.get_my_core_acc_genes(
pao1_annot_filename, pa14_annot_filename, pao1_expression, pa14_expression
)
pao1_acc = core_acc_dict["acc_pao1"]
pa14_acc = core_acc_dict["acc_pa14"]
###Output
_____no_output_____
###Markdown
Format expression data

Format the index to only include the experiment id. This will be used to map to expression data and SRA labels later
###Code
# Format expression data indices so that values can be mapped to `sample_to_strain_table`
pao1_index_processed = pao1_expression.index.str.split(".").str[0]
pa14_index_processed = pa14_expression.index.str.split(".").str[0]
print(
f"No. of samples processed using PAO1 reference after filtering: {pao1_expression.shape}"
)
print(
f"No. of samples processed using PA14 reference after filtering: {pa14_expression.shape}"
)
pao1_expression.index = pao1_index_processed
pa14_expression.index = pa14_index_processed
pao1_expression.head()
pa14_expression.head()
# Save pre-binned expression data
pao1_expression.to_csv(paths.PAO1_PREBIN_COMPENDIUM, sep="\t")
pa14_expression.to_csv(paths.PA14_PREBIN_COMPENDIUM, sep="\t")
###Output
_____no_output_____
###Markdown
Bin samples as PAO1 or PA14
###Code
# Create accessory df
# accessory gene ids | median accessory expression | strain label
# PAO1
pao1_acc_expression = pao1_expression[pao1_acc]
pao1_acc_expression["median_acc_expression"] = pao1_acc_expression.median(axis=1)
# PA14
pa14_acc_expression = pa14_expression[pa14_acc]
pa14_acc_expression["median_acc_expression"] = pa14_acc_expression.median(axis=1)
pao1_acc_expression.head()
# Merge PAO1 and PA14 accessory dataframes
pao1_pa14_acc_expression = pao1_acc_expression.merge(
pa14_acc_expression,
left_index=True,
right_index=True,
suffixes=["_pao1", "_pa14"],
)
pao1_pa14_acc_expression.head()
# Find PAO1 samples
pao1_binned_ids = list(
pao1_pa14_acc_expression.query(
"median_acc_expression_pao1>@same_threshold & median_acc_expression_pa14<@opp_threshold"
).index
)
# Find PA14 samples
pa14_binned_ids = list(
pao1_pa14_acc_expression.query(
"median_acc_expression_pao1<@opp_threshold & median_acc_expression_pa14>@same_threshold"
).index
)
# Check that there are no samples that are binned as both PAO1 and PA14
shared_pao1_pa14_binned_ids = list(set(pao1_binned_ids).intersection(pa14_binned_ids))
assert len(shared_pao1_pa14_binned_ids) == 0
###Output
_____no_output_____
###Markdown
Format SRA annotations
###Code
# Since experiments have multiple runs there are duplicated experiment ids in the index
# We will need to remove these so that the count calculations are accurate
sample_to_strain_table_full_processed = sample_to_strain_table_full[
~sample_to_strain_table_full.index.duplicated(keep="first")
]
assert (
len(sample_to_strain_table_full.index.unique())
== sample_to_strain_table_full_processed.shape[0]
)
# Aggregate boolean labels into a single strain label
aggregated_label = []
for exp_id in list(sample_to_strain_table_full_processed.index):
if sample_to_strain_table_full_processed.loc[exp_id, "PAO1"].all() == True:
aggregated_label.append("PAO1")
elif sample_to_strain_table_full_processed.loc[exp_id, "PA14"].all() == True:
aggregated_label.append("PA14")
elif sample_to_strain_table_full_processed.loc[exp_id, "PAK"].all() == True:
aggregated_label.append("PAK")
elif (
sample_to_strain_table_full_processed.loc[exp_id, "ClinicalIsolate"].all()
== True
):
aggregated_label.append("Clinical Isolate")
else:
aggregated_label.append("NA")
sample_to_strain_table_full_processed["Strain type"] = aggregated_label
sample_to_strain_table = sample_to_strain_table_full_processed["Strain type"].to_frame()
sample_to_strain_table.head()
###Output
/home/alexandra/anaconda3/envs/core_acc/lib/python3.7/site-packages/ipykernel_launcher.py:18: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
###Markdown
Save pre-binned data with median accessory expression

This dataset will be used for Georgia's manuscript, which describes how we generated these compendia
###Code
# Select columns with median accessory expression
pao1_pa14_acc_expression_select = pao1_pa14_acc_expression[
["median_acc_expression_pao1", "median_acc_expression_pa14"]
]
pao1_pa14_acc_expression_select.head()
# Add SRA strain type
pao1_pa14_acc_expression_label = pao1_pa14_acc_expression_select.merge(
sample_to_strain_table, left_index=True, right_index=True
)
# Rename column
pao1_pa14_acc_expression_label = pao1_pa14_acc_expression_label.rename(
{"Strain type": "SRA label"}, axis=1
)
pao1_pa14_acc_expression_label.head()
# Add our binned label
pao1_pa14_acc_expression_label["Our label"] = "NA"
pao1_pa14_acc_expression_label.loc[pao1_binned_ids, "Our label"] = "PAO1-like"
pao1_pa14_acc_expression_label.loc[pa14_binned_ids, "Our label"] = "PA14-like"
pao1_pa14_acc_expression_label.head()
# Confirm dimensions
pao1_expression_prebin_filename = paths.PAO1_PREBIN_COMPENDIUM
pa14_expression_prebin_filename = paths.PA14_PREBIN_COMPENDIUM
pao1_expression_prebin = pd.read_csv(
pao1_expression_prebin_filename, sep="\t", index_col=0, header=0
)
pa14_expression_prebin = pd.read_csv(
pa14_expression_prebin_filename, sep="\t", index_col=0, header=0
)
# The two expression prebins exist because the same samples were mapped to 2 different references (a PAO1 and a PA14 reference).
# This assertion is to make sure that the number of samples is the same in both, which it is.
# This assertion is also testing that when we added information about our accessory gene expression
# and labels we retained the same number of samples, which we did.
assert (
pao1_expression_prebin.shape[0]
== pa14_expression_prebin.shape[0]
== pao1_pa14_acc_expression_label.shape[0]
)
# Save
pao1_pa14_acc_expression_label.to_csv(
"prebinned_compendia_acc_expression.tsv", sep="\t"
)
###Output
_____no_output_____
###Markdown
Create compendia

Create PAO1 and PA14 compendia
###Code
# Get expression data
# Note: reindexing needed here instead of .loc since samples from expression data
# were filtered out for low counts, but these samples still exist in log files
pao1_expression_binned = pao1_expression.loc[pao1_binned_ids]
pa14_expression_binned = pa14_expression.loc[pa14_binned_ids]
assert len(pao1_binned_ids) == pao1_expression_binned.shape[0]
assert len(pa14_binned_ids) == pa14_expression_binned.shape[0]
# Label samples with SRA annotations
# pao1_expression_label = pao1_expression_binned.join(
# sample_to_strain_table, how='left')
pao1_expression_label = pao1_expression_binned.merge(
sample_to_strain_table, left_index=True, right_index=True
)
pa14_expression_label = pa14_expression_binned.merge(
sample_to_strain_table, left_index=True, right_index=True
)
print(pao1_expression_label.shape)
pao1_expression_label.head()
print(pa14_expression_label.shape)
pa14_expression_label.head()
assert pao1_expression_binned.shape[0] == pao1_expression_label.shape[0]
assert pa14_expression_binned.shape[0] == pa14_expression_label.shape[0]
sample_to_strain_table["Strain type"].value_counts()
###Output
_____no_output_____
###Markdown
Looks like our binned compendium sizes are fairly close in number to what SRA annotates.

Quick comparison

Quick check comparing our binned labels with the SRA annotations
###Code
pao1_expression_label["Strain type"].value_counts()
###Output
_____no_output_____
###Markdown
**Manually check that these PA14 are mislabeled**
* Clinical ones can be removed by increasing the threshold
###Code
pa14_expression_label["Strain type"].value_counts()
###Output
_____no_output_____
###Markdown
Check

Manually look up the samples we binned as PAO1 but SRA labeled as PA14. Are these cases of samples being mislabeled?
###Code
pao1_expression_label[pao1_expression_label["Strain type"] == "PA14"]
###Output
_____no_output_____
###Markdown
Note: These are the 7 PA14-labeled samples using a threshold of 0.

Most samples appear to be mislabeled:
* SRX5099522: https://www.ncbi.nlm.nih.gov/sra/?term=SRX5099522
* SRX5099523: https://www.ncbi.nlm.nih.gov/sra/?term=SRX5099523
* SRX5099524: https://www.ncbi.nlm.nih.gov/sra/?term=SRX5099524
* SRX5290921: https://www.ncbi.nlm.nih.gov/sra/?term=SRX5290921
* SRX5290922: https://www.ncbi.nlm.nih.gov/sra/?term=SRX5290922

Two samples appear to be PA14 samples treated with antimicrobial manuka honey:
* SRX7423386: https://www.ncbi.nlm.nih.gov/sra/?term=SRX7423386
* SRX7423388: https://www.ncbi.nlm.nih.gov/sra/?term=SRX7423388
###Code
pa14_label_pao1_binned_ids = list(
pao1_expression_label[pao1_expression_label["Strain type"] == "PA14"].index
)
pao1_pa14_acc_expression.loc[
pa14_label_pao1_binned_ids,
["median_acc_expression_pao1", "median_acc_expression_pa14"],
]
# Save compendia with SRA label
pao1_expression_label.to_csv(paths.PAO1_COMPENDIUM_LABEL, sep="\t")
pa14_expression_label.to_csv(paths.PA14_COMPENDIUM_LABEL, sep="\t")
# Save compendia without SRA label
pao1_expression_binned.to_csv(paths.PAO1_COMPENDIUM, sep="\t")
pa14_expression_binned.to_csv(paths.PA14_COMPENDIUM, sep="\t")
# Save processed metadata table
sample_to_strain_table.to_csv(paths.SAMPLE_TO_STRAIN_PROCESSED, sep="\t")
###Output
_____no_output_____
notebooks/0.2.actual_data_tests.ipynb | ###Markdown
Too small 'places' data: BO, TN
Limited 'places' data: LT: 69 and EE: 252 (only large cities), HK: 21 (only districts), mixed distribution?
###Code
data_dir_path = os.environ['DATA_DIR']
tweets_files_format = 'tweets_2015_2018_{}.json.gz'
places_files_format = 'places_2015_2018_{}.json.gz'
ssh_domain = os.environ['IFISC_DOMAIN']
ssh_username = os.environ['IFISC_USERNAME']
country_codes = ('BO', 'CA', 'CH', 'EE', 'ES', 'FR', 'HK','ID', 'LT', 'LV',
'MY', 'PE', 'RO', 'SG', 'TN', 'UA')
latlon_proj = 'epsg:4326'
xy_proj = 'epsg:3857'
external_data_dir = '../data/external/'
fig_dir = '../reports/figures'
cc = 'CH'
###Output
_____no_output_____
###Markdown
Getting data
###Code
data_dir_path = os.environ['DATA_DIR']
tweets_files_format = 'tweets_{}_{}_{}.json.gz'
places_files_format = 'places_{}_{}_{}.json.gz'
ssh_domain = os.environ['IFISC_DOMAIN']
ssh_username = os.environ['IFISC_USERNAME']
project_data_dir = os.path.join('..', 'data')
external_data_dir = os.path.join(project_data_dir, 'external')
interim_data_dir = os.path.join(project_data_dir, 'interim')
processed_data_dir = os.path.join(project_data_dir, 'processed')
cell_data_path_format = os.path.join(processed_data_dir,
'{}_cell_data_cc={}_cell_size={}m.geojson')
latlon_proj = 'epsg:4326'
LANGS_DICT = dict([(lang[1],lang[0].lower().capitalize())
for lang in pycld2.LANGUAGES])
cc= 'SG'
region = None
# region = 'Cataluña'
with open(os.path.join(external_data_dir, 'countries.json')) as f:
countries_study_data = json.load(f)
if region:
area_dict = countries_study_data[cc]['regions'][region]
else:
area_dict = countries_study_data[cc]
fig_dir = os.path.join('..', 'reports', 'figures', cc)
if not os.path.exists(fig_dir):
os.makedirs(os.path.join(fig_dir, 'counts'))
os.makedirs(os.path.join(fig_dir, 'prop'))
xy_proj = area_dict['xy_proj']
cc_timezone = area_dict['timezone']
plot_langs_list = area_dict['local_langs']
min_poly_area = area_dict.get('min_poly_area') or 0.1
max_place_area = area_dict.get('max_place_area') or 1e9 # linked to cell size and places data
valid_uids_path = os.path.join(interim_data_dir, f'valid_uids_{cc}.csv')
tweets_file_path = os.path.join(data_dir_path, tweets_files_format.format(cc))
chunk_size = 100000
raw_tweets_df_generator = data_access.yield_json(tweets_file_path,
ssh_domain=ssh_domain, ssh_username=ssh_username, chunk_size=chunk_size, compression='gzip')
for i,raw_tweets_df in enumerate(raw_tweets_df_generator):
break
raw_tweets_df_generator.close()
ratio_coords = len(raw_tweets_df.loc[raw_tweets_df['coordinates'].notnull()]) / chunk_size
print('{:.1%} of tweets have exact coordinates data'.format(ratio_coords))
nr_users = len(raw_tweets_df['uid'].unique())
print('There are {} distinct users in the dataset'.format(nr_users))
raw_tweets_df.head()
places_file_path = os.path.join(data_dir_path, places_files_format.format(cc))
shapefile_name = 'CNTR_RG_01M_2016_4326.shp'
shapefile_path = os.path.join(external_data_dir, shapefile_name, shapefile_name)
shape_df = geopd.read_file(shapefile_path)
shape_df = shape_df.loc[shape_df['FID'] == cc]
raw_places_df = data_access.return_json(places_file_path,
ssh_domain=ssh_domain, ssh_username=ssh_username, compression='gzip')
raw_places_df.head()
###Output
_____no_output_____
###Markdown
Get the most frequent, small-enough place: if it is the most frequent -> select it; if it is within a more frequent, bigger place -> select it. If there is no small-enough place, discard the user.
###Code
print(raw_tweets_df.info())
###Output
_____no_output_____
###Markdown
The "I'm at \" from Foursquare are also there, and they all have 'source' = Foursquare. Tweetbot is an app for regular users, it's not related to bot users.
###Code
tweets_df = raw_tweets_df[['text', 'id', 'lang', 'place_id', 'coordinates', 'uid', 'created_at']]
tweets_df = tweets_df.rename(columns={'lang': 'twitter_lang'})
null_reply_id = 'e39d05b72f25767869d44391919434896bb055772d7969f74472032b03bc18418911f3b0e6dd47ff8f3b2323728225286c3cb36914d28dc7db40bdd786159c0a'
raw_tweets_df.loc[raw_tweets_df['in_reply_to_status_id'] == null_reply_id,
['in_reply_to_status_id', 'in_reply_to_screen_name', 'in_reply_to_user_id']] = None
tweets_df['source'] = raw_tweets_df['source'].str.extract(r'>(.+)</a>', expand=False)
tweets_df['source'].value_counts().head(20)
a = raw_tweets_df[raw_tweets_df['source'].str.contains('tweetmyjobs')]
a = (a.drop(columns=['in_reply_to_status_id', 'id', 'source',
'in_reply_to_screen_name', 'in_reply_to_user_id', 'quoted_status_id'])
.sort_values(by=['uid', 'created_at']))
pd.set_option("display.max_rows", None)
a[a['uid'] == '066669353196d994d624138aa1ef4aafd892ed8e1e6e65532a39ecc7e6129b829bdbf8ea2b53b11f93a74cb7d1a3e1aa537d0c060be02778b37550d70a77a80d']
###Output
_____no_output_____
###Markdown
First tests on single df
###Code
ref_year = 2015
nr_consec_months = 3
tweets_file_path = os.path.join(data_dir_path, tweets_files_format.format(cc))
raw_tweets_df_generator = data_access.yield_json(tweets_file_path,
ssh_domain=ssh_domain, ssh_username=ssh_username, chunk_size=1000000, compression='gzip')
agg_tweeted_months_users = pd.DataFrame([], columns=['uid', 'month', 'count'])
tweets_df_list = []
for raw_tweets_df in raw_tweets_df_generator:
tweets_df_list.append(raw_tweets_df)
agg_tweeted_months_users = ufilters.inc_months_activity(
agg_tweeted_months_users, raw_tweets_df)
raw_tweets_df_generator.close()
local_uid_series = ufilters.consec_months(agg_tweeted_months_users)
ref_year = 2015
nr_consec_months = 3
tweeted_months_users = pd.DataFrame([], columns=['uid', 'month', 'count'])
tweeted_months_users = ufilters.inc_months_activity(
tweeted_months_users, tweets_df)
local_uid_series = ufilters.consec_months(tweeted_months_users)
raw_tweets_df['lang'].value_counts().head(10)
raw_tweets_df.join(local_uid_series, on='uid', how='inner')['lang'].value_counts().head(10)
tweets_file_path = os.path.join(data_dir_path, tweets_files_format.format(cc))
raw_tweets_df_generator = data_access.yield_json(tweets_file_path,
ssh_domain=ssh_domain, ssh_username=ssh_username, chunk_size=1000000, compression='gzip')
for raw_tweets_df in raw_tweets_df_generator:
filtered_tweets_df = pd.DataFrame(local_uid_series)
###Output
_____no_output_____
###Markdown
Language detection

Detected languages

- Languages possibly detected by CLD:
###Code
lang_with_code = dict(pycld2.LANGUAGES)
detected_lang_with_code = [(lang, lang_with_code[lang]) for lang in pycld2.DETECTED_LANGUAGES]
print(detected_lang_with_code)
###Output
_____no_output_____
###Markdown
- Languages possibly detected by Twitter (see 'lang' in https://support.gnip.com/apis/powertrack2.0/rules.html#Operators): Amharic - am, Arabic - ar, Armenian - hy, Bengali - bn, Bulgarian - bg, Burmese - my, Chinese - zh, Czech - cs, Danish - da, Dutch - nl, English - en, Estonian - et, Finnish - fi, French - fr, Georgian - ka, German - de, Greek - el, Gujarati - gu, Haitian - ht, Hebrew - iw, Hindi - hi, Hungarian - hu, Icelandic - is, Indonesian - in, Italian - it, Japanese - ja, Kannada - kn, Khmer - km, Korean - ko, Lao - lo, Latvian - lv, Lithuanian - lt, Malayalam - ml, Maldivian - dv, Marathi - mr, Nepali - ne, Norwegian - no, Oriya - or, Panjabi - pa, Pashto - ps, Persian - fa, Polish - pl, Portuguese - pt, Romanian - ro, Russian - ru, Serbian - sr, Sindhi - sd, Sinhala - si, Slovak - sk, Slovenian - sl, Sorani Kurdish - ckb, Spanish - es, Swedish - sv, Tagalog - tl, Tamil - ta, Telugu - te, Thai - th, Tibetan - bo, Turkish - tr, Ukrainian - uk, Urdu - ur, Uyghur - ug, Vietnamese - vi, Welsh - cy
###Code
tweets_lang_df = text_process.lang_detect(tweets_df, text_col='text', min_nr_words=4, cld='pycld2')
tweets_lang_df.head()
cld_langs = tweets_lang_df['cld_lang'].unique()
cld_langs.sort()
print('Languages detected by cld: {}'.format(cld_langs))
twitter_langs = tweets_lang_df['twitter_lang'].unique()
twitter_langs.sort()
print('Languages detected by twitter: {}'.format(twitter_langs))
tweets_lang_df['twitter_lang'].value_counts().head(10)
tweets_lang_df['cld_lang'].value_counts().head(10)
###Output
_____no_output_____
###Markdown
French case: Corsican is unreliably detected by CLD for French tweets; however, detection seems pretty accurate when twitter_lang='it'.

Mandarin (zh) is not detected well by CLD: in an example run on a chunk, 5300 tweets in Mandarin were detected by Twitter, and only 2300 by CLD. However, there are also a good number of false positives from Twitter (looking roughly at the data by hand). There notably seems to be a problem with repeated logograms: just having "haha" messes with the whole detection.

Multilingual users
###Code
groupby_user_lang = tweets_lang_df.loc[tweets_lang_df['twitter_lang'] != 'und'].groupby(['uid', 'twitter_lang'])
count_tweets_by_user_lang = groupby_user_lang.size()
count_langs_by_user_df = count_tweets_by_user_lang.groupby('uid').transform('size')
multiling_users_df = count_tweets_by_user_lang.loc[count_langs_by_user_df > 1]
pd.DataFrame(multiling_users_df)
pd.set_option("display.max_rows", 100)
multiling_users_list = [x[0] for x in multiling_users_df.index.values]
tweets_lang_df[tweets_lang_df['uid'].isin(multiling_users_list)].sort_values(by=['uid', 'cld_lang'])[
['uid', 'filtered_text', 'cld_lang', 'twitter_lang', 'created_at']]
###Output
_____no_output_____
###Markdown
Places into geodf and join on tweets

Calculate the area to discard bounding boxes which are too large? Problem: we need to project first, which is expensive.
###Code
tweets_to_loc_df = tweets_lang_df.loc[tweets_lang_df['coordinates'].isnull()]
crs = {'init': latlon_proj}
places_df = raw_places_df[['id', 'bounding_box', 'name', 'place_type']]
geometry = places_df['bounding_box'].apply(lambda x: Polygon(x['coordinates'][0]))
places_geodf = geopd.GeoDataFrame(places_df, crs=crs, geometry=geometry)
places_geodf = places_geodf.set_index('id')
places_geodf = places_geodf.drop(columns=['bounding_box'])
places_geodf['area'] = places_geodf.geometry.to_crs(xy_proj).area
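# Added note: epsg:3857 (Web Mercator) inflates areas away from the equator, so
# these values are only meaningful for relative size filtering within a region;
# an equal-area projection (e.g. epsg:6933, an assumption here) would be needed
# for true areas.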
tweets_final_df = tweets_to_loc_df.join(places_geodf, on='place_id', how='left')
tweets_final_df.head(10)
###Output
_____no_output_____
###Markdown
Corsican?
###Code
tweets_final_df.loc[(tweets_final_df['cld_lang'] =='co') & (tweets_final_df['twitter_lang'] =='it')]
###Output
_____no_output_____
###Markdown
CLD is sensitive to letter repetitions made to insist: we can put a threshold - if more than 3 consecutive occurrences of the same letter, bring it down to 2 - which seems to improve prediction on an example.

Usually Twitter's prediction seems better...
###Code
tweets_final_df[tweets_final_df['cld_lang'] != tweets_final_df['twitter_lang']].drop(columns=['id'])
###Output
_____no_output_____
###Markdown
Swiss German?
###Code
zurich_id = places_geodf.loc[places_geodf['name']=='Zurich', 'geometry'].index[0]
# places_in_zurich = places_geodf
places_in_zurich = places_geodf.loc[places_geodf.within(places_geodf.loc[zurich_id, 'geometry'])]
places_in_zurich
tweets_in_zurich = tweets_final_df.join(places_in_zurich, on='place_id', rsuffix='_place')
print(tweets_in_zurich['cld_lang'].value_counts().head())
print(tweets_in_zurich['twitter_lang'].value_counts().head())
tweets_in_zurich.loc[(tweets_in_zurich['cld_lang']=='un') & (tweets_in_zurich['twitter_lang']=='de'),
'filtered_text']
###Output
_____no_output_____
###Markdown
Mostly mixed-language tweets that Twitter did not detect, it seems:
###Code
tweets_in_zurich.loc[tweets_in_zurich['twitter_lang']=='und',
'filtered_text']
###Output
_____no_output_____
###Markdown
groupbys and stuff
###Code
def get_mean_time(df, dt_col):
t_series_in_sec_of_day = df['hour']*3600 + df['minute']*60 + df['second']
return pd.to_timedelta(int(t_series_in_sec_of_day.mean()), unit='s')
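# Added note: the dt_col argument above is unused, and a plain mean of
# second-of-day is biased for activity that wraps around midnight; a circular
# mean (averaging angles on [0, 2*pi)) would be the robust alternative if that
# matters for the analysis.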
tweets_df = raw_tweets_df.copy()
# Speeds up the process to extract the hour, min and sec first
tweets_df['hour'] = tweets_df['created_at'].dt.hour
tweets_df['minute'] = tweets_df['created_at'].dt.minute
tweets_df['second'] = tweets_df['created_at'].dt.second
groupby_user_place = tweets_df.groupby(['uid', 'place_id'])
count_tweets_by_user_place = groupby_user_place.size()
count_tweets_by_user_place.rename('count', inplace=True)
mean_time_by_user_place = groupby_user_place.apply(lambda df: get_mean_time(df, 'created_at'))
mean_time_by_user_place.rename('avg time', inplace=True)
# transform to keep same size, so as to be able to have a matching boolean Series of same size as
# original df to select users with more than one place for example:
count_places_by_user_df = count_tweets_by_user_place.groupby('uid').transform('size')
agg_data_df = pd.concat([count_tweets_by_user_place, mean_time_by_user_place], axis=1)
count_tweets_by_user_place_geodf = agg_data_df.join(places_geodf, on='place_id')
count_tweets_by_user_place_geodf.head()
cProfile.run("groupby_user_place.apply(lambda df: get_mean_time(df, 'created_at'))")
count_tweets_by_user_place_geodf.loc[count_places_by_user_df > 1]
###Output
_____no_output_____
###Markdown
Add new chunk to cumulative data:
###Code
count_tweets_by_user_place_geodf = count_tweets_by_user_place_geodf.join(
count_tweets_by_user_place_geodf['count'],
on=['uid', 'place_id'], how='outer', rsuffix='_new')
count_tweets_by_user_place_geodf['count'] += count_tweets_by_user_place_geodf['count_new']
count_tweets_by_user_place_geodf.drop(columns=['count_new'], inplace=True)
count_tweets_by_user_place_geodf
###Output
_____no_output_____ |
chapter/machine_learning/ensemble.ipynb | ###Markdown
With the exception of the random forest, we have so far considered machine learning models as stand-alone entities. Combinations of models that jointly produce a classification are known as *ensembles*. There are two main methodologies that create ensembles: *bagging* and *boosting*.

Bagging

Bagging refers to bootstrap aggregating, where bootstrap here is the same as we discussed in the section [ch:stats:sec:boot](ch:stats:sec:boot). Basically, we resample the data with replacement and then train a classifier on the newly sampled data. Then, we combine the outputs of each of the individual classifiers using a majority-voting scheme (for discrete outputs) or a weighted average (for continuous outputs). This combination is particularly effective for models that are easily influenced by a single data element. The resampling process means that these elements cannot appear in every bootstrapped training set, so that some of the models will not suffer these effects. This makes the so-computed combination of outputs less volatile. Thus, bagging helps reduce the collective variance of individual high-variance models.

To get a sense of bagging, let's suppose we have a two-dimensional plane that is partitioned into two regions with the following boundary: $y=-x+x^2$. Pairs of $(x_i,y_i)$ points above this boundary are labeled one and points below are labeled zero. [Figure](fig:ensemble_001) shows the two regions with the nonlinear separating boundary as the black curved line.

*Figure: Two regions in the plane are separated by a nonlinear boundary. The training data is sampled from this plane. The objective is to correctly classify the so-sampled data.*

The problem is to take samples from each of these regions and classify them correctly using a perceptron (see the section [ch:ml:sec:perceptron](ch:ml:sec:perceptron)). A perceptron is the simplest possible linear classifier that finds a line in the plane to separate two purported categories. Because the separating boundary is nonlinear, there is no way that the perceptron can completely solve this problem. The following code sets up the perceptron available in Scikit-learn.
###Code
from sklearn.linear_model import Perceptron
p=Perceptron()
p
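# Added sketch (not in the original): sample training data from the y = -x + x**2
# boundary described above and fit the perceptron to it.
import numpy as np
rng = np.random.RandomState(0)
Xs = rng.uniform(-1, 2, size=(500, 2))                  # points in the plane
ys = (Xs[:, 1] > -Xs[:, 0] + Xs[:, 0]**2).astype(int)   # one above, zero below
p.fit(Xs, ys)
print('in-sample accuracy: {:.3f}'.format(p.score(Xs, ys)))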
###Output
_____no_output_____
###Markdown
The training data and the resulting perceptron separating boundary are shown in [Figure](fig:ensemble_002). The circles and crosses are the sampled training data and the gray separating line is the perceptron's separating boundary between the two categories. The black squares are those elements in the training data that the perceptron mis-classified. Because the perceptron can only produce linear separating boundaries, and the boundary in this case is non-linear, the perceptron makes mistakes near where the boundary curves.

*Figure: The perceptron finds the best linear boundary between the two classes.*

The next step is to see how bagging can improve upon this by using multiple perceptrons. The following code sets up the bagging classifier in Scikit-learn. Here we select only three perceptrons. [Figure](fig:ensemble_003) shows each of the three individual classifiers and the final bagged classifier in the panel on the bottom right. As before, the black circles indicate misclassifications in the training data. Joint classifications are determined by majority voting.
###Code
from sklearn.ensemble import BaggingClassifier
bp = BaggingClassifier(Perceptron(),max_samples=0.50,n_estimators=3)
bp
###Output
_____no_output_____
###Markdown
*Figure: Each panel with the single gray line is one of the perceptrons used for the ensemble bagging classifier on the lower right.*

The `BaggingClassifier` can estimate its own out-of-sample error if passed the `oob_score=True` flag upon construction. This keeps track of which samples were used for training and which were not, and then estimates the out-of-sample error using those samples that were unused in training. The `max_samples` keyword argument specifies the number of items from the training set to use for the base classifier. The smaller the `max_samples` used in the bagging classifier, the better the out-of-sample error estimate, but at the cost of worse in-sample performance. Of course, this depends on the overall number of samples and the degrees-of-freedom in each individual classifier. The VC-dimension surfaces again!

Boosting

As we discussed, bagging is particularly effective for individual high-variance classifiers because the final majority-vote tends to smooth out the individual classifiers and produce a more stable collaborative solution. On the other hand, boosting is particularly effective for high-bias classifiers that are slow to adjust to new data. On the one hand, boosting is similar to bagging in that it uses a majority-voting (or averaging for numeric prediction) process at the end; and it also combines individual classifiers of the same type. On the other hand, boosting is serially iterative, whereas the individual classifiers in bagging can be trained in parallel. Boosting uses the misclassifications of prior iterations to influence the training of the next iterative classifier by weighting those misclassifications more heavily in subsequent steps. This means that, at every step, boosting focuses more and more on specific misclassifications up to that point, letting the prior classifications be carried by earlier iterations.

The primary implementation for boosting in Scikit-learn is the Adaptive Boosting (*AdaBoost*) algorithm, which does classification (`AdaBoostClassifier`) and regression (`AdaBoostRegressor`). The first step in the basic AdaBoost algorithm is to initialize the weights over each of the training set indices, $D_0(i)=1/n$ where there are $n$ elements in the training set. Note that this creates a discrete uniform distribution over the *indices*, not over the training data $\lbrace (x_i,y_i) \rbrace$ itself. In other words, if there are repeated elements in the training data, then each gets its own weight. The next step is to train the base classifier $h_k$ and record the classification error at the $k^{th}$ iteration, $\epsilon_k$. Two factors can next be calculated using $\epsilon_k$,

$$\alpha_k = \frac{1}{2}\log \frac{1-\epsilon_k}{\epsilon_k}$$

and the normalization factor,

$$Z_k = 2 \sqrt{ \epsilon_k (1- \epsilon_k) }$$

For the next step, the weights over the training data are updated as in the following,

$$D_{k+1}(i) = \frac{1}{Z_k} D_k(i)\exp{(-\alpha_k y_i h_k(x_i))}$$

The final classification result is assembled using the $\alpha_k$ factors, $g = \operatorname{sgn}(\sum_{k} \alpha_k h_k)$.

To re-do the problem above using boosting with perceptrons, we set up the AdaBoost classifier in the following,
###Code
from sklearn.ensemble import AdaBoostClassifier
clf=AdaBoostClassifier(Perceptron(),n_estimators=3,
algorithm='SAMME',
learning_rate=0.5)
clf
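# Added numeric illustration (illustrative values only) of the AdaBoost factors
# defined above: alpha_k and Z_k for a stage error rate of 0.3.
import numpy as np
eps_k = 0.3
alpha_k = 0.5 * np.log((1 - eps_k) / eps_k)   # classifier weight
Z_k = 2 * np.sqrt(eps_k * (1 - eps_k))        # normalization factor
print('alpha_k = {:.3f}, Z_k = {:.3f}'.format(alpha_k, Z_k))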
###Output
_____no_output_____ |
Unsupervised-Learning/independent_component_analysis.ipynb | ###Markdown
Independent Component Analysis Lab

In this notebook, we'll use Independent Component Analysis to retrieve original signals from three observations, each of which contains a different mix of the original signals. This is the same problem explained in the ICA video.

Dataset

Let's begin by looking at the dataset we have. We have three WAVE files, each of which is a mix, as we've mentioned. If you haven't worked with audio files in Python before, that's okay; they basically boil down to being lists of floats.

Let's begin by loading our first audio file, **[ICA mix 1.wav](ICA mix 1.wav)** [click to listen to the file]:
###Code
import numpy as np
import wave
# Read the wave file
mix_1_wave = wave.open('ICA mix 1.wav','r')
###Output
_____no_output_____
###Markdown
Let's peek at the parameters of the wave file to learn more about it
###Code
mix_1_wave.getparams()
###Output
_____no_output_____
###Markdown
So this file has only one channel (so it's mono sound). It has a frame rate of 44100, which means each second of sound is represented by 44100 integers (integers because the file is in the common PCM 16-bit format). The file has a total of 264515 integers/frames, which means its length in seconds is:
###Code
264515/44100
###Output
_____no_output_____
###Markdown
Let's extract the frames of the wave file, which will be a part of the dataset we'll run ICA against:
###Code
# Extract Raw Audio from Wav File
signal_1_raw = mix_1_wave.readframes(-1)
signal_1 = np.fromstring(signal_1_raw, 'Int16')
###Output
/Users/cmertens/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: DeprecationWarning: Numeric-style type codes are deprecated and will result in an error in the future.
This is separate from the ipykernel package so we can avoid doing imports until
/Users/cmertens/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
signal_1 is now a list of ints representing the sound contained in the first file.
###Code
'length: ', len(signal_1) , 'first 100 elements: ',signal_1[:100]
###Output
_____no_output_____
###Markdown
If we plot this array as a line graph, we'll get the familiar wave form representation:
###Code
import matplotlib.pyplot as plt
fs = mix_1_wave.getframerate()
timing = np.linspace(0, len(signal_1)/fs, num=len(signal_1))
plt.figure(figsize=(12,2))
plt.title('Recording 1')
plt.plot(timing,signal_1, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
###Output
_____no_output_____
###Markdown
In the same way, we can now load the other two wave files, **[ICA mix 2.wav](ICA mix 2.wav)** and **[ICA mix 3.wav](ICA mix 3.wav)**
###Code
mix_2_wave = wave.open('ICA mix 2.wav','r')
#Extract Raw Audio from Wav File
signal_raw_2 = mix_2_wave.readframes(-1)
signal_2 = np.fromstring(signal_raw_2, 'Int16')
mix_3_wave = wave.open('ICA mix 3.wav','r')
#Extract Raw Audio from Wav File
signal_raw_3 = mix_3_wave.readframes(-1)
signal_3 = np.fromstring(signal_raw_3, 'Int16')
plt.figure(figsize=(12,2))
plt.title('Recording 2')
plt.plot(timing,signal_2, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
plt.figure(figsize=(12,2))
plt.title('Recording 3')
plt.plot(timing,signal_3, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
###Output
/Users/cmertens/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:6: DeprecationWarning: Numeric-style type codes are deprecated and will result in an error in the future.
/Users/cmertens/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:6: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
/Users/cmertens/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:13: DeprecationWarning: Numeric-style type codes are deprecated and will result in an error in the future.
del sys.path[0]
/Users/cmertens/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:13: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
del sys.path[0]
###Markdown
Now that we've read all three files, we're ready to [zip](https://docs.python.org/3/library/functions.html#zip) them to create our dataset.
* Create dataset ```X``` by zipping signal_1, signal_2, and signal_3 into a single list
###Code
X = list(zip(signal_1, signal_2, signal_3))
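# Added note: an equivalent, typically faster NumPy construction of the
# (n_samples, 3) observation matrix would be:
# X = np.column_stack((signal_1, signal_2, signal_3))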
# Let's peek at what X looks like
X[:10]
###Output
_____no_output_____
###Markdown
We are now ready to run ICA to try to retrieve the original signals.
* Import sklearn's [FastICA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FastICA.html) module
* Initialize FastICA to look for three components
* Run the FastICA algorithm using fit_transform on dataset X
###Code
# DONE: Import FastICA
from sklearn.decomposition import FastICA
# DONE: Initialize FastICA with n_components=3
ica = FastICA(n_components=3)
# DONE: Run the FastICA algorithm using fit_transform on dataset X
ica_result = ica.fit_transform(X)
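# Added note (about the defaults): FastICA starts from a random state, so the
# order and sign of the recovered components can differ between runs; pass
# random_state (e.g. FastICA(n_components=3, random_state=0)) for reproducibility.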
###Output
_____no_output_____
###Markdown
```ica_result``` now contains the result of FastICA, which we hope are the original signals. It's in the shape:
###Code
ica_result.shape
###Output
_____no_output_____
###Markdown
Let's split into separate signals and look at them
###Code
result_signal_1 = ica_result[:,0]
result_signal_2 = ica_result[:,1]
result_signal_3 = ica_result[:,2]
###Output
_____no_output_____
###Markdown
Let's plot to see how the wave forms look
###Code
# Plot Independent Component #1
plt.figure(figsize=(12,2))
plt.title('Independent Component #1')
plt.plot(result_signal_1, c="#df8efd")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #2
plt.figure(figsize=(12,2))
plt.title('Independent Component #2')
plt.plot(result_signal_2, c="#87de72")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #3
plt.figure(figsize=(12,2))
plt.title('Independent Component #3')
plt.plot(result_signal_3, c="#f65e97")
plt.ylim(-0.010, 0.010)
plt.show()
###Output
_____no_output_____
###Markdown
Do some of these look like musical wave forms? The best way to confirm the result is to listen to the resulting files. So let's save them as wave files and verify. But before we do that, we'll have to:
* convert them to integers (so we can save as PCM 16-bit wave files), otherwise only some media players would be able to play them and others won't
* map the values to the appropriate range for int16 audio. That range is between -32768 and +32767. A basic mapping can be done by multiplying by 32767.
* the sounds will be a little faint; we can increase the volume by multiplying by a value like 100
###Code
from scipy.io import wavfile
# Convert to int, map the appropriate range, and increase the volume a little bit
result_signal_1_int = np.int16(result_signal_1*32767*100)
result_signal_2_int = np.int16(result_signal_2*32767*100)
result_signal_3_int = np.int16(result_signal_3*32767*100)
# Write wave files
wavfile.write("result_signal_1.wav", fs, result_signal_1_int)
wavfile.write("result_signal_2.wav", fs, result_signal_2_int)
wavfile.write("result_signal_3.wav", fs, result_signal_3_int)
###Output
_____no_output_____ |
OOP_Concepts.ipynb | ###Markdown
Python Classes and Objects
###Code
class MyClass:
pass
class OOP1_2:
x=5
print(x)
#creating objects
class OOP1_2:
def __init__(self,name,age):
self.name = name
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Hanns", 19)
print(person.name)
print(person.age)
print(person.identity)
#modifying object name
person.name = "Jaspher"
print(person.name)
print(person.age)
person.age = 246254
print(person.name)
print(person.age)
#deleting objects
del person.name
print(person.age)
# print(person.name) would now raise AttributeError: the attribute was deleted
###Output
246254
###Markdown
Application 1 - Write a Python program which computes the area of a square; name its class Square, with sides as an attribute.
###Code
class Square:
def __init__(self, sides):
self.sides = sides
def area(self):
return self.sides*self.sides
def display(self):
print("The area is: ", self.area())
square = Square(4)
square.display()
square.area()
###Output
The area is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, and school. Create a class named MyClass, with name, age, course, and school as attributes.
###Code
class MyClass:
def __init__(self, flname, age, course, school):
self.flname = flname
self.age = age
self.course = course
self.school = school
def me(self):
print(self.flname + "\n" + str(self.age) + "\n" + self.course + "\n" + self.school)
info = MyClass ("Hanns Jaspher A. Elalto", 19, "BSCpE", "CvSU")
info.me()
###Output
Hanns Jaspher A. Elalto
19
BSCpE
CvSU
###Markdown
Python Classes and Objects

Create a Class
###Code
class ClassName:
pass
class OOP1_2:
x=5
print(x)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age):
self.name = name #Attributes
self.age = age
def identity(self):
print(self.name, self.age)
person=OOP1_2("MJ",18) #Create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify Object Name
person.name = "Name"
print(person.name)
print(person.age)
person.age = 20
print(person.name)
print(person.age)
#Delete Object
del person.name
print(person.age)
# print(person.name) would now raise AttributeError: the attribute was deleted
###Output
20
###Markdown
Application 1 - Write a Python program that computes the area of a square; name its class Square, with sides as attributes
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides**2
def display(self):
print("The area of the sqaure is",self.area(),":))")
square=Square(12)
square.display()
###Output
The area of the square is 144 :))
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass and name, age, course, and school as attributes.
###Code
class MyClass:
def __init__(self, name, age, course, school):
self.name = name
self.age = age
self.course = course
self.school = school
def disp(self):
print("My Name is",self.name)
print("I am",self.age,"years old")
print("My course is",self.course)
print("I study in",self.school)
credentials=MyClass("Mark Jeremin C. Poblete",18,"Bachelor of Science in Computer Engineering", "Cavite State University - Main")
credentials.disp()
###Output
My Name is Mark Jeremin C. Poblete
I am 18 years old
My course is Bachelor of Science in Computer Engineering
I study in Cavite State University - Main
###Markdown
Python Classes and Objects Create a class
###Code
class MyClass:
pass
class OOP1_2:
X=5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name=name #attributes
self.age=age
def identity(self):
print(self.name, self.age)
person=OOP1_2("Billy",19) #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object name
person.name= "Gilson"
print(person.name)
print(person.age)
person.age=(20)
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.age)
print(person.name)
###Output
20
###Markdown
Application 1 - Write a Python Program that computes the area of a square, and name its class as Square; side as attributes
###Code
class Square:
def __init__(self,sides):
self.sides=sides
def area(self):
return self.sides*self.sides
def display(self):
print("The are of the square is:",self.area())
square=Square(4)
print(square.sides)
square.display()
###Output
4
The area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, course, school as attributes
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name=name
self.age=age
self.course=course
self.school=school
def identity(self):
print(self.name,self.age,self.course,self.school)
person=MyClass("Billy",19,"BSCpE","Cavite State University - Main Campus")
print(person.name)
print(person.age)
print(person.course)
print(person.school)
print(person.identity)
###Output
Billy
19
BSCpE
Cavite State University - Main Campus
<bound method MyClass.identity of <__main__.MyClass object at 0x7fec19ea1290>>
###Markdown
Classes with Multiple Objects
###Code
class Birds:
def __init__(self,bird_name):
self.bird_name = bird_name
def flying_birds(self):
print(f"{self.bird_name} flies above the sky")
def non_flying_birds(self):
print(f"{self.bird_name} is the national bird of the Philippines")
vulture = Birds("Griffon Vulture")
crane = Birds("Common Crane")
emu = Birds("Emu")
vulture.flying_birds()
crane.flying_birds()
emu.non_flying_birds()
###Output
Griffon Vulture flies above the sky
Common Crane flies above the sky
Emu is the national bird of the Philippines
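###Markdown
Each call to `Birds(...)` above builds an independent object carrying its own `bird_name`. A small sketch (not part of the original exercise) keeping several such objects in a list:
###Code
flock = [Birds("Griffon Vulture"), Birds("Common Crane")]
for bird in flock:
    bird.flying_birds()   # same method, each object uses its own bird_name
###Output
_____no_output_____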
###Markdown
Encapsulation using mangling with double underscores
###Code
class foo:
def __init__(self,a,b):
self.__a = a
self.__b = b
def add(self):
return self.__a +self.__b #Private attributes
number = foo(3,4)
number.add()
number.a = 7 # creates a new public attribute 'a'; the mangled __a stays 3, so add() still returns 7
number.add()
###Output
_____no_output_____
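###Markdown
Why does `add()` still return 7? Python mangles `__a` inside class `foo` to `_foo__a`, so `number.a = 7` only creates a brand-new public attribute. A short sketch (the `_foo__a` access is shown for illustration only):
###Code
number = foo(3,4)
number.a = 7              # new attribute 'a'; the mangled __a is untouched
print(number.add())       # still 7, since __a is 3 and __b is 4
print(number._foo__a)     # the name Python actually stores: 3
###Output
_____no_output_____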
###Markdown
Encapsulation with Private Attributes
###Code
class Counter:
def __init__(self):
self.__current = 0
def increment(self):
self.__current +=1
def value(self):
return self.__current
def reset(self):
self.__current = 0
num = Counter()
num.increment() #counter = counter + 1
num.increment()
num.increment()
num.counter = 1 # new public attribute; the private __current is unchanged
num.value()
###Output
_____no_output_____
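###Markdown
Similarly, `num.counter = 1` above only attaches a new public attribute; the private `__current` keeps its incremented value, so `value()` returns 3. A quick sketch:
###Code
num = Counter()
num.increment()
num.increment()
num.increment()
num.counter = 1        # unrelated public attribute
print(num.value())     # 3 -- __current was incremented three times
num.reset()
print(num.value())     # 0
###Output
_____no_output_____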
###Markdown
Inheritance
###Code
class Person:
def __init__(self, firstname,surname):
self.firstname = firstname
self.surname = surname
def printname(self):
print(self.firstname,self.surname)
person = Person("Ana", "Santos")
person.printname()
class Teacher(Person):
pass
person2 = Teacher("Maria", "Sayo")
person2.printname()
class Student(Person):
pass
person3 = Student("Jhoriz", "Aquino")
person3.printname()
###Output
Ana Santos
Maria Sayo
Jhoriz Aquino
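###Markdown
A child class can also extend the parent with its own attributes and override inherited methods. A minimal sketch (the `section` attribute is an invented example, not part of the original):
###Code
class SectionStudent(Person):
    def __init__(self, firstname, surname, section):
        super().__init__(firstname, surname)   # reuse the parent initializer
        self.section = section
    def printname(self):                       # override the inherited method
        print(self.firstname, self.surname, "-", self.section)

person4 = SectionStudent("Ana", "Santos", "OOP1_2")
person4.printname()
###Output
_____no_output_____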
###Markdown
Polymorphism
###Code
class RegularPolygon:
def __init__(self,side):
self.side = side
class Square(RegularPolygon):
def area(self):
return self.side * self.side
class EquilateralTriangle(RegularPolygon):
def area(self):
return self.side * self.side * 0.433
object = Square(4)
print(object.area())
object2 = EquilateralTriangle(3)
print(object2.area())
###Output
16
3.897
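###Markdown
The point of polymorphism is that both shapes answer the same `area()` call with their own formula (0.433 approximates sqrt(3)/4, the exact coefficient for an equilateral triangle's area). A sketch iterating over a mixed list:
###Code
shapes = [Square(4), EquilateralTriangle(3)]
for shape in shapes:
    print(type(shape).__name__, shape.area())  # same call, different formula
###Output
_____no_output_____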
###Markdown
Python Classes and Objects
###Code
class MyClass:
pass
class OOP1_2:
X=5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Alliyah", 19) #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Francine"
print(person.name)
print(person.age)
person.age = 20
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.age)
print(person.name)
###Output
20
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attributes
###Code
class Square:
def __init__(self,sides):
self.sides=sides
def area(self):
return self.sides*self.sides
def display(self):
print("the area of the square is:", self.area())
square=Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named Myclass, and name, age, school, course as attributes
###Code
class Myclass:
def __init__(self, name, age, school, course):
self.name=name
self.age=age
self.school=school
self.course=course
def identity(self):
print(self.name, self.age, self.school, self.course)
person=Myclass("Alliyah", 19,"CVSU MAIN", "BSCPE")
print(person.name)
print(person.age)
print(person.school)
print(person.course)
###Output
Alliyah
19
CVSU MAIN
BSCPE
###Markdown
Python Classes and Objects Create a class
###Code
class MyClass:
pass
class OOP1_2:
X=5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age):
self.name = name
self.age = age
def identity(self):
print(self.name,self.age)
person=OOP1_2("Florentino",19)
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name= "Ino"
print(person.name)
print(person.age)
###Output
Ino
19
###Markdown
Application 1. Write a Python program that computes the area of a square, and name its class as Square, side as attributes
###Code
class Square:
def __init__(self,sides):
self.sides=sides
def area(self):
return self.sides*self.sides
def display(self):
print("the area of the square is: ", self.area())
square=Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2. Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, course, school as attributes
###Code
class MyClass:
def __init__(self,full_name,age,course,school):
self.full_name = full_name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.full_name,self.age,self.course,self.school)
person = MyClass("Florentino R. Manaysay III",19,"BS Computer Engineering","Cavite State University")
print(person.full_name)
print(person.age)
print(person.course)
print(person.school)
print(person.identity)
###Output
Florentino R. Manaysay III
19
BS Computer Engineering
Cavite State University
<bound method MyClass.identity of <__main__.MyClass object at 0x7f78b9dfe690>>
###Markdown
Python Classes and Objects
###Code
class MyClass:
pass
class OOP1_2:
x=5
print(x)
#creating objects
class OOP1_2:
def __init__(self,name,age):
self.name = name
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Hanns", 19)
print(person.name)
print(person.age)
print(person.identity)
#modifying object name
person.name = "Jaspher"
print(person.name)
print(person.age)
person.age = 246254
print(person.name)
print(person.age)
#deleting objects
del person.name
print(person.name)
print(person.age)
###Output
246254
###Markdown
Application 1 - Write a Python program which computes the area of a square and names its class Square, with sides as attributes.
###Code
class Square:
def __init__(self, sides):
self.sides = sides
def area(self):
return self.sides*self.sides
def display(self):
print("The area is: ", self.area())
square = Square(4)
square.display()
square.area()
###Output
The area is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, and school. Create a class named MyClass, with name, age, course, and school as attributes.
###Code
class MyClass:
def __init__(self, flname, age, course, school):
self.a = flname
self.b = age
self.c = course
self.d = school
def me(self):
print(self.a + "\n" + str(self.b) + "\n" + self.c + "\n" + self.d)
info = MyClass ("Hanns Jaspher A. Elalto", 19, "BSCpE", "CvSU")
info.me()
###Output
Hanns Jaspher A. Elalto
19
BSCpE
CvSU
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
x=5
print(x)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name=name #attributes
self.age=age
def identity(self):
print(self.name,self.age)
person=OOP1_2("Maria", "39") #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name="Rizette"
print(person.name)
print(person.age)
person.age=40
print(person.name)
print(person.age)
#Delete the object
del person.name
print(person.name)
print(person.age)
print(person.age)
###Output
_____no_output_____
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attributes
###Code
class Square:
def __init__(self, sides):
self.sides=sides
def area(self):
return (self.sides*self.sides) #formula to compute the area of the square
def display(self):
print("The area of the square is: ", self.area())
square= Square(4)
print("Side of square= ",square.sides)
square.display()
###Output
_____no_output_____
###Markdown
Application 2 - Write a Python Program that displays your full name, age, course, school. Create a class named MyClass, and name, age, course, school as attributes Create Object
###Code
class Myclass:
def __init__(self, name, age, course, school): #__init_parameters
self.name= name #attributes
self.age=age
self.course= course
self.school= school
def identity(self):
print(self.name,self.age, self.course, self.school)
Person= Myclass("Sarah O. Rebulado", "19", "Bachelor Of Science in Computer Engineering", "Cavite State University-Main Campus") #Createobjects
print("My Name is ",Person.name)
print("Age: ",Person.age)
print("Course: ",Person.course)
print("School: ",Person.school)
###Output
My Name is Sarah O. Rebulado
Age: 19
Course: Bachelor Of Science in Computer Engineering
School: Cavite State University-Main Campus
###Markdown
Modify the Object Name
###Code
Person.name="Sarah Oaña Rebulado"
Person.age="19 Years Old"
print("My Name is ", Person.name)
print("Age: ", Person.age)
###Output
My Name is Sarah Oaña Rebulado
Age: 19 Years Old
###Markdown
Delete the Object
###Code
del Person.age
print("My Name is ",Person.name)
print("Age: ",Person.age)
print("Course: ",Person.course)
print("School: ",Person.school)
###Output
My Name is Sarah Oaña Rebulado
###Markdown
Python Classes and Objects Create A Class
###Code
class MyClass:
pass
class OOP1_2:
X = 5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self, name, age):
self.name = name # attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Jhoriz", 19) # create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Rodel"
print(person.name)
print(person.age)
person.age = 38
print(person.name)
print(person.age)
# Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
38
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute.
###Code
class Square:
def __init__(self, side):
self.side = side
def area(self):
return self.side * self.side
def display(self):
print("the area of the square is", self.area())
square = Square(4)
print(square.side)
square.display()
###Output
4
the area of the square is 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, course and school as attributes.
###Code
class MyClass:
def __init__(self, name, age, course, school):
self.name = name
self.age = age
self.course = course
self.school = school
def display_info(self):
print("Name:", self.name)
print("Age:", self.age)
print("Course:", self.course)
print("School:", self.school)
def introduce_myself(self):
print("\n\nHi guys!...")
print("..My name is " + self.name + ". I am", self.age,
"years old...")
print("...Just like y'all, I am also a first year student under " +
"the course " + self.course + " here in " + self.school + "..")
print("....I chose this course because I like the idea of engineering " +
"combined with programming.....That's all !!!")
student_1 = MyClass("Jhoriz Rodel F. Aquino", 19,
"BS in Computer Engineering", "Cavite State University")
student_1.display_info()
student_1.introduce_myself()
###Output
Name: Jhoriz Rodel F. Aquino
Age: 19
Course: BS in Computer Engineering
School: Cavite State University
Hi guys!...
..My name is Jhoriz Rodel F. Aquino. I am 19 years old...
...Just like y'all, I am also a first year student under the course BS in Computer Engineering here in Cavite State University..
....I chose this course because I like the idea of engineering combined with programming.....That's all !!!
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
X = 5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self, name, age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Hazel", 19) #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Anne"
print(person.name)
print(person.age)
person.age = 20
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
20
###Markdown
Application 1 - Write a python program that computes the area of a square, and name its class as Square, side as attribute.
###Code
class Square:
def __init__(self, sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of a square is:", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of a square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named Myclass, and name, age, school, course as attributes.
###Code
class Myclass:
def __init__(self, name, age, course, school):
self.name = name #attributes
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name,self.age,self.course,self.school)
attributes = Myclass("Hazel Anne P. Quilao", 19, "BS Computer Engineering", "Cavite State University - Main Campus")
print(attributes.name)
print(attributes.age)
print(attributes.course)
print(attributes.school)
###Output
Hazel Anne P. Quilao
19
BS Computer Engineering
Cavite State University - Main Campus
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
X = 5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self, name, age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("John Fred", 19) #create objects
print(person.name, person.age)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Fred"
print(person.name)
print(person.age)
person.age = 20
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
20
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, sides as attributes
###Code
class Square:
def __init__(self, sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of square
def display(self):
print("The area of the square is:", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
The area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, and school. Create a class named MyClass and name, age, course, and school, as attributes
###Code
class MyClass:
def __init__(self, name, age, course, school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name, self.age, self.course, self.school)
student = MyClass("John Fred B. Delos Santos", 19, "Bachelor of Science in Computer Engineering", "Cavite State University - Main Campus")
print(f'Name: {student.name}')
print(f'Age: {student.age}')
print(f'Course: {student.course}')
print(f'School: {student.school}')
print(f'My name is {student.name} and I am {student.age} years old. I am currently pursuing {student.course} in {student.school}.')
###Output
Name: John Fred B. Delos Santos
Age: 19
Course: Bachelor of Science in Computer Engineering
School: Cavite State University - Main Campus
My name is John Fred B. Delos Santos and I am 19 years old. I am currently pursuing Bachelor of Science in Computer Engineering in Cavite State University - Main Campus.
###Markdown
Python Classes and Objects Class
###Code
class Myclass:
pass #create a class without variable and methods
class MyClass:
def __init__(self,name,age):
self.name = name #create class with attributes
self.age = age
def display(self):
print(self.name, self.age)
person = MyClass("Jeffrey R. Samonte", 19) #create an object
person.display()
#Application 1 - Write a Python program that computes the area of a rectangle: Area = l x w
class Rectangle:
def __init__(self,l,w):
self.l = l
self.w = w
def Area(self):
print(self.l * self.w)
rect = Rectangle(7,3)
rect.Area()
###Output
_____no_output_____
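###Markdown
`Area()` above prints its result directly; returning the value instead makes it reusable in further computation. A variant sketch (not the original requirement):
###Code
class Rectangle2:
    def __init__(self, l, w):
        self.l = l
        self.w = w
    def area(self):
        return self.l * self.w   # return instead of print

rect2 = Rectangle2(7, 3)
print(rect2.area())              # 21
print(rect2.area() * 2)          # the returned value can be reused
###Output
_____no_output_____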
###Markdown
Classes with Multiple Objects
###Code
class Birds:
def __init__(self,bird_name):
self.bird_name=bird_name
def flying_birds(self):
print(f"{self.bird_name} flies above the sky")
def non_flying_birds(self):
print(f"{self.bird_name} is the national bird of the Philippines")
vulture = Birds("Griffon Vulture")
crane = Birds("Common Crane")
emu = Birds("Emu")
vulture.flying_birds()
crane.flying_birds()
emu.non_flying_birds()
###Output
Griffon Vulture flies above the sky
Common Crane flies above the sky
Emu is the national bird of the Philippines
###Markdown
Encapsulation
###Code
class foo:
def __init__(self,a,b):
self.__a = a
self.__b = b
def add(self):
return self.__a + self.__b
number = foo(3,4)
number.add()
number.a = 7
number.add()
###Output
_____no_output_____
###Markdown
Encapsulation with Private Attributes
###Code
class Counter:
def __init__(self):
self.__current=0
def increment(self):
self.__current+=1
def value(self):
return self.__current
def reset(self):
self.__current=0
num = Counter()
num.increment() #counter = counter + 1
num.increment()
num.increment()
num.counter = 1
num.value()
###Output
_____no_output_____
###Markdown
Inheritance
###Code
class Person:
def __init__(self, firstname, surname):
self.firstname = firstname
self.surname = surname
def printname(self):
print(self.firstname, self.surname)
person = Person("Ernest Danniel R.", "Tiston")
person.printname()
class Teacher(Person):
pass
person2 = Teacher("Kim", "Beligon")
person2.printname()
class Student(Person):
pass
person3 = Student("Landon", "Lorica")
person3.printname()
###Output
Ernest Danniel R. Tiston
Kim Beligon
Landon Lorica
###Markdown
Polymorphism
###Code
class RegularPolygon:
def __init__(self,side):
self.side = side
class Square(RegularPolygon):
def area(self):
return self.side*self.side
class EquilateralTriangle(RegularPolygon):
def area(self):
return self.side*self.side*0.433
object = Square(4)
print(object.area())
object2 = EquilateralTriangle(3)
print(object2.area())
###Output
16
3.897
###Markdown
Application 1
1.) Create a Python program that displays the names of three students (Student 1, Student 2, and Student 3) and their term grades.
2.) Create a class named Person with attributes std1, std2, std3, pre, mid, fin.
3.) Compute the average of each term grade using the Grade() method.
4.) Information about students' grades must be hidden from others.
###Code
class Person:
def __init__(self, std, pre, mid, fin):
self.__std = std
self.__pre = pre
self.__mid = mid
self.__fin = fin
def Grade(self):
return round((self.__pre + self.__mid + self.__fin)/3)
class Student1(Person):
pass
std1 = str(input("Input Name: "))
prelim1 = float(input("Prelim Grade:"))
midterm1 = float(input("Midterm Grade: "))
final1 = float (input("Final Grade: "))
student1 = Person(std1, prelim1, midterm1, final1)
print()
class Student2(Person):
pass
std2 = str(input("Input Name: "))
prelim2 = float(input("Prelim Grade:"))
midterm2 = float(input("Midterm Grade: "))
final2 = float (input("Final Grade: "))
student2 = Person(std2, prelim2, midterm2, final2)
print()
class Student3(Person):
pass
std3 = str(input("Input Name: "))
prelim3 = float(input("Prelim Grade:"))
midterm3 = float(input("Midterm Grade: "))
final3 = float (input("Final Grade: "))
student3 = Person(std3, prelim3, midterm3, final3)
print()
std_name = str(input("Enter Name:"))
if std_name == std1:
    print("Your General Weighted Average (GWA) is: ",student1.Grade())
elif std_name == std2:
    print("Your General Weighted Average (GWA) is: ",student2.Grade())
elif std_name == std3:
    print("Your General Weighted Average (GWA) is: ",student3.Grade())
else:
    print("The Student is not on the List")
###Output
Input Name: Landon
Prelim Grade:95
Midterm Grade: 96
Final Grade: 97
Input Name: Colleen
Prelim Grade:91
Midterm Grade: 95
Final Grade: 96
Input Name: Ernest
Prelim Grade:98
Midterm Grade: 97
Final Grade: 92
Enter Name:Colleen
Your General Weighted Average (GWA) is: 94
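###Markdown
The name lookup above chains `if`/`elif` branches; a dictionary keyed by name is a common alternative (a sketch assuming the three `Person` objects from the cell above already exist):
###Code
students = {std1: student1, std2: student2, std3: student3}
lookup = students.get(std_name)
if lookup is not None:
    print("Your General Weighted Average (GWA) is: ", lookup.Grade())
else:
    print("The Student is not on the List")
###Output
_____no_output_____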
###Markdown
Python Classes and Objects Create Class
###Code
class MyClass:
pass
class OOP1_2:
X=5
print(X)
class OOP1_2:
def __init__(self,name,age): #__init__ (parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person=OOP1_2("Maria", "39")
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name="Rizette"
print(person.name)
print(person.age)
person.age = 40
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
40
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attributes
###Code
class Square:
def __init__(self, sides):
self.sides = sides
def area(self):
return self.sides*self.sides
def display(self):
print("the area of a square is:", self.area())
square= Square(4)
print(square.sides)
square.display()
###Output
4
the area of a square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school, course as attributes
###Code
class MyClass:
def __init__(self,name,age, course, school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name, self.age)
person=MyClass("Ian Rafael Umipig","19","BS Computer Engineering","CVSU - Main")
print(person.name)
print(person.age)
print(person.course)
print(person.school)
###Output
Ian Rafael Umipig
19
BS Computer Engineering
CVSU - Main
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
X=5
print(X)
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Maria", 39) #create objects
print(person.name)
print(person.age)
#Modify the Object Name
person.name="Rizette"
print(person.name)
print(person.age)
person.age=40
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
40
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute.
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute area of square
def display(self):
print("the area of the square is: ", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school, course as attributes.
###Code
class MyClass:
def __init__(self,name,age,school,course):
self.name=name
self.age=age
self.school=school
self.course=course
def bio(self):
print("Name:", self.name)
print("Age:", self.age)
print("School:", self.school)
print("Course:", self.course)
stu=MyClass("Christian Angelo A. Mones",18,"Cavite State University","BS in Computer Engineering")
stu.bio()
###Output
Name: Christian Angelo A. Mones
Age: 18
School: Cavite State University
Course: BS in Computer Engineering
###Markdown
Classes with Multiple Objects
###Code
class Birds:
def __init__(self,bird_name):
self.bird_name = bird_name
def flying_birds(self):
print(f"{self.bird_name} flies above the sky")
def non_flying_birds(self):
print(f"{self.bird_name} is the national bird of the Philippines")
vulture = Birds("Griffon Vulture")
crane = Birds("Common Crane")
emu = Birds("Emu")
vulture.flying_birds()
crane.flying_birds()
emu.non_flying_birds()
###Output
Griffon Vulture flies above the sky
Common Crane flies above the sky
Emu is the national bird of the Philippines
###Markdown
Encapsulation
###Code
class foo:
def __init__(self,a,b):
self.a = a
self.b = b
def add(self):
return self.a + self.b
number = foo(3,4)
number.add()
number.a = 9 #9+4=13
number.add()
###Output
_____no_output_____
###Markdown
Encapsulation using mangling with double underscores
###Code
class foo:
def __init__(self,a,b):
self.__a = a
self.__b = b
def add(self):
return self.__a + self.__b
number = foo(3,4)
number.add()
number.a = 7 # creates a new public attribute 'a'; the mangled __a stays 3, so add() still returns 7
number.add()
###Output
_____no_output_____
###Markdown
Encapsulation with Private Attributes
###Code
class Counter:
def __init__(self):
self.__current = 0
def increment(self):
self.__current +=1
def value(self): #retrieves the value
return self.__current
def reset(self):
self.__current = 0
num = Counter()
num.increment() #counter = counter + 1
num.increment()
num.increment()
num.value()
###Output
_____no_output_____
###Markdown
Inheritance
###Code
class Person:
def __init__(self, firstname, surname):
self.firstname = firstname
self.surname = surname
def printname(self):
print(self.firstname,self.surname)
person = Person("Mat Lauren Keiser R.","Valenzuela")
person.printname()
class Teacher(Person):
pass
person2 = Teacher("Marcus", "Austria")
person2.printname()
class Student(Person):
pass
person3 = Student("Colleen","Quijano")
person3.printname()
###Output
Mat Lauren Keiser R. Valenzuela
Marcus Austria
Colleen Quijano
###Markdown
Polymorphism
###Code
class RegularPolygon:
def __init__(self,side):
self.side = side
class Square(RegularPolygon):
def area(self):
return self.side * self.side
class EquilateralTriangle(RegularPolygon):
def area(self):
return self.side * self.side * 0.433
object = Square(4)
print(object.area())
object2 = EquilateralTriangle(3)
print(object2.area())
###Output
16
3.897
###Markdown
Application 1
1.) Create a Python program that displays the names of three students (Student 1, Student 2, and Student 3) and their term grades.
2.) Create a class named Person with attributes std1, std2, std3, pre, mid, fin.
3.) Compute the average of each term grade using the Grade() method.
4.) Information about students' grades must be hidden from others.
###Code
class Person:
def __init__(self,std, pre, mid, fin):
self.__std = std
self.__pre = pre
self.__mid = mid
self.__fin = fin
def Grade(self):
return round((self.__pre + self.__mid + self.__fin)/3)
class Student1(Person):
pass
std1 = str(input("Enter Name: "))
prelim_1 = float(input("Prelim Grade: "))
mid_1 = float(input("Midterm grade: "))
fin_1 = float(input("Final Grade: "))
Student_1 = Person(std1,prelim_1,mid_1,fin_1)
print(Student_1.Grade())
print()
class Student2(Person):
pass
std2 = str(input("Enter Name: "))
prelim_2 = float(input("Prelim Grade: "))
mid_2 = float(input("Midterm grade: "))
fin_2 = float(input("Final Grade: "))
Student_2 = Person(std2,prelim_2,mid_2,fin_2)
print(Student_2.Grade())
print()
class Student3(Person):
pass
std3 = str(input("Enter Name: "))
prelim_3 = float(input("Prelim Grade: "))
mid_3 = float(input("Midterm grade: "))
fin_3 = float(input("Final Grade: "))
Student_3 = Person(std3,prelim_3,mid_3,fin_3)
print(Student_3.Grade())
###Output
Enter Name: Ernest
Prelim Grade: 89
Midterm grade: 90
Final Grade: 95
91
Enter Name: Mat
Prelim Grade: 96
Midterm grade: 89
Final Grade: 87
91
Enter Name: Colleen
Prelim Grade: 97
Midterm grade: 99
Final Grade: 101
99
###Markdown
Python Classes and Objects Create a class
###Code
class MyClass:
pass
class OOP1_2:
x=5
print(x)
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name =name #attributes
self.age =age
def identity(self):
print(self.name, self.age)
person=OOP1_2("Gimarose",19) #createobjects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name="Gima"
print(person.name)
print(person.age)
person.age=20
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
20
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute
###Code
class Square:
def __init__(self,sides):
self.sides=sides
def area(self):
return self.sides*self.sides
def display(self):
print("the area of the square is:", self.area())
square=Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Python Classes and Objects Create classes
###Code
class MyClass:
pass
class OOP1_2:
x=5
print(x)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__()
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Ryu", 19) #create objects
print(person.identity)
print(person.name)
print(person.age)
#Modify the Object Name
person.name = "Yong"
person.age = 24
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
###Output
_____no_output_____
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute.
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is:", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school as attributes
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name,self.age,self.course,self.school)
collegestudent = MyClass("Ryu B. Santos",19,"Bachelor of Science in Computer Engineering","Cavite State University")
print(collegestudent.identity)
print(collegestudent.name)
print(collegestudent.age)
print(collegestudent.course)
print(collegestudent.school)
###Output
<bound method MyClass.identity of <__main__.MyClass object at 0x7fa09b5d5c50>>
Ryu B. Santos
19
Bachelor of Science in Computer Engineering
Cavite State University
###Markdown
Python Classes and Objects Create a Class
###Code
class Myclass:
pass
class OOP1_2:
X = 5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Kate", 19) #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Abigail"
print(person.name)
print(person.age)
person.age = 20
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.age)
print(person.name)
###Output
20
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attributes
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is:", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school as attributes.
###Code
class MyClass:
def __init__(self, name, age, course, school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name, self.age, self.course, self.school)
person = MyClass("Kate Abigail H. Palileo", 19, "BS Computer Engineering", "CVSU-Main Campus")
print(person.name)
print(person.age)
print(person.course)
print(person.school)
###Output
Kate Abigail H. Palileo
19
BS Computer Engineering
CVSU-Main Campus
###Markdown
Python Classes and Objects
###Code
class MyClass:
pass
class OOP1_2:
x = 5
print(x)
class OOP1_2:
def __init__(self,name,age):
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Gerald", 19) #create objects
print(person.name)
print(person.age)
print(person.identity())
#Modify the object name
person.name = "Christian"
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
19
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, sides as attributes
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of a square
def display(self):
print("The area of the square is:", self.area())
square = Square(4)
#square.display()
print(square.sides)
square.display()
###Output
4
The area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, course, school as attributes
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name,self.age,self.course,self.school)
Student = MyClass("Gerald Christian Rey R. Balindan", 19, "Bachelor of Science Major in Computer Engineering", "Cavite State University-Indang Campus")
print(Student.name)
print(Student.age)
print(Student.course)
print(Student.school)
print(f"My name is {Student.name} and I am {Student.age} years old. I am a student taking {Student.course} in our beloved {Student.school}.")
###Output
Gerald Christian Rey R. Balindan
19
Bachelor of Science Major in Computer Engineering
Cavite State University-Indang Campus
My name is Gerald Christian Rey R. Balindan and I am 19 years old. I am a student taking Bachelor of Science Major in Computer Engineering in our beloved Cavite State University-Indang Campus.
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
X=5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name=name #attributes
self.age=age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Vincent", 20) #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name="John"
print(person.name)
print(person.age)
person.age=25
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
25
###Markdown
Application 1 - Write a Python Program that computes the area of a square, and name its class as Square, sides as attribute.
###Code
class Square:
def __init__(self,sides):
self.sides=sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("The area of the square is:",self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
The area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school, course as attributes
###Code
class MyClass:
def __init__(self,name,age,school,course):
self.name = name
self.age = age
self.school = school
self.course = course
def identity(self):
print(self.name, self.age, self.school, self.course)
student = MyClass("Jan Vincent C. Vallente", 20, "Cavite State University", "BS Computer Engineering")
print(student.name)
print(student.age)
print(student.school)
print(student.course)
###Output
Jan Vincent C. Vallente
20
Cavite State University
BS Computer Engineering
###Markdown
Python Classes and Objects Create a class
###Code
class MyClass:
pass
class OOP1_2:
x = 5
print(x)
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Landon", 19) #create objects
print(person.name)
print(person.age)
person.name = "Drey"
print(person.name)
#Delete Objects
del person.name
###Output
_____no_output_____
###Markdown
Area of a Square
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is:",self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Age, School, Name, Course
###Code
class MyClass:
def __init__(self,name,age,school,course): #__init__(parameter)
self.name = name #attributes
self.age = age
self.school = school
self.course = course
def identity(self):
print(self.name, self.age, self.school, self.course)
person = MyClass("Landon S. Lorica", 19, "Cavite State University", "BS Computer Engineering") #create objects
print(person.name)
print(person.age)
print(person.school)
print(person.course)
###Output
Landon S. Lorica
19
Cavite State University
BS Computer Engineering
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school, course as attributes
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name =name
self.age =age
self.course =course
self.school =school
def attributes(self):
print(self.name, self.age, self.course, self.school)
person=MyClass("Gimarose A. Luzande",19,"Bachelor of Science in Computer Engineering", "Cavite State University") #
print(person.name)
print(person.age)
print(person.course)
print(person.school)
print(person.attributes)
###Output
Gimarose A. Luzande
19
Bachelor of Science in Computer Engineering
Cavite State University
<bound method MyClass.attributes of <__main__.MyClass object at 0x7fcfcd55c310>>
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
x = 5
print(x)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age):
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Maria", 39)
print(person.identity)
print(person.name)
print(person.age)
#Modify the Object Name
person.name = "Rizette"
print(person.name)
print(person.age)
person.age = 40
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
40
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attributes
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides
def display(self):
print("The area of the square is", self.area())
square = Square(4)
square.display()
print(square.sides)
###Output
The area of the square is 16
4
###Markdown
Application 2 - Write a Python program that displays your full name, age, course and school. Create a class named MyClass and name, age, course, school as attributes
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name = name
self.age = age
self.course = course
self.school = school
def id(self):
print(self.name)
print(self.age)
print(self.course)
print(self.school)
myself = MyClass("Paul Francis B. Masangcay", 19, "Computer Engineering", "Cavite State University")
myself.id()
###Output
Paul Francis B. Masangcay
19
Computer Engineering
Cavite State University
###Markdown
Classes with Multiple Objects
###Code
class Birds:
def __init__(self, bird_name):
self.bird_name = bird_name
def flying_birds(self):
print(f"{self.bird_name} flies above the sky")
def non_flying_birds(self):
print(f"{self.bird_name} is the national bird of the Philippines")
vulture = Birds("Griffo Vulture")
crane = Birds("Common Crane")
emu = Birds("Emu")
vulture.flying_birds()
crane.flying_birds()
emu.non_flying_birds()
###Output
Griffon Vulture flies above the sky
Common Crane flies above the sky
Emu is the national bird of the Philippines
###Markdown
Encapsulation using mangling with double underscores
###Code
class foo:
def __init__(self,a,b):
self.__a = a
self.__b = b
def add(self):
return self.__a + self.__b #Private attributes
number = foo(3, 4)
number.add()
number.a = 7 # creates a new public attribute 'a'; the mangled __a stays 3, so add() still returns 7
number.add()
###Output
_____no_output_____
###Markdown
Encapsulation with Private Attributes
###Code
class Counter:
def __init__(self):
self.__current = 0
def increment(self):
self.__current += 1
def value(self):
return self.__current
def reset(self):
self.__current = 0
num = Counter()
num.counter = 1 # new public attribute; the private __current is unchanged
num.increment() #counter = counter +1
num.increment()
num.increment()
num.value()
###Output
_____no_output_____
###Markdown
Inheritance
###Code
#a way to represent objects that can share the same methods
class Person:
def __init__(self, firstname, surname):
self.firstname = firstname
self.surname = surname
def printname(self):
print(self.firstname, self.surname)
person = Person("Jia", "Mieral")
person.printname()
class Teacher(Person):
pass
person2 = Teacher("Maria", "Sayo")
person2.printname()
class Student(Person):
pass
person3 = Student ("Colleen", "Quijano")
person3.printname()
###Output
Jia Mieral
Maria Sayo
Colleen Quijano
###Markdown
Polymorphism
###Code
class RegularPolygon:
def __init__(self, side):
self.side = side
class Square(RegularPolygon):
def area(self):
return self.side * self.side
class EquilateralTriangle(RegularPolygon):
def area(self):
return self.side * self.side * 0.433
object = Square(4)
print(object.area())
object2 = EquilateralTriangle(3)
print(object2.area())
###Output
16
3.897
###Markdown
Create a Class
###Code
class MyClass:
pass
class OOP1_2:
X = 5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age):
self.name = name
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Maria",39)
print(person.name)
print(person.age)
#Modify the Object Name
person.name = "Rizette"
print(person.name)
person.age = 40
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.age)
###Output
_____no_output_____
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute
###Code
class Square:
def __init__(self,sides):
self.sides=sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is:", self.area())
square = Square(4)
square.display()
print(square.sides)
###Output
the area of the square is: 16
4
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass and name, age, school, course as attributes
###Code
class MyClass:
def __init__(self,name,age,school,course):
self.name=name
self.age=age
self.school=school
self.course=course
def identity(self):
print(self.name,self.age,self.school,self.course)
person = MyClass("Mark Adrian",18,"Cavite State University","Computer Engineering")
print(person.name)
print(person.age)
print(person.school)
print(person.course)
###Output
Mark Adrian
18
Cavite State University
Computer Engineering
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
x=5
print(x)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name=name #attributes
self.age=age
def identity(self):
print(self.name, self.age)
person= OOP1_2('Gabriel',18) #create objects
print(person.identity)
print(person.name)
print(person.age)
#modify the Object Name
person.name= 'Gab'
print(person.name)
print(person.age)
person.age=40
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
40
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attributes
###Code
class Square:
def __init__(self,sides):
self.sides= sides
def area(self):
return self.sides*self.sides #Formula to compute the area of the square
def display(self):
print('the area of the square is:',self.area())
square=Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school, course as attributes
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name=name
self.age=age
self.course=course
self.school=school
def information(self):
return '''{}
{}
{}
{}'''.format(self.name,self.age,self.course,self.school)
student= MyClass('Gabriel S. Catanaoan', 18, 'Bachelor of Science in Computer Engineering', 'Cavite State University')
print(student.information())
class MyClass:
def __init__(self,name,age,course,school):
self.name=name
self.age=age
self.course=course
self.school=school
def information(self):
return '{},{},{},{}'.format(self.name,self.age,self.course,self.school)
student= MyClass('Gabriel S. Catanaoan', 18, 'Bachelor of Science in Computer Engineering', 'Cavite State University')
print(student.information())
###Output
Gabriel S. Catanaoan,18,Bachelor of Science in Computer Engineering,Cavite State University
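###Markdown
Both versions above use `str.format()`; an f-string performs the same interpolation more directly. An equivalent sketch:
###Code
class MyClass:
    def __init__(self,name,age,course,school):
        self.name = name
        self.age = age
        self.course = course
        self.school = school
    def information(self):
        return f'{self.name},{self.age},{self.course},{self.school}'

student = MyClass('Gabriel S. Catanaoan', 18, 'Bachelor of Science in Computer Engineering', 'Cavite State University')
print(student.information())
###Output
_____no_output_____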
###Markdown
Python Classes and Objects Create Class
###Code
class MyClass:
pass
class OOP1_2:
x = 5
print(x)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person=OOP1_2("Rovick", 19)
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Rov"
print(person.name)
print(person.age)
person.age = 20
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
20
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, sides as attributes
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is:", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school, course as attributes
###Code
class MyClass:
def __init__(self, name , age , school, course):
self.name = name
self.age = age
self.school = school
self.course = course
def identity(self):
print(self.name, self.age)
info = MyClass("Jan Rovick M. Causaren",19,"CVSU-INDANG","BSCpE")
print(info.name)
print(info.age)
print(info.school)
print(info.course)
###Output
Jan Rovick M. Causaren
19
CVSU-INDANG
BSCpE
###Markdown
Classes with Multiple Objects
###Code
class Birds:
def __init__(self,bird_name):
self.bird_name = bird_name
def flying_birds(self):
print(f"{self.bird_name} flies above the sky")
def non_flying_birds(self):
print(f"{self.bird_name} is the national bird of the Philippines")
vulture = Birds("Griffon Vulture")
crane = Birds("Common Crane")
emu = Birds("Emu")
vulture.flying_birds()
crane.flying_birds()
emu.non_flying_birds()
###Output
Griffon Vulture flies above the sky
Common Crane flies above the sky
Emu is the national bird of the Philippines
###Markdown
Encapsulation
###Code
class foo:
def __init__(self,a,b):
self.a = a
self.b = b
def add(self):
return self.a + self.b
number = foo(3,4)
number.add()
number.a = 9 #9+4=13
number.add()
###Output
_____no_output_____
###Markdown
Encapsulation using mangling with double underscores
###Code
class foo:
def __init__(self,a,b):
self.__a = a
self.__b = b
def add(self):
return self.__a + self.__b
number = foo(3,4)
number.add()
number.a = 7 # creates a new public attribute 'a'; the mangled __a stays 3, so add() still returns 7
number.add()
###Output
_____no_output_____
###Markdown
Encapsulation with Private Attributes
###Code
class Counter:
def __init__(self):
self.__current = 0
def increment(self):
self.__current +=1
def value(self): #retrieves the value
return self.__current
def reset(self):
self.__current = 0
num = Counter()
num.increment() #counter = counter + 1
num.increment()
num.increment()
num.value()
###Output
_____no_output_____
###Markdown
Inheritance
###Code
class Person:
def __init__(self, firstname, surname):
self.firstname = firstname
self.surname = surname
def printname(self):
print(self.firstname,self.surname)
person = Person("Mat Lauren Keiser R.","Valenzuela")
person.printname()
class Teacher(Person):
pass
person2 = Teacher("Marcus", "Austria")
person2.printname()
class Student(Person):
pass
person3 = Student("Colleen","Quijano")
person3.printname()
###Output
Mat Lauren Keiser R. Valenzuela
Marcus Austria
Colleen Quijano
###Markdown
Polymorphism
###Code
class RegularPolygon:
def __init__(self,side):
self.side = side
class Square(RegularPolygon):
def area(self):
return self.side * self.side
class EquilateralTriangle(RegularPolygon):
def area(self):
return self.side * self.side * 0.433
object = Square(4)
print(object.area())
object2 = EquilateralTriangle(3)
print(object2.area())
###Output
16
3.897
###Markdown
Application 1
1.) Create a Python program that displays the names of three students (Student 1, Student 2, and Student 3) and their term grades.
2.) Create a class named Person with attributes std1, std2, std3, pre, mid, fin.
3.) Compute the average of each term grade using the Grade() method.
4.) Information about students' grades must be hidden from others.
###Code
class Person:
def __init__(self,std, pre, mid, fin):
self.__std = std
self.__pre = pre
self.__mid = mid
self.__fin = fin
def Grade(self):
return round((self.__pre + self.__mid + self.__fin)/3)
class Student1(Person):
pass
std1 = str(input("Enter Name: "))
prelim_1 = float(input("Prelim Grade: "))
mid_1 = float(input("Midterm grade: "))
fin_1 = float(input("Final Grade: "))
Student_1 = Person(std1,prelim_1,mid_1,fin_1)
print()
class Student2(Person):
pass
std2 = str(input("Enter Name: "))
prelim_2 = float(input("Prelim Grade: "))
mid_2 = float(input("Midterm grade: "))
fin_2 = float(input("Final Grade: "))
Student_2 = Person(std2,prelim_2,mid_2,fin_2)
print()
class Student3(Person):
pass
std3 = str(input("Enter Name: "))
prelim_3 = float(input("Prelim Grade: "))
mid_3 = float(input("Midterm grade: "))
fin_3 = float(input("Final Grade: "))
Student_3 = Person(std3,prelim_3,mid_3,fin_3)
print()
Name = str(input("Enter a name:"))
if Name == std1:
    print("GWA: ", Student_1.Grade())
elif Name == std2:
    print("GWA: ", Student_2.Grade())
elif Name == std3:
    print("GWA: ", Student_3.Grade())
else:
    print("Student is not on the list")
###Output
Enter Name: Ernest
Prelim Grade: 99
Midterm grade: 99
Final Grade: 99
Enter Name: Mat
Prelim Grade: 95
Midterm grade: 98
Final Grade: 97
Enter Name: Landon
Prelim Grade: 96
Midterm grade: 99
Final Grade: 98
Enter a name:Landon
GWA: 98
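The name lookup at the end of the cell above can also be written with a dictionary instead of chained comparisons. A sketch, assuming the `std1`/`std2`/`std3` strings and the three `Person` objects from that cell already exist:

```python
# hypothetical refactor of the lookup at the end of the previous cell
students = {std1: Student_1, std2: Student_2, std3: Student_3}
record = students.get(Name)
if record is not None:
    print("GWA: ", record.Grade())
else:
    print("Student is not on the list")
```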
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
X = 5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Kinlie", 18) #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Wonyoung"
person.age = 17
print(person.name)
print(person.age)
#Delete the Object
del person.name
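#the next print raises an AttributeError, since 'name' was deleted above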
print(person.name)
print(person.age)
###Output
17
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is:",self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is: 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, school and course as attributes
###Code
class MyClass:
def __init__(self,name,age,school,course): #__init__(parameter)
self.name = name #attributes
self.age = age
self.school = school
self.course = course
def identity(self):
print(self.name, self.age, self.school, self.course)
person = MyClass("Kinlie Venice L. de Guzman", 18, "Cavite State University", "BS Computer Engineering") #create objects
print(person.name)
print(person.age)
print(person.school)
print(person.course)
###Output
Kinlie Venice L. de Guzman
18
Cavite State University
BS Computer Engineering
###Markdown
Python Classes and ObjectsClass
###Code
class Myclass:
pass #create a class without variable and methods
class MyClass:
def __init__(self,name,age):
self.name = name #create class with attributes
self.age = age
def display(self):
print(self.name, self.age)
person = MyClass("Michael S. Colcol", 19) #create an object
person.display()
#Application 1 - Write a Python program that computes the area of a rectangle: Area = l x w
class Rectangle:
def __init__(self,l,w):
self.l = l
self.w = w
def Area(self):
print(self.l * self.w)
rect = Rectangle(7,3)
rect.Area()
###Output
_____no_output_____
###Markdown
###Code
class data:
def __init__(self,name,prelim,midterm,final):
self.__name = name
self.__prelim = prelim
self.__midterm = midterm
self.__final = final
def printname(self):
print(self.__name)
def Grade(self):
return round((self.__prelim + self.__midterm + self.__final)/3) #round the average, not the sum
class stud1(data):
pass
fname_input1 = str(input("Enter Your Name: "))
prelims_input1 = float(input("Prelims:"))
midterms_input1 = float(input("Midterm:"))
finals_input1 = float(input("Finals:"))
student1 = stud1(fname_input1,prelims_input1, midterms_input1, finals_input1)
print("\n")
class stud2(data):
pass
fname_input2 = str(input("Enter Your Name: "))
prelims_input2 = float(input("Prelims:"))
midterms_input2 = float(input("Midterm:"))
finals_input2 = float(input("Finals:"))
student2 = stud2(fname_input2, prelims_input2, midterms_input2, finals_input2)
print("\n")
class stud3(data):
pass
fname_input3 = str(input("Enter Your Name: "))
prelims_input3 = float(input("Prelims:"))
midterms_input3 = float(input("Midterm:"))
finals_input3 = float(input("Finals:"))
student3 = stud3(fname_input3,prelims_input3, midterms_input3, finals_input3)
print("\n")
name = str(input("Enter Your Name:"))
if name == fname_input1:
print("\n", "Grade: ", round(student1.Grade(),2))
elif name == fname_input2:
print("\n", "Grade: ", round(student2.Grade(),2))
elif name == fname_input3:
print("\n", "Grade: ", round(student3.Grade(),2))
elif name == "all":
print("Name: ", fname_input1, "Grade: ", round(student1.Grade(),2), "\n",
"Name: ", fname_input2, "Grade: ", round(student2.Grade(),2), "\n",
"Name: ", fname_input3, "Grade: ", round(student3.Grade(),2), "\n")
else:
print("Student Info Unavailable")
###Output
Enter Your Name: Jean
Prelims:89
Midterm:99
Finals:79
Enter Your Name: Equi
Prelims:88
Midterm:98
Finals:78
Enter Your Name: Quiel
Prelims:99
Midterm:99
Finals:99
Enter Your Name:Jean
Grade: 89
###Markdown
Python Classes and Objects Create a Class
###Code
class MyClass:
pass
class OOP1_2:
X = 6
print(X)
###Output
6
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age): #double underscore like __init__
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Arvin",18)
print(person.name)
print(person.age)
print(person.identity)
del(person.name)
print(person.name)
print(person.age)
###Output
18
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, course, school as attributes
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name,self.age,self.course,self.school)
student = MyClass("VIADO JHON ARVIN T", 18, "Bachelor of Science in Computer Engineering", "Cavite State University")
print(student.name)
print(student.age)
print(student.course)
print(student.school)
###Output
VIADO JHON ARVIN T
18
Bachelor of Science in Computer Engineering
Cavite State University
###Markdown
Python Classes and Objects Create a class
###Code
class MyClass:
pass
class OOP1_2:
X=5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self,name,age):
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Roy",20) #create objects
print(person.identity)
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Roy",20)
print(person.name)
print(person.age)
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name,self.age)
person = OOP1_2("Roy",20)
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Paul"
print(person.name)
print(person.age)
person.age = 50
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
50
###Markdown
Application 1 - Write a Python program that computes the area of a square and name its class as Square, side as attribute
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is",self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is 16
###Markdown
Application 2 - Write a Python Program that displays your full name,age,course,school. Create a class named MyClass, and name,age,course,school as attributes.
###Code
class MyClass:
def __init__(self,name,age,course,school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name,self.age,self.course,self.school)
person = MyClass("Roy Millamis",20,"Bachelor of Science In Computer Engineering","Cavite State University")
print(person.name)
print(person.age)
print(person.course)
print(person.school)
person.identity()
print(person.identity)
###Output
Roy Millamis
20
Bachelor of Science In Computer Engineering
Cavite State University
Roy Millamis 20 Bachelor of Science In Computer Engineering Cavite State University
<bound method MyClass.identity of <__main__.MyClass object at 0x7f8a812ded10>>
###Markdown
Python Classes and Objects
###Code
class MyClass:
pass
class OOP1_2:
X = 5
print(X)
###Output
5
###Markdown
Create Objects
###Code
class OOP1_2:
def __init__(self, name, age):
self.name = name # attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Kurt", 18) # create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Ashley"
print(person.name)
print(person.age)
person.age = 21
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.name)
print(person.age)
###Output
21
###Markdown
Application 1 - Write a Python program that computes the area of a square, and name its class as Square, side as attribute
###Code
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is", self.area())
square = Square(4)
print(square.sides)
square.display()
###Output
4
the area of the square is 16
###Markdown
Application 2 - Write a Python program that displays your full name, age, course, school. Create a class named MyClass, and name, age, course, school as attributes
###Code
class MyClass:
def __init__(self, name, age, course, school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name, self.age, self.course, self.school)
student = MyClass("Kurt Ashley S. Emprese", 18, "Computer Engineering", "Cavite State University")
print("Name =", student.name, "\nAge =", student.age, "\nCourse =", student.course, "\nSchool =", student.school)
###Output
Name = Kurt Ashley S. Emprese
Age = 18
Course = Computer Engineering
School = Cavite State University
###Markdown
Python Classes and Objects
###Code
class Myclass:
pass
class OOP1_2:
X = 5
print(X)
class OOP1_2:
def __init__(self,name,age): #__init__(parameter)
self.name = name #attributes
self.age = age
def identity(self):
print(self.name, self.age)
person = OOP1_2("Jeroh", 18) #create objects
print(person.name)
print(person.age)
print(person.identity)
#Modify the Object Name
person.name = "Jeroh Lee"
print(person.name)
print(person.age)
person.age = 18
print(person.name)
print(person.age)
#Delete the Object
del person.name
print(person.age)
print(person.name)
class Square:
def __init__(self,sides):
self.sides = sides
def area(self):
return self.sides*self.sides #formula to compute the area of the square
def display(self):
print("the area of the square is:", self.area())
square = Square(10)
print(square.sides)
square.display()
class MyClass:
def __init__(self, name, age, course, school):
self.name = name
self.age = age
self.course = course
self.school = school
def identity(self):
print(self.name, self.age, self.course, self.school)
person = MyClass("Jeroh Lee", 18, "BSCpE", "Cavite State University")
print(person.name)
print(person.age)
print(person.course)
print(person.school)
###Output
Jeroh Lee
18
BSCpE
Cavite State University
|
aprenderPython-main/Tema_4/Python_Funciones.ipynb | ###Markdown
Functions Function **Function.** A function in `Python` is a reusable piece of code that only runs when it is called. It is defined using the reserved word `def`, and its general structure is the following: ```def function_name(input1, input2, ..., inputn): body of the function return output``` **Observation.** The `return` statement ends the execution of the function and returns the result indicated after it. If nothing is indicated, the function finishes but returns nothing. As we have seen, functions generally consist of 3 parts: - **Inputs (parameters or arguments).** The values we pass to the function as input. - **Body.** All the operations the function carries out. - **Output.** The result the function returns. **Observation.** Parameters are variables internal to the function. If we tried to evaluate one of those variables in the global scope, we would get an error. With the above in mind, when building a function we should ask ourselves the following questions: - What data does the function need to know? (inputs) - What does the function do? (body) - What does it return? (output) **Observation.** The inputs and the output are optional: we can define a function without providing it inputs and without it returning anything. Once a function is defined, we call it by using its name followed by parentheses:
###Code
def mi_primera_funcion():
print("Hola")
mi_primera_funcion()
###Output
Hola
###Markdown
We have said that both the inputs and the output are optional. Let's look at some examples covering the different cases. --- Example 1 Let's see another example that needs no parameters and returns nothing, just as happened with `mi_primera_funcion()`
###Code
def holaMundo():
print("Hola mundo")
holaMundo()
# This function, when called, prints "Hola mundo" but does not return anything.
###Output
Hola mundo
###Markdown
--- Example 2 Let's see an example of a function that needs no input but returns an output. For example, a function that returns "¡Buenos días!"
###Code
# Declare the function:
def buenosDias():
return "Buenos días"
# Call the function:
buenosDias()
###Output
_____no_output_____
###Markdown
Since it returns the greeting, we can store it in a variable, which will be of type string
###Code
buenosDias = buenosDias()
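# careful: this rebinds the name buenosDias to the returned string, so the function itself is no longer callable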
print(buenosDias)
print(type(buenosDias))
###Output
Buenos días
<class 'str'>
###Markdown
--- Example 3 Let's now see an example of a function that returns nothing but does take a parameter
###Code
def buenos_dias(nombre):
print("¡Buenos días, {}!".format(nombre))
buenos_dias(nombre = "Nacho")
# The function receives a name as a parameter and prints the greeting with the given name.
###Output
¡Buenos días, Nacho!
###Markdown
--- Example 4 Finally, let's create a function that computes the integer division of two numbers and returns the quotient and the remainder.
###Code
def division_entera(x, y):
q = x // y
r = x % y
return q, r
###Output
_____no_output_____
###Markdown
This function, which we have called `division_entera`, computes the quotient and the remainder of any two numbers and returns those two computed values as its result. Let's now use our function to compute the quotient and the remainder of the division $41 \div 7$
###Code
division_entera(x = 41, y = 7)
division_entera(y = 7, x = 41)
division_entera(41, 7)
###Output
_____no_output_____
###Markdown
By calling the function and giving it the parameters `x = 41` and `y = 7`, we obtained the tuple `(5, 6)` as a result. The meaning of this result is that the integer quotient of $41\div 7$ is 5, while the remainder is $6$. Indeed, $$41 = 7\cdot 5 + 6$$ We could also store the results returned by our function in separate variables, in order to work with them in the global scope
###Code
cociente, resto = division_entera(x = 41, y = 7)
print(cociente)
print(resto)
print(41 == 7 * cociente + resto)
###Output
5
6
True
###Markdown
Parameters By default, a function must be called with the correct number of arguments. That is, if the function expects 2 arguments, we have to call the function with exactly those 2 arguments. No more, no less.
###Code
def nombre_completo(nombre, apellido):
print("El nombre completo es: ", nombre, apellido)
nombre_completo("Ana", "García")
###Output
El nombre completo es: Ana García
###Markdown
If we try to call the `nombre_completo()` function passing only 1 parameter or 3 parameters, the call raises an error. Arbitrary number of arguments If we do not know how many parameters will be passed in, we add an asterisk `*` before the parameter name in the function definition. The values passed in are stored in a tuple.
###Code
def suma_num(*numeros):
suma = 0
for n in numeros:
suma += n
return suma
suma_num(1, 2, 3)
suma_num(2, 4, 613, 8, 10)
###Output
_____no_output_____
###Markdown
Arbitrary number of keyword arguments So far we have seen that when passing values to a function we can use the syntax `keyword = value` or simply pass the value positionally, following the order in the function definition:
###Code
def nombre_completo(nombre, apellido):
print("El nombre completo es: ", nombre, apellido)
nombre_completo("Pedro", "Aguado")
###Output
El nombre completo es: Pedro Aguado
###Markdown
In reality, full names can have two or even more surnames, but we do not know whether the user has 1, 2, or more. So we can add two asterisks `**` before the parameter name to be able to pass in as many as we want without raising an error
###Code
def nombre_completo(nombre, **apellido):
print("El nombre completo es: {}".format(nombre), end = " " )
for i in apellido.items():
print("{}".format(i[1]), end = " ")
nombre_completo(nombre = "Luis", apellido1 = "Pérez", apellido2 = "López")
###Output
El nombre completo es: Luis Pérez López
###Markdown
Default parameters We have seen that a function in `Python` may or may not have parameters. If it has them, we can give some of them a default value. The `diff()` function computes the difference between the two numbers passed as parameters. We can make the subtrahend default to 1 as follows:
###Code
def diff(x, y = 1):
return x - y
diff(613)
###Output
_____no_output_____
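One caveat worth adding (it is not covered in the original material): default values are evaluated only once, when the function is defined, so a mutable default such as a list is shared between calls. A minimal sketch of the pitfall and the usual `None` idiom:

```python
def append_bad(item, items=[]):     # the same list object is reused on every call
    items.append(item)
    return items

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2] -- surprising carry-over from the first call

def append_good(item, items=None):  # use None as a sentinel instead
    if items is None:
        items = []
    items.append(item)
    return items

print(append_good(1))  # [1]
print(append_good(2))  # [2]
```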
###Markdown
Docstring **Docstring.** These are explanatory comments that help in understanding how a function works. - They go between triple double quotes - They can be multiline - They are placed at the beginning of the function definition. Going back to the integer division example, we could use a docstring in the following way:
###Code
def division_entera(x, y):
"""
This function computes the quotient and the remainder
of the integer division of x by y.
Arguments:
x(int): dividend
y(int): divisor, different from zero.
"""
q = x // y
r = x % y
return q, r
###Output
_____no_output_____
###Markdown
With the help of the `.__doc__` attribute we can directly access the information given in a function's docstring
###Code
print(division_entera.__doc__)
###Output
This function computes the quotient and the remainder
of the integer division of x by y.
Arguments:
x(int): dividend
y(int): divisor, different from zero.
###Markdown
Variables of a function Inside a function in `Python` there are two kinds of variables: - **Local variable.** One that is created and exists only inside the function. - **Global variable.** One that is created in the global scope. Given the following function:
###Code
def operaciones_aritmeticas(x, y):
sum = x + y
diff = x - y
mult = x * y
div = x / y
return {"suma": sum,
"resta": diff,
"producto": mult,
"division": div}
###Output
_____no_output_____
###Markdown
If we try to print, say, the value taken by the variable `mult` in the global scope, we get an error: that variable does not exist at the global level because it was never declared in that scope, only locally, inside the `operaciones_aritmeticas()` function.
###Code
print(operaciones_aritmeticas(5, 3))
###Output
{'suma': 8, 'resta': 2, 'producto': 15, 'division': 1.6666666666666667}
###Markdown
If the variable `mult` had in fact been defined in the global scope, as happens in the following block of code, then even though the local variable has the same name, and no matter how many times we run the function, the value of the global variable is not modified
###Code
mult = 10
print(operaciones_aritmeticas(x = 5, y = 3))
print(mult)
###Output
{'suma': 8, 'resta': 2, 'producto': 15, 'division': 1.6666666666666667}
10
###Markdown
If inside a function we apply the reserved word `global` to a variable name, that name automatically refers to the previously defined global variable. Let's see an example of a function that returns the number after the integer `n` defined in the global scope:
###Code
n = 7
def next_n():
global n
return n + 1
next_n()
###Output
_____no_output_____
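Strictly speaking, `global` is only required when we assign to the global name; merely reading `n`, as `next_n()` does, would work without it. A small illustrative sketch of a case where `global` really is needed:

```python
counter = 0

def bump():
    global counter     # without this line, counter += 1 raises UnboundLocalError
    counter += 1
    return counter

bump()
bump()
print(counter)         # 2 -- the global variable really was modified
```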
###Markdown
Pass by copy vs. pass by reference Depending on the type of data we pass to the function, we can distinguish between - **Pass by copy.** A local copy of the variable is created inside the function. - **Pass by reference.** The variable itself is handled directly, and changes made inside the function also take effect at the global level. In general, basic data types such as integers, floats, strings, or booleans are passed by copy, while data structures such as lists, dictionaries, sets, or other objects are passed by reference (tuples are passed by reference too, but being immutable they cannot be modified in place). An example of pass by copy would be
###Code
def double_value(n):
n = n*2
return n
num = 5
print(double_value(num))
print(num)
###Output
10
5
###Markdown
An example of pass by reference would be
###Code
def double_values(ns):
for i, n in enumerate(ns):
ns[i] *= 2
return ns
nums = [1, 2, 3, 4, 5]
print(double_values(nums))
print(nums)
###Output
[2, 4, 6, 8, 10]
[2, 4, 6, 8, 10]
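If we want to keep the caller's list intact, the function can work on a copy. A minimal sketch of that defensive pattern (an addition to the original material):

```python
def double_values_safe(ns):
    ns = list(ns)      # shallow copy: the caller's list is left untouched
    for i in range(len(ns)):
        ns[i] *= 2
    return ns

nums = [1, 2, 3, 4, 5]
print(double_values_safe(nums))  # [2, 4, 6, 8, 10]
print(nums)                      # [1, 2, 3, 4, 5] -- unchanged
```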
###Markdown
More complex functions Functions can be more complete, since they admit both decision and iteration operators. Going back to Example 4, the function we created is clearly very simple, since we assume the user will pass integers as parameters. **Exercise.** Improve the `division_entera()` function so that it - Checks that the numbers passed in are integers. If not, reports that the integer part of the given values has been taken. - Performs the integer division of the larger parameter (in absolute value) by the smaller one. That is, if the user passes `x = -2` and `y = -10`, since 10 > 2 the function must carry out the integer division of -10 by -2. - Prints a sentence stating the division performed and the quotient and remainder obtained. - Returns the quotient and the remainder as a tuple
###Code
def division_entera(x, y):
ints = (x == int(x)) and (y == int(y))
if not ints:
x = int(x)
y = int(y)
print("Se tomarán como parámetro las partes enteras de los valores introducidos")
if abs(x) >= abs(y):
q = x // y
r = x % y
print(f"Se ha realizado la división de {x} entre {y}, el resultado es: cociente = {q} y el resto {r}")
else:
q = y // x
r = y % x
print(f"Se ha realizado la división de {y} entre {x}, el resultado es: cociente = {q} y el resto {r}")
return q, r
division_entera(-10.3, -5)
division_entera(-3, 19)
###Output
Se ha realizado la división de 19 entre -3, el resultado es: cociente = -7 y el resto -2
###Markdown
--- Example 5 Let's look at a function that, given a number, tells us whether it is positive, negative, or zero.
###Code
def signo(num):
"""
Function that, given a number, returns its sign:
positive, negative, or zero.
Arguments:
num(int): number whose sign we want to find.
Returns:
signo(str): the string "Positivo", "Negativo", or "Cero".
"""
if num > 0:
return "Positivo"
elif num < 0:
return "Negativo"
else:
return "Cero"
print(signo(3.1415))
print(signo(-100))
print(signo(0))
###Output
Positivo
Negativo
Cero
###Markdown
--- Example 6 Now let's look at a function containing a `for` loop that, given an integer, prints its multiplication table with its first 10 multiples and returns a list with all those multiples:
###Code
def multiplication_table10(num):
"""
Given an integer, print its multiplication table with
the first 10 multiples and return a list of the multiples.
Args:
num (int): value whose multiplication table we compute
Returns:
multiples (list): list with the first 10 multiples of num
"""
multiples = []
print("La tabla de multiplicar del {}:".format(num))
for i in range(1, 11):
multiple = num * i
print("{} x {} = {}".format(num, i, multiple))
multiples.append(multiple)
return multiples
multiples7 = multiplication_table10(7)
print(multiples7)
###Output
La tabla de multiplicar del 7:
7 x 1 = 7
7 x 2 = 14
7 x 3 = 21
7 x 4 = 28
7 x 5 = 35
7 x 6 = 42
7 x 7 = 49
7 x 8 = 56
7 x 9 = 63
7 x 10 = 70
[7, 14, 21, 28, 35, 42, 49, 56, 63, 70]
###Markdown
Let's now improve the `multiplication_table10()` function so that if the user passes a number that is not an integer, the function warns them and explains the mistake being made:
###Code
def multiplication_table10(num):
"""
Given a value, first check whether it is an integer.
If it is not, return nothing.
If it is, print its multiplication table with the first 10
multiples and return a list of the multiples.
Args:
num (int): value whose first 10 multiples we compute
Returns:
multiples (list): list with the first 10 multiples of num
"""
if type(num) != type(1):
print("El número introducido no es entero")
return
multiples = []
print("La tabla de multiplicar del {}:".format(num))
for i in range(1, 11):
multiple = num * i
print("{} x {} = {}".format(num, i, multiple))
multiples.append(multiple)
return multiples
multiples3 = multiplication_table10(num = 3)
print(multiples3)
multiples_float = multiplication_table10(num = "3.7")
print(multiples_float)
###Output
El número introducido no es entero
None
###Markdown
--- Example 7 Let's now create a function that, given a sentence ending in a period, tells us whether or not it contains the letter "a", using a `while` loop
###Code
def contains_a(sentence):
i = 0
while sentence[i] != ".":
if sentence[i] == "a":
return True
i += 1
return False
contains_a("El erizo es bonito.")
contains_a("El elefante es gigante.")
###Output
_____no_output_____
###Markdown
**Exercise.** Generalize the `contains_a()` function into a function called `contains_letter()` that returns whether an arbitrary sentence (not necessarily ending in a period) contains a letter also specified by the user. You must do it using only decision and iteration operators. Using any existing `string` method is not allowed.
###Code
def contains_letter(sentence, letter):
for c in sentence:
if c == letter:
return True
return False
contains_letter("Mi amigo es muy inteligente, pero un poco pesado", "t")
###Output
_____no_output_____
###Markdown
--- Recursive functions **Recursive function.** A function that calls itself. **Careful!** We must be very careful with this kind of function, because we can fall into an infinite loop; that is, the function would never finish executing. A recursive function that would enter an infinite loop is the following.
###Code
def powers(x, n):
print(x ** n)
powers(x, n + 1)
###Output
_____no_output_____
###Markdown
Why do we say it enters an infinite loop? Because it will only stop if we interrupt the execution. This happens because we have not given the function a stopping case, called the base case. **Base case.** The case that tells the recursion when it must stop. It must always be provided so as not to fall into an infinite loop. In the case of the `powers()` function, we can set the base case to be when the resulting value exceeds 1000000. We indicate it with an `if`
###Code
def powers(x, n):
if x ** n > 1000000:
return x ** n
print(x ** n)
powers(x, n + 1)
powers(2, 1)
###Output
2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
131072
262144
524288
###Markdown
--- Example 8 Now let's see a classic example of a recursive function that works correctly. We want a function that gives us the $i$-th term of the Fibonacci sequence: we give it the index of the term and the function returns the value of that term. The Fibonacci sequence is $$1, 1, 2, 3, 5, 8, 13,\dots$$ That is, each term is obtained as the sum of the two previous ones: $$F_0 = F_1 = 1$$ $$F_n = F_{n-1} + F_{n-2}, \quad n\geq 2$$ Therefore, the function we want, which we have called `Fibonacci()`, is:
###Code
def Fibonacci(index):
if index == 0 or index == 1:
return 1
return Fibonacci(index - 1) + Fibonacci(index - 2)
Fibonacci(index = 7)
Fibonacci(30)
###Output
_____no_output_____
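The naive recursion above recomputes the same terms many times over, which is why `Fibonacci(30)` already takes a noticeable moment. A sketch of the standard fix, caching results with the standard library's `functools.lru_cache` (an addition to the original material):

```python
from functools import lru_cache

@lru_cache(maxsize=None)    # remember every index -> value pair already computed
def fib(index):
    if index == 0 or index == 1:
        return 1
    return fib(index - 1) + fib(index - 2)

print(fib(30))  # 1346269, returned almost instantly
```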
###Markdown
Helper functions Just as functions can call themselves, they can also call other functions. **Helper function.** A function whose purpose is to avoid code repetition. Given the following function
###Code
def sign_sum(x, y):
if x + y > 0:
print("El resultado de sumar {} más {} es positivo".format(x, y))
elif x + y == 0:
print("El resultado de sumar {} más {} es cero".format(x, y))
else:
print("El resultado de sumar {} más {} es negativo".format(x, y))
sign_sum(5, 4)
sign_sum(3, -3)
sign_sum(1, -8)
###Output
El resultado de sumar 5 más 4 es positivo
El resultado de sumar 3 más -3 es cero
El resultado de sumar 1 más -8 es negativo
###Markdown
We can see that the `print` is repeated except for the last word. We could think of creating the following helper function:
###Code
def helper_print(x, y, sign):
print("El resultado de sumar {} más {} es {}.".format(x, y, sign))
###Output
_____no_output_____
###Markdown
If we use the helper function, the `sign_sum()` function would be modified as follows:
###Code
def sign_sum(x, y):
if x + y > 0:
helper_print(x, y, "positivo")
elif x + y == 0:
helper_print(x, y, "cero")
else:
helper_print(x, y, "negativo")
sign_sum(5, 4)
sign_sum(3, -3)
sign_sum(1, -8)
###Output
El resultado de sumar 5 más 4 es positivo.
El resultado de sumar 3 más -3 es cero.
El resultado de sumar 1 más -8 es negativo.
###Markdown
So there is no longer any repeated code, and as we can see, the original function works correctly. `lambda` functions `lambda` functions are a special kind of `Python` function with the following syntax ```lambda parameters: expression``` - They are useful for running functions in a single line - They can take any number of arguments - They have one limitation: they may contain only a single expression. Let's look at some examples --- Example 1 A function that, given a number, adds 10 to it:
###Code
plus10 = lambda x: x + 10
plus10(5)
###Output
_____no_output_____
###Markdown
--- Example 2 A function that computes the product of two numbers:
###Code
prod = lambda x, y: x * y
prod(5, 10)
###Output
_____no_output_____
###Markdown
--- Example 3 A function that, given 3 numbers, computes the discriminant of the quadratic equation. Recall that given a quadratic equation of the form $$ax^2 + bx + c = 0$$ the discriminant is $$\triangle = b^2-4ac$$ and, depending on its sign, it tells us how many real solutions the equation has: - If $\triangle > 0$, there are two different solutions - If $\triangle = 0$, there are two equal solutions - If $\triangle < 0$, there is no real solution
###Code
discriminante = lambda a, b, c: b ** 2 - 4 * a * c
discriminante(1, 2, 1) # Corresponds to the equation x^2 + 2x + 1 = 0, whose only solution is x = -1
###Output
_____no_output_____
###Markdown
The `filter()` function - Applies a function to all the elements of an iterable object - Returns a generator object, which is why we use the `list()` function to convert it to a list - As output, it returns the elements for which applying the function returned `True`. With the help of `lambda` functions, let's apply `filter()` to keep the multiples of 7 from the following list called `nums`
###Code
nums = [49, 57, 62, 147, 2101, 22]
list(filter(lambda x: (x % 7 == 0), nums))
###Output
_____no_output_____
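For comparison only, the same filtering is often written as a list comprehension, which many find more readable than `filter()` plus a `lambda`:

```python
nums = [49, 57, 62, 147, 2101, 22]
# equivalent to list(filter(lambda x: x % 7 == 0, nums))
print([x for x in nums if x % 7 == 0])  # [49, 147]
```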
###Markdown
The function supplied to `filter()` does not have to be a `lambda`; it can be an existing function, or one we create ourselves. With the following lines of code we obtain all the words whose third letter is `s`, using `filter()` and our own function `third_letter_is_s()`:
###Code
def third_letter_is_s(word):
return word[2] == "s"
words = ["castaña", "astronomía", "masa", "bolígrafo", "mando", "tostada"]
list(filter(third_letter_is_s, words))
###Output
_____no_output_____
###Markdown
The `reduce()` function - Repeatedly applies the same function to the elements of an iterable object: 1. Applies the function to the first two elements 2. Applies the function to the result of the previous step and the third element 3. Applies the function to the result of the previous step and the fourth element 4. Continues like this until only one element remains - Returns the resulting value. With the help of `lambda` functions, let's apply `reduce()` to compute the product of all the elements of a list
###Code
from functools import reduce
nums = [1, 2, 3, 4, 5, 6]
reduce(lambda x, y: x * y, nums)
###Output
_____no_output_____
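`reduce()` also accepts an optional initializer as a third argument, used as the starting value of the accumulator; this is handy because without it an empty iterable raises a `TypeError`. A small sketch:

```python
from functools import reduce

print(reduce(lambda x, y: x * y, [], 1))                  # 1 -- initializer returned for an empty list
print(reduce(lambda x, y: x * y, [1, 2, 3, 4, 5, 6], 1))  # 720
```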
###Markdown
Again, the function supplied to `reduce()` does not have to be a `lambda`; it can be an existing function or one we create ourselves. With the following lines of code we obtain the maximum of a given list, using `reduce()` and our own function `bigger_than()`:
###Code
def bigger_than(a, b):
if a > b:
return a
return b
bigger_than(14, 7)
nums = [-10, 5, 7, -3, 16, -30, 2, 33]
reduce(bigger_than, nums)
###Output
_____no_output_____
###Markdown
The `map()` function - Applies the same function to all the elements of an iterable object - Returns a generator object, which is why we use the `list()` function to convert it to a list - As output, it returns the result of applying the function to each element. With the help of `lambda` functions, let's apply `map()` to compute the lengths of the following words
###Code
words = ["zapato", "amigo", "yoyo", "barco", "xilófono", "césped"]
list(map(lambda w: len(w), words))
###Output
_____no_output_____
###Markdown
However, in this particular case there is no need to use a `lambda` function, since we could simply do
###Code
list(map(len, words))
###Output
_____no_output_____
###Markdown
The `sorted()` function - Sorts the elements of the given iterable object according to the function we pass as a parameter - As output, it returns a permutation of the iterable sorted according to the given function. With the help of `lambda` functions, let's apply `sorted()` to sort the `words` list by word length in descending order.
###Code
words = ["zapato", "amigo", "yoyo", "barco", "xilófono", "césped"]
sorted(words, key = lambda x: len(x), reverse = True)
###Output
_____no_output_____
###Markdown
**Observation.** If we wanted to sort in ascending order, we would simply set `reverse = False`; since that is the default value, it is enough to omit the parameter. **Observation.** If the objects being sorted are strings and we give no `key` parameter, they are sorted by the default: ascending alphabetical order.
###Code
sorted(words, key = len)
sorted(words)
###Output
_____no_output_____ |
hw-01/HW-01_GitPracticeDebuggingAndPythonPackages-STUDENT.ipynb | ###Markdown
&#9989; Jake Volek Homework Assignment 1 (Individual) Git practice, debugging practice, unfamiliar data, and new Python packages Goals for this homework assignment By the end of this assignment, you should be able to: * Use Git to create a repository, track changes to the files within the repository, and push those changes to a remote repository. * Debug some Python code. * Work with an unfamiliar data format and successfully load it into your notebook. * Visualize FITS image files using Python. * Read documentation and example code to use a new Python package. * Do a bit of simple image manipulation using Python functions. Work through the following assignment, making sure to follow all of the directions and answer all of the questions. There are **72 points** possible on this assignment. Point values for each part are included in the section headers and question prompts. **This assignment is due roughly two weeks from now at 11:59 pm on Friday, February 12.** It should be uploaded into the "Homework Assignments" submission folder for Homework 1. Submission instructions can be found at the end of the notebook. --- Part 1: Setting up a git repository to track your progress on your assignment (6 points) For this assignment, you're going to create a new **private** GitHub repository that you can use to track your progress on this homework assignment and future assignments. Again, this should be a **private** repository so that your solutions are not publicly accessible. **&#9989; Do the following**: 1. On [GitHub](https://github.com) make sure you are logged into your account and then create a new GitHub repository called `cmse202-s21-turnin`. 2. Once you've initialized the repository on GitHub, **clone a copy of it onto JupyterHub or your computer**. 3. Inside the `cmse202-s21-turnin` repository, create a new folder called `hw-01`. 4. Move this notebook into that **new directory** in your repository then **add it and commit it to your repository**. **Important**: you'll want to make sure you **save and close** the notebook before you do this step and then re-open it once you've added it to your repository. 5. Finally, to test that everything is working, `git push` the notebook file so that it shows up in your **private** GitHub repository on the web. **Important**: Make sure you've added your Professor and your TA as collaborators to your new "turnin" repository with "Read" access so that they can see your assignment. **You should check the Slack channel for your section of the course to get this information.** **Double-check the following**: Make sure that the version of this notebook that you are working on is the same one that you just added to your repository! If you are working on a different copy of the notebook, **none of your changes will be tracked**. If everything went as intended, the file should now show up on your GitHub account in the "`cmse202-s21-turnin`" repository inside the `hw-01` directory that you just created. Periodically, **you'll be asked to commit your changes to the repository and push them to the remote GitHub location**. Of course, you can always commit your changes more often than that, if you wish. It can be good to get into a habit of committing your changes any time you make a significant modification, or when you stop working on the project for a bit. --- Part 2: Bit of code debugging: reading Python and understanding error messages (6 points) As a bit of Python practice, review the following code, read the error outputs, and **fix the code**.
When you fix the code **add a comment to explain what was wrong with the original code**. Fixing errors**Question 1 [6 points]**: Resolve the errors in the following pieces of code and add a comment that explains what was wrong in the first place.
###Code
for i in range(10):
print("The value of i is %i" %i)
#Missing a colon after the for statement.
def compute_fraction(numerator, denominator):
if denominator == 0:
print("Error: Cannot Divide by 0. Enter a new denominator.")
else:
fraction = numerator/denominator
print("The value of the fraction is %f" %fraction)
compute_fraction(5, 0)
#Cannot Divide by 0. Add an error message if denominator is 0.
def compute_fraction(numerator, denominator):
if type(numerator) == str or type(denominator) == str:
print("Cannot do math operations with type string")
else:
fraction = numerator/denominator
print("The value of the fraction is %f" %fraction)
compute_fraction("one", 25)
#Cannot do math operations with strings. Print error message saying must be int or float
import numpy as np
n = np.arange(20)
print("The value of the 10th element is %d" %n[9])
#Use brackets instead of parentheses when indexing.
odd = [1, 3, 5, 7, 9]
even = [2, 4, 6, 8, 10]
for i in odd:
print(i)
for j in even:
print(j)
#Even list was spelled 'evven' in the for statement.
spanish = dict()
spanish['hello'] = 'hola'
spanish['yes'] = 'si'
spanish['one'] = 'uno'
spanish['two'] = 'dos'
spanish['three'] = 'tres'
spanish['red'] = 'rojo'
spanish['black'] = 'negro'
spanish['green'] = 'verde'
spanish['blue'] = 'azul'
spanish['orange'] = 'anaranjado'
print(spanish["hello"])
print(spanish["one"], spanish["two"], spanish["three"])
print(spanish["orange"])
#Orange was not an initialized key in the dictionary.
###Output
hola
uno dos tres
anaranjado
###Markdown
--- &#128721; STOP **Pause to commit your changes to your Git repository!** Take a moment to save your notebook, commit the changes to your Git repository using the commit message "Committing Part 2", and push the changes to GitHub. --- Part 3: Working with unfamiliar data and a new Python library to create an astronomical image (60 points) Since we've been practicing downloading data and repositories from the internet and learning to use new Python packages, you're going to practice doing exactly that in this assignment! This will require using the command line a bit (or running command-line commands from inside your notebook), reading documentation, and looking at code examples you're not familiar with. These are all authentic parts of being an independent computational professional. --- 3.1: Download the data! (4 points) For this part of the assignment you're going to need to download a couple data files from the internet. They are relatively small files, so it shouldn't take too long. If you run into issues accessing the files and it seems to be unrelated to the commands you're using, contact your instructor, TA, or LA. Remember, in order to work with the data in this notebook, you'll need to make sure the data is in the same place as the notebook or you'll need to put the full path to the file in your data reading commands. **DO NOT** commit the data files to your repository! Since you can always download the file again if you're on another machine, it's not necessary to add the file to the repository. In addition, you should be cautious about committing data files to Git repositories because adding large files to a repository means that those large files will have to be downloaded every time you want to clone a new version of the repository, which can be very time-consuming. You should not try to version control large datasets! (Yes, these datasets are fairly small, so you could get away with adding them for this case, but as a rule of thumb, **you should think carefully before you commit data to a repository!**) The files you need are located here: `https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_ir.fits` `https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_red.fits` **&#9989; Question 2 [4 points]:** In the cell below, include the command line commands that you used to download the files (you can either run the command on the command line or inside the jupyter notebook using the correct leading character). If you're not sure how to download them using the command line, download them however you need to in order to get them on to your computer and move on to the later parts of the assignment.
###Code
#!pip install astropy
#Import the needed functions
from astropy.io import fits
import pandas as pd
#Use curl command and fits functions to read the data in
!curl -O https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_ir.fits
ir = fits.open("m42_40min_ir.fits")
ir_data = fits.getdata("m42_40min_ir.fits", ext = 0)
ir_data
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10.8M 100 10.8M 0 0 31.9M 0 --:--:-- --:--:-- --:--:-- 31.9M
###Markdown
--- 3.2: Loading/Reading unfamiliar astronomical data in Python (14 points) You might notice that the files you downloaded have the extension ".fits". This is likely a file extension that you are not familiar with, and it actually indicates that it is a "FITS" file (clever, right?). What does this mean? Do a quick internet search to figure out what type of file this is and what it is commonly used for. **&#9989; Question 3 [2 points]:** Record your findings below. Explain what a FITS file is and what sort of information it commonly is used to store. FITS stands for Flexible Image Transport System. This type of file does not always include only image data: it usually stores scientific images together with associated data. One example of a FITS file being utilized would be astronomers using one to not just look at a picture, but to look at and examine the data that correlates with it. Although you might not have a background in which you've ever worked with FITS files before, you should have all of the skills necessary to interact with this data in Python. Of course, we haven't actually used FITS for anything we've done in class. So, your first task is to figure out how to open and read the file using Python. Time to consult the internet! **&#9989; Question 4 [2 points]:** List any/all packages you found that can load FITS files in Python. Which package do you think is the best one to use? If you found more than one package, how did you decide which one to choose? I used astropy.io.fits in order to open the FITS file. I could not find another package that we could use, but I am sure there are others out there. Using the fits feature inside of astropy.io, I was able to open and load in the data fairly easily and efficiently. **&#9989; Question 5 [2 points]:** Is the package already installed on your computer? If so, how did you determine this? My package was not installed on my computer, and I had to pip install it. I knew it was not installed because when I tried to import the package, I ran into an error, which was an immediate red flag that the package had not yet been installed. **&#9989; Question 6 [2 points]:** If the package isn't already installed, put the command to install the package in the cell below. If the package *is* already installed, what command would you have used to install the package? !pip install astropy Loading the data The data that you're working with are actually images of the Orion Nebula (M42) and come from the [European South Observatory's Digital Sky Survey](http://archive.eso.org/dss/dss) and can be publicly downloaded [here](https://www.spacetelescope.org/projects/fits_liberator/m42data/). The "red" image is from the "$R$" filter from the telescope, which views the sky at red wavelengths, and the "ir" image is from the "$I$" filter, which views the sky at infrared wavelengths.
If you're not familiar with the term "infrared," it literally translates to "below red" and indicates that the wavelength of the light is longer than the red part of the [electromagnetic spectrum](https://en.wikipedia.org/wiki/Electromagnetic_spectrum). **&#9989; Question 7 [6 points]:** Now that you have a Python package that can open FITS files, **read both files into your notebook and print the mean, standard deviation, maximum, and minimum for each file**. **Note:** If you can't figure out how to load the data file, use the following two lines of code as a replacement for the real data (you will lose the points for this question, but you'll be able to continue on in the assignment): ```python image_data_red = np.random.uniform(0,10000,size=(1000,1000)) image_data_ir = np.random.exponential(600,size=(1000,1000))```
###Code
# Read in both the I and R data sets
!curl -O https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_ir.fits
image_data = fits.open("m42_40min_ir.fits")
image_data_ir = fits.getdata("m42_40min_ir.fits", ext = 0)
!curl -O https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_red.fits
image_data_r = fits.open("m42_40min_red.fits")
image_data_red = fits.getdata("m42_40min_red.fits", ext = 0)
import numpy as np
#Print the mean, std, min, and max using the numpy features for both data sets
print("IR")
print("Mean:", round(np.mean(image_data_ir),3))
print("Standard Deviation:", round(np.std(image_data_ir),3))
print("Minimum:", np.min(image_data_ir))
print("Maximum:", np.max(image_data_ir))
print("\nRed")
print("Mean:", round(np.mean(image_data_red),3))
print("Standard Deviation:", round(np.std(image_data_red),3))
print("Minimum:", np.min(image_data_red))
print("Maximum:", np.max(image_data_red))
###Output
IR
Mean: 7709.483
Standard Deviation: 4964.552
Minimum: 1555
Maximum: 27947
Red
Mean: 14795.75
Standard Deviation: 4558.575
Minimum: 1845
Maximum: 22512
###Markdown
--- &#128721; STOP **Pause to commit your changes to your Git repository!** Take a moment to save your notebook, commit the changes to your Git repository using the commit message "Committing part 3.2", and push the changes to GitHub. --- 3.3: Working with the data (22 points) Now that you've got the FITS files loaded into Python, it's time to start exploring the data a bit. You've already computed some simple statistics, but now you should take it a step further and try to understand the distribution of pixel values and plot the images. **&#9989; Question 8 [6 points]:** Using **NumPy**, compute the histogram for both the $R$ filter image and the $I$ filter image using **50 bins**. You can assume that the values in the images represent **pixel brightness** (you do not need to worry about the units for these values). *Important note*: When reviewing the documentation for NumPy's histogram function, make sure you know what the properties are of the variables that are returned from the function! Once you have your histogram values, **make a plot that contains the histograms for both images showing the pixel count as a function of pixel brightness**. Use the `step()` function in matplotlib to make your plot so that it looks like a more traditional histogram. **Make sure you include appropriate labels on your plot!**
###Code
# Import the matplotlib package
import matplotlib.pyplot as plt
%matplotlib inline
#https://www.tutorialspoint.com/numpy/numpy_histogram_using_matplotlib.htm np.histogram()
#https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.step.html step()
#np.histogram returns counts plus 51 bin edges; drop the last edge so the x and y arrays have matching lengths
vals_ir, bins_ir = np.histogram(image_data_ir, bins = 50)
plt.step(bins_ir[:-1], vals_ir)
vals_red, bins_red = np.histogram(image_data_red, bins = 50)
plt.step(bins_red[:-1], vals_red)
plt.xlabel("Pixel Brightness")
plt.ylabel("Count")
plt.title("Pixel Brightness for I and R filter")
plt.legend(["I Filter", "R Filter"])
###Output
_____no_output_____
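For comparison only (the question asks for NumPy's histogram function, so this is just an aside): matplotlib can compute and draw the same histograms in one step with `plt.hist`:

```python
import matplotlib.pyplot as plt

plt.hist(image_data_ir.flatten(), bins=50, histtype="step", label="I Filter")
plt.hist(image_data_red.flatten(), bins=50, histtype="step", label="R Filter")
plt.xlabel("Pixel Brightness")
plt.ylabel("Count")
plt.legend()
```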
###Markdown
**&#9989; Question 9 [2 points]:** In looking at the histograms, what can you say about the properties of the $R$ filter image and the $I$ filter image? Which one is dominated by a large number of dark pixels? Which one has a nearly uniform, non-negligible pixel count for a wide range of pixel brightness? We can see from the histogram that the I filter is dominated by lower brightness, i.e., a higher number of dark pixels, while the R filter has an even amount of pixels at each brightness aside from a peak between 17000 and 20000 in pixel brightness. This means we can assume that the I filter will have a much darker look to it, while the R filter will most likely be brighter. Now that you have a bit of understanding of the properties of the images, let's see if the images themselves match your expectations. **&#9989; Question 10 [6 points]:** **Make two separate figures,** one that includes a plot of the $R$ filter and one that includes a plot of the $I$ filter. **Choose a colormap other than the default, but use the same colormap for each image**. **Make sure you include a colorbar** on the figures so that you can tell what the pixel values are, and **ensure that the (0,0) point is in the lower left corner of the image so that the orientation matches that of this image:**
###Code
# https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.pyplot.imshow.html
#Use subplots to stack the two filter maps in one figure
plt.subplot(2,1,1)
plt.imshow(image_data_ir, cmap = 'YlGn', origin = 'lower')
plt.title("I Filter")
plt.colorbar()
plt.subplot(2,1,2)
plt.imshow(image_data_red, cmap = 'YlGn', origin = 'lower')
plt.title("R Filter")
plt.tight_layout()
plt.colorbar()
###Output
_____no_output_____
###Markdown
**&#9989; Question 11 [2 points]:** Do the resulting images make sense in the context of your histogram plot? Explain why or why not. This makes sense, as from our histogram we see that the R filter was more evenly spread out and there were more areas with higher pixel brightness. This is seen in our figures, as there is an abundance of green and very little yellow (green signifies higher values). Sometimes, when astronomers are trying to understand the properties of an object they are looking at, they create "[color-color diagrams](https://en.wikipedia.org/wiki/Color%E2%80%93color_diagram)". These diagrams define "colors" by computing the difference between two different image filters. You don't need to understand the exact details behind color-color diagrams for this part of the assignment, but you're going to use the data you have available to do something similar! **&#9989; Question 12 [4 points]:** Write a function that takes in two different image arrays, plots the "difference image," and returns the difference image array. **Test out your function so that you produce an "$R$-$I$" image (you want to give the function the red image and the IR image such that the image that is returned is "red" - "IR"), but your function should work for _any two images_.**
###Code
# Define the function name and parameters
def diff_image(arr1, arr2):
#Calculate the difference of the two arrays
final_array = arr2 - arr1
#Plot the difference array with a color bar
plt.imshow(final_array, origin = 'lower', cmap = 'YlGn')
plt.colorbar()
#Return the difference array
return final_array
diff_image(image_data_ir, image_data_red)
###Output
_____no_output_____
###Markdown
**&#9989; Question 13 [2 points]:** What can you learn from your difference image? Which part(s) of the image is(are) brighter in the $I$ filter than in the $R$ filter? From our difference image, we can see that the I filter had much brighter pixels in the middle of our figure. This is shown by the light yellow in our figure that is mapped to around -10000 on our colorbar. --- &#128721; STOP **Pause to commit your changes to your Git repository!** Take a moment to save your notebook, commit the changes to your Git repository using the commit message "Committing part 3.3", and push the changes to GitHub. --- 3.4: Using a specialized package for visualization of data (6 points) Now that you've spent some time exploring the nature of these images, we're going to try to use some of the tools that are unique to the package you've been using to read the FITS files into your notebook. In particular, we're going to try to use the header information associated with the FITS files to make a plot that uses the "World Coordinate System" so that instead of just plotting the image dimensions in terms of pixel position, we'll have a plot where "Right Ascension" is on the $x$-axis and "Declination" is on the $y$-axis. These coordinates are what astronomers use to navigate the sky. **&#9989; Question 14 [6 points]:** Using the documentation page for the new Python package you've been using, or any other examples you can find on the internet, **make a plot using the World Coordinate System** for the $I$ filter image. The package will use the information from the header of the FITS file to define a set of axes that correspond to the physical coordinates of the image. If all goes well, you should end up with something that looks like this: **Important note:** You may end up getting some WARNINGs in your notebook when you do this step, but you should be able to safely ignore those. However, if you run into actual errors, you need to troubleshoot those!
###Code
#https://docs.astropy.org/en/stable/visualization/wcsaxes/
# Import needed modules and read the file in
import matplotlib.pyplot as plt
from astropy.wcs import WCS
from astropy.utils.data import get_pkg_data_filename
ir_data = fits.open("https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_ir.fits")[0]
#Set a world coordinate system
wcs = WCS(ir_data.header)
#Plot the WCS from the I data.
plt.subplot(projection=wcs)
plt.imshow(ir_data.data, origin='lower')
plt.xlabel("Right Ascension")
plt.ylabel("Declination")
cb = plt.colorbar()
cb.set_label("Brightness")
###Output
WARNING: VerifyWarning: Verification reported errors: [astropy.io.fits.verify]
WARNING: VerifyWarning: Card 'SKEW' is not FITS standard (invalid value string: '-8.4510723167663E-02, -1.0226276281220E-01 /Measure of skew'). Fixed 'SKEW' card to meet the FITS standard. [astropy.io.fits.verify]
WARNING: VerifyWarning: Note: astropy.io.fits uses zero-based indexing.
[astropy.io.fits.verify]
WARNING: FITSFixedWarning: PC001001= 9.9999840720579E-01 /PC matrix
this form of the PCi_ja keyword is deprecated, use PCi_ja. [astropy.wcs.wcs]
WARNING: FITSFixedWarning: PC001002= -1.4748678568660E-03 /PC matrix
this form of the PCi_ja keyword is deprecated, use PCi_ja. [astropy.wcs.wcs]
WARNING: FITSFixedWarning: PC002001= 1.7849685815273E-03 /PC matrix
this form of the PCi_ja keyword is deprecated, use PCi_ja. [astropy.wcs.wcs]
WARNING: FITSFixedWarning: PC002002= 9.9999891220190E-01 /PC matrix
this form of the PCi_ja keyword is deprecated, use PCi_ja. [astropy.wcs.wcs]
WARNING: FITSFixedWarning: 'datfix' made the change 'Set MJD-OBS to 50109.000000 from DATE-OBS.
Changed DATE-OBS from '1996/01/27 ' to '1996-01-27T00:00:00.0''. [astropy.wcs.wcs]
###Markdown
--- &#128721; STOP**Pause to commit your changes to your Git repository!**Take a moment to save your notebook, commit the changes to your Git repository using the commit message "Committing part 3.4", and push the changes to GitHub.--- 3.5: Writing Python functions for doing image manipulation and processing (14 points)Now that you've been able to read, manipulate, and display FITS file images, we're going to work on building some Python functions to interact with these files and do some very simple image processing.**&#9989; Question 15 [10 points]:** In order to simplify the process of "observing" the nebula that we've been looking at thus far, you're going to build the following functions:1. A `load_images` function that takes two image filenames as inputs, loads the corresponding FITS files, and returns a **dictionary** where the keys in the dictionary are the filenames and the entries in the dictionary are the corresponding image arrays.2. A `calc_stats` function that takes a dictionary of image information (like the one returned by your `load_images` function) as input and **prints the mean and standard deviation of all images in the dictionary**. Make sure that the print statements indicate which image the values correspond to by using the filenames that are stored in the dictionary.3. A `make_composite` function that takes your two filenames and your dictionary of image information as input and creates a 3D NumPy array that represents a 2D image and its corresponding "R" "G" and "B" values. The Red (R), Green (G), and Blue (B) channels should be defined in the following ways: 1. The red channel should be defined as $$ 1.5 \times \frac{\mathrm{I~filter~image~array}}{\mathrm{The~maximum~of~the~R~filter~image~array}}$$ 2. The green channel should be based on the average pixel values, specifically defined as $$ \frac{\mathrm{(I~filter~image~array + R~filter~image~array)/2}}{\mathrm{The~maximum~of~the~R~filter~image~array}}$$ 3. The blue channel should be defined as $$ \frac{\mathrm{R~filter~image~array}}{\mathrm{The~maximum~of~the~R~filter~image~array}}$$ When this function is called it should **display the "false color" image you've created by using `plt.imshow()`** **A starter function and the code for creating the red channel have been provided for you for the `make_composite` function!** For the `make_composite` function, you may run into issues with some of your image data values not being of the correct type to do some of the math necessary to make the composite image, so you may need to convert some of the values to the appropriate type. Also, make sure you understand what the provided code is doing, especially when it comes to "clipping" the RGB values!
###Code
#Define the load images function and parameters
def load_images(f1, f2):
#Get both data sets
data1 = fits.getdata(f1, ext = 0)
data2 = fits.getdata(f2, ext = 0)
#Create a dictionary that takes file name as keys and data arrays as values
dictionary = {}
dictionary[f1] = data1
dictionary[f2] = data2
return dictionary
#Define calc states function
def calc_stats(im_dict):
#Go through each key in dictionary
for f in im_dict:
#Print the mean and std for each key rounded to 3 decimals.
print("Mean:", round(np.mean(im_dict[f]), 3))
print("Standard Deviation:", round(np.std(im_dict[f]),3))
# Here is a starting point for the "make_composite" function
def make_composite(f1, f2, im_dict):
'''
This function takes in the following:
f1 : file name for the "R" filter image
f2 : file name for the "I" filter image
im_dict : a dictionary that contains the image arrays as entries that match the file names
'''
# Define the array for storing RGB values
rgb = np.zeros((im_dict[f1].shape[0],im_dict[f1].shape[1],3))
# Define a normalization factor for our denominator using the R filter image
norm_factor = im_dict[f1].astype("float").max()
# Compute the red channel values and then clip them to ensure nothing is > 1.0
rgb[:,:,0] = (im_dict[f2].astype("float")/norm_factor) * 1.5
rgb[:,:,0][rgb[:,:,0] > 1.0] = 1.0
#Compute the green channel and make sure nothing is over 1.0
rgb[:,:,1] = ((im_dict[f2].astype("float") + im_dict[f1].astype("float"))/2) / norm_factor
rgb[:,:,1][rgb[:,:,1] > 1.0] = 1.0
#Compute the blue channel and make sure nothing is over 1.0
rgb[:,:,2] = (im_dict[f1].astype("float")/norm_factor)
rgb[:,:,2][rgb[:,:,2] > 1.0] = 1.0
#Plot the rgb values
    plt.imshow(rgb, origin = 'lower')  # a cmap is ignored for RGB arrays, so none is passed
###Output
_____no_output_____
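###Markdown
A quick aside on the "clipping" lines above: boolean-mask assignment like `rgb[...][rgb[...] > 1.0] = 1.0` caps values at 1.0 so `plt.imshow` treats them as valid RGB intensities. `np.clip` does the same thing in one call; a minimal sketch:
###Code
# Boolean-mask clipping and np.clip give identical results
demo = np.array([0.2, 0.9, 1.7])
masked = demo.copy()
masked[masked > 1.0] = 1.0
print(masked, np.clip(demo, None, 1.0))
###Output
_____no_output_____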
###Markdown
**&#9989; Question 16 [4 points]:** Now that you've defined your functions, you're going to put them to use. In the following cell:1. Load the images using your `load_images()` function.2. Compute the basic image statistics by calling the `calc_stats` function.3. Create a false color image using the `make_composite` function. * If all goes well, you should end up with a composite image that looks something like this: **Important note:** It is not required that your final composite image have the Right Ascension and Declination coordinates, but if you figured out how to do this in the previous section, I encourage you to include it!**Another important note**: If you never managed to get the FITS file data loaded in, you can use the fake data image arrays from previously:``` pythonimage_data_red = np.random.uniform(0,10000,size=(1000,1000))image_data_ir = np.random.exponential(600,size=(1000,1000))```
###Code
#Read in the data sets with load_images function
new_dict = load_images("m42_40min_ir.fits", "m42_40min_red.fits")
#Print the stats for each data set in dictionary
calc_stats(new_dict)
#Adjust the RGB values and plot the result using make composite function.
make_composite( "m42_40min_red.fits", "m42_40min_ir.fits", new_dict)
###Output
Mean: 7709.483
Standard Deviation: 4964.552
Mean: 14795.75
Standard Deviation: 4558.575
###Markdown
--- &#128721; STOP**Pause to commit your changes to your Git repository!**Take a moment to save your notebook, commit the changes to your Git repository using the commit message "Committing part 3.5", and push the changes to GitHub.--- --- Assignment wrap-upPlease fill out the form that appears when you run the code below. **You must completely fill this out in order to receive credit for the assignment!**
###Code
from IPython.display import HTML
HTML(
"""
<iframe
src="https://forms.office.com/Pages/ResponsePage.aspx?id=MHEXIi9k2UGSEXQjetVofddd5T-Pwn1DlT6_yoCyuCFUMVlDR0FZWllFS0NEUUc3V1NZVEZUUjRPWC4u"
width="800px"
height="600px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
###Output
_____no_output_____ |
content/object_oriented_programming/inheritance_I.ipynb | ###Markdown
Inheritance I- [Download the lecture notes](https://philchodrow.github.io/PIC16A/content/object_oriented_programming/inheritance_I.ipynb). Often, we face a problem that is *almost* solved by an existing class. For example, suppose I want to use Python to keep track of my grocery shopping. I can use a `dict` to log the items in my pantry:
###Code
pantry = {
"rice (lbs)" : 2,
"harissa (jars)" : 1,
"onions" : 5,
"lemons" : 3
}
###Output
_____no_output_____
###Markdown
Now suppose I go shopping, and I come back with:
###Code
shopping_trip = {
"rice (lbs)" : 1,
"onions" : 2,
"spinach (lbs)" : 1
}
###Output
_____no_output_____
###Markdown
What I'd like to do is add these `dict`s together in the obvious way, obtaining the `dict````{ "rice (lbs)" : 3, "harissa (jars)" : 1, "onions" : 7, "lemons" : 3, "spinach (lbs)" : 1}``` Unfortunately, the native implementation of `dict`s doesn't support this kind of operation. For our first example, we will implement a new class that **inherits** from `dict`, and which supports basic arithmetic. In particular, once we're done, the following will achieve the expected result: ```pantry += shopping_trip```To write a class `classA` that inherits from `classB`, just declare `class classA(classB)`. For example:
###Code
class ArithmeticDict(dict):
pass
###Output
_____no_output_____
###Markdown
Just by including the inheritance, this very boring class already does everything that a `dict` can do. In fact, it IS a `dict` -- that is, it is an *instance* of the `dict` class.
###Code
x = ArithmeticDict({'a' : 1, 'b' : 2})
x, type(x), isinstance(x, dict)
###Output
_____no_output_____
###Markdown
We can do normal `dict` methods:
###Code
x.update({'c' : 3})
x, x['a']
###Output
_____no_output_____
###Markdown
**Pause for a moment:** why were we able to do: ```a = ArithmeticDict({'a' : 1, 'b' : 2})```and get the expected result?
###Code
# behind the scenes
# which __init__() method is this?
a = ArithmeticDict.__init__({'a' : 1, 'b' : 2})
b = dict.__init__({'a' : 1, 'b' : 2})
###Output
_____no_output_____
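###Markdown
Because `ArithmeticDict` defines no `__init__` of its own, Python looks the method up along the method resolution order (MRO) and finds `dict.__init__`. A quick sketch to confirm this (`__mro__` is plain built-in Python, nothing specific to this notebook):
###Code
# The MRO runs ArithmeticDict -> dict -> object
print(ArithmeticDict.__mro__)
# The inherited initializer is literally dict's own
print(ArithmeticDict.__init__ is dict.__init__)
###Output
_____no_output_____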
###Markdown
Of course, this doesn't give us anything new yet. The important part is that we are now able to define new methods that will be available only for the `ArithmeticDict` class.
###Code
class ArithmeticDict(dict):
"""
A dictionary class that supports entrywise addition
"""
def __add__(self, to_add):
"""
Add two ArithmeticDicts entrywise.
"""
new = {}
keys1 = set(self.keys())
keys2 = set(to_add.keys())
all_keys = keys1.union(keys2)
for key in all_keys:
new.update({key : self.get(key,0) + to_add.get(key,0)})
return ArithmeticDict(new)
x = ArithmeticDict({'a' : 1, 'b' : 2})
y = ArithmeticDict({'a' : 1, 'b' : 3, 'c' : 7})
x+y
###Output
_____no_output_____
###Markdown
I'm now able to update my pantry:
###Code
pantry = {
"rice (lbs)" : 2,
"harissa (jars)" : 1,
"onions" : 5,
"lemons" : 3
}
shopping_trip = {
"rice (lbs)" : 1,
"onions" : 2,
"spinach (lbs)" : 1
}
pantry = ArithmeticDict(pantry)
pantry
shopping_trip = ArithmeticDict(shopping_trip)
shopping_trip
pantry += shopping_trip
# OR pantry = pantry + shopping_trip
pantry
###Output
_____no_output_____ |
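###Markdown
A side note on `pantry += shopping_trip`: we never defined `__iadd__`, so Python falls back to `__add__` and rebinds the name. A minimal sketch of that fallback (standard Python behaviour, not specific to this class):
###Code
before = ArithmeticDict({'rice (lbs)': 1})
alias = before
alias += ArithmeticDict({'rice (lbs)': 2})
# __add__ returned a brand-new object, so the original dict is untouched
print(before, alias, alias is before)
###Output
_____no_output_____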
II Machine Learning & Deep Learning/02_Decision Tree. A Supervised Classification Model/02session_decision-tree.ipynb | ###Markdown
02 | Decision Tree. A Supervised Classification Model - Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)- Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄 Discipline to Search Solutions in Google > Apply the following steps when **looking for solutions in Google**:>> 1. **Necessity**: How to load an Excel file in Python?> 2. **Search in Google**: by keywords> - `load excel python`> - ~~how to load excel in python~~> 3. **Solution**: What's the `function()` that loads an Excel file in Python?> - A Function is to Programming what the Atom is to Physics.> - Every time you want to do something in programming> - **You will need a `function()`** to make it happen> - Therefore, you must **detect parentheses `()`**> - Out of all the words that you see on a website> - Because they indicate the presence of a `function()`. Load the Data > Load the Titanic dataset with the below commands> - This dataset contains **people** (rows) aboard the Titanic> - And their **sociological characteristics** (columns)> - The aim of this dataset is to predict the probability to `survive`> - Based on their sociodemographic characteristics.
###Code
import seaborn as sns
df = sns.load_dataset(name='titanic').iloc[:, :4]
df.head()
###Output
_____no_output_____
###Markdown
`DecisionTreeClassifier()` Model in Python Build the Model > 1. **Necessity**: Build the Model> 2. **Google**: How do you search for the solution?> 3. **Solution**: Find the `function()` that makes it happen Code Thinking> Which function computes the Model?> - `fit()`>> How can you **import the function in Python**?
###Code
fit()  # NameError: there is no bare fit() function; fit() belongs to a model object
algo.fit()  # NameError: the object algo does not exist yet
algo = DecisionTreeClassifier()  # NameError: the class has not been imported yet
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit()  # TypeError: fit() is missing the training data X and y
###Output
_____no_output_____
###Markdown
Separate Variables for the Model> Regarding their role:> 1. **Target Variable `y`**>> - [ ] What would you like **to predict**?>> 2. **Explanatory Variable `X`**>> - [ ] Which variable will you use **to explain** the target?
###Code
X = df.drop(columns='survived')
y = df.survived
###Output
_____no_output_____
###Markdown
Finally `fit()` the Model
###Code
model.fit(X, y)  # ValueError: the sex column contains strings, so it must be encoded first
model.fit(X, y)
X
import numpy as np
import pandas as pd
X = pd.get_dummies(X)
model.fit(X,y)
X
a = X.dropna  # missing parentheses: this assigns the method itself, not its result
type(a)  # -> builtin method, not a DataFrame
a.dropna()  # AttributeError: a is a method, not a DataFrame; X.dropna() is what we want (next cell)
X = X.dropna()
X
df = pd.get_dummies(df, drop_first=True).dropna()
X = df.drop(columns='survived')
y = df.survived
X.head()
model = DecisionTreeClassifier(max_depth=4)
model.fit(X,y)
###Output
_____no_output_____
###Markdown
Calculate a Prediction with the Model > - `model.predict_proba()`
###Code
manolo = df[:1]
manolo
manolo_X = X[:1]
manolo_X
model.predict_proba(manolo_X)
model.predict(manolo_X)
###Output
_____no_output_____
###Markdown
Model Visualization > - `tree.plot_tree()`
###Code
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
X.columns
manolo
plt.figure(figsize=(20,10))
plot_tree(decision_tree=model, feature_names=X.columns, filled=True);
39/330  # reading one node off the tree plot by hand (a class count divided by the samples in that node)
###Output
_____no_output_____
###Markdown
Model Interpretation > Why is `sex` the most important column? What does it have to do with **EDA** (Exploratory Data Analysis)?
###Code
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VeUPuFGJHk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
###Output
_____no_output_____
###Markdown
Prediction vs Reality > How good is our model? Precision > - `model.score()`
###Code
model.score(X,y)
dfsel = df[['survived']].copy()
dfsel['pred'] = model.predict(X)
(dfsel.survived - dfsel.pred).mean()
dfsel
df["dif"]= (dfsel.pred - dfsel.survived)
(df.dif**2).sum()
(df.dif**2).sum()/714
1 - (df.dif**2).sum()/714
model.score(X,y)
comp = dfsel.survived == dfsel.pred
comp
comp.sum()
comp.sum()/714
comp.mean()
model.score(X,y)
dfsel
###Output
_____no_output_____
###Markdown
Confusion Matrix > 1. **Sensitivity** (correct prediction on positive value, $y=1$)> 2. **Specificity** (correct prediction on negative value $y=0$).
###Code
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix
dfsel
mat = confusion_matrix(y_true = dfsel.survived, y_pred=dfsel.pred)
mat
a = ConfusionMatrixDisplay(mat)
a.plot()
166/(124+166)
416/(416 + 8)
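# The same quantities computed generically from the confusion matrix above
# (sklearn layout assumed: rows are the true class, columns the predicted class)
tn, fp, fn, tp = mat.ravel()
sensitivity = tp / (tp + fn)  # recall on survived = 1
specificity = tn / (tn + fp)  # recall on survived = 0
sensitivity, specificity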
from sklearn.metrics import classification_report
report = classification_report(y_true = dfsel.survived, y_pred=dfsel.pred)
report
print(report)
###Output
precision recall f1-score support
0 0.77 0.98 0.86 424
1 0.95 0.57 0.72 290
accuracy 0.82 714
macro avg 0.86 0.78 0.79 714
weighted avg 0.84 0.82 0.80 714
###Markdown
02 | Decision Tree. A Supervised Classification Model - Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)- Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄 Discipline to Search Solutions in Google > Apply the following steps when **looking for solutions in Google**:>> 1. **Necessity**: How to load an Excel file in Python?> 2. **Search in Google**: by keywords> - `load excel python`> - ~~how to load excel in python~~> 3. **Solution**: What's the `function()` that loads an Excel file in Python?> - A Function is to Programming what the Atom is to Physics.> - Every time you want to do something in programming> - **You will need a `function()`** to make it happen> - Therefore, you must **detect parentheses `()`**> - Out of all the words that you see on a website> - Because they indicate the presence of a `function()`. Load the Data > Load the Titanic dataset with the below commands> - This dataset contains **people** (rows) aboard the Titanic> - And their **sociological characteristics** (columns)> - The aim of this dataset is to predict the probability to `survive`> - Based on their sociodemographic characteristics.
###Code
import seaborn as sns
df = sns.load_dataset(name='titanic').iloc[:, :4]
df.head()
###Output
_____no_output_____
###Markdown
`DecisionTreeClassifier()` Model in Python Build the Model > 1. **Necessity**: Build the Model> 2. **Google**: How do you search for the solution?> 3. **Solution**: Find the `function()` that makes it happen Code Thinking> Which function computes the Model?> - `fit()`>> How can you **import the function in Python**?
###Code
fit()  # NameError: there is no bare fit() function; fit() belongs to a model object
model.fit()  # NameError: the model object does not exist yet; it is created below
###Output
_____no_output_____
###Markdown
`model = ?`
###Code
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.__dict__
model.fit()  # TypeError: fit() is still missing the training data X and y
###Output
_____no_output_____
###Markdown
Separate Variables for the Model> Regarding their role:> 1. **Target Variable `y`**>> - [ ] What would you like **to predict**?>> 2. **Explanatory Variable `X`**>> - [ ] Which variable will you use **to explain** the target?
###Code
explanatory = df.drop(columns='survived')
target = df.survived
###Output
_____no_output_____
###Markdown
Finally `fit()` the Model
###Code
model.__dict__
model.fit(X=explanatory, y=target)
import pandas as pd
pd.get_dummies(data=df)
df = pd.get_dummies(data=df, drop_first=True)
df
explanatory = df.drop(columns='survived')
target = df.survived
model.fit(X=explanatory, y=target)
df
df.isna().sum()
df.fillna('hola')
df.dropna(inplace=True) # df = df.dropna()
df
df.dropna(inplace=True) # df = df.dropna()
df
explanatory = df.drop(columns='survived')
target = df.survived
model.fit(X=explanatory, y=target)
###Output
_____no_output_____
###Markdown
Calculate a Prediction with the Model > - `model.predict_proba()`
###Code
model.predict_proba()  # TypeError: predict_proba() needs the explanatory data X
###Output
_____no_output_____
###Markdown
Model Visualization > - `tree.plot_tree()` Model Interpretation > Why is `sex` the most important column? What does it have to do with **EDA** (Exploratory Data Analysis)?
###Code
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VeUPuFGJHk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
###Output
_____no_output_____
###Markdown
Prediction vs Reality > How good is our model?
###Code
dfsel = df[['survived']].copy()
dfsel['pred'] = model.predict(X=explanatory)
dfsel.sample(10)
comp = dfsel.survived == dfsel.pred
comp.sum()
comp.sum()/714
comp.mean()
###Output
_____no_output_____
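###Markdown
What we just computed by hand is the accuracy, $$\mathrm{accuracy} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\left[y_i = \hat{y}_i\right],$$ which is exactly what `model.score()` returns for a classifier; the next cell confirms it.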
###Markdown
Precision > - `model.score()`
###Code
model.score(X=explanatory, y=target)
###Output
_____no_output_____
###Markdown
02 | Decision Tree. A Supervised Classification Model - Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)- Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄 Discipline to Search Solutions in Google > Apply the following steps when **looking for solutions in Google**:>> 1. **Necessity**: How to load an Excel file in Python?> 2. **Search in Google**: by keywords> - `load excel python`> - ~~how to load excel in python~~> 3. **Solution**: What's the `function()` that loads an Excel file in Python?> - A Function is to Programming what the Atom is to Physics.> - Every time you want to do something in programming> - **You will need a `function()`** to make it happen> - Therefore, you must **detect parentheses `()`**> - Out of all the words that you see on a website> - Because they indicate the presence of a `function()`. Load the Data > Load the Titanic dataset with the below commands> - This dataset contains **people** (rows) aboard the Titanic> - And their **sociological characteristics** (columns)> - The aim of this dataset is to predict the probability to `survive`> - Based on their sociodemographic characteristics.
###Code
import seaborn as sns
df = sns.load_dataset(name='titanic').iloc[:, :4]
df.head()
###Output
_____no_output_____
###Markdown
`DecisionTreeClassifier()` Model in Python Build the Model > 1. **Necessity**: Build the Model> 2. **Google**: How do you search for the solution?> 3. **Solution**: Find the `function()` that makes it happen Code Thinking> Which function computes the Model?> - `fit()`>> How can you **import the function in Python**? Separate Variables for the Model> Regarding their role:> 1. **Target Variable `y`**>> - [ ] What would you like **to predict**?>> 2. **Explanatory Variable `X`**>> - [ ] Which variable will you use **to explain** the target?
###Code
explanatory = df.drop(columns='survived')
target = df.survived
###Output
_____no_output_____
###Markdown
Finally `fit()` the Model Calculate a Prediction with the Model > - `model.predict_proba()` Model Visualization > - `tree.plot_tree()` Model Interpretation > Why is `sex` the most important column? What does it have to do with **EDA** (Exploratory Data Analysis)?
###Code
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VeUPuFGJHk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
###Output
_____no_output_____ |
CallystoAndDataScience/higher-dimension-visualization.ipynb | ###Markdown
 Higher Dimension Visualizations Run the following code in your Jupyter notebook to import the pandas library and recreate the `pets` DataFrame.
###Code
#load "pandas" library under the alias "pd"
import pandas as pd
#identify the location of our online data
url = "https://raw.githubusercontent.com/callysto/online-courses/master/CallystoAndDataScience/data/pets-bootstrap.csv"
#read csv file from url and create a dataframe
pets = pd.read_csv(url)
#display the head of the data
pets.head()
###Output
_____no_output_____
###Markdown
We learned in previous modules that we can create a scatter plot to evaluate the relationship between two variables. For example, let's say we want to study the relationship between the age of a pet and how long it took for the pet to be adopted.
###Code
import plotly.express as px
import plotly.io as pio
# Create scatter plot
scatter_pet = px.scatter(pets,
x="Time to Adoption (weeks)",
y="Age (years)",
title="Age (in years) and Time to Adoption (weeks) for each pet",
color ="Species",hover_name="Name")
scatter_pet.show()
###Output
_____no_output_____
###Markdown
Suppose now that we are interested in comparing the weight of a pet, and how long it took for the pet to be adopted.
###Code
# Create scatter plot
scatter_pet2 = px.scatter(pets,
x="Time to Adoption (weeks)",
y="Weight (lbs)",
title="Weight (lbs) and Time to Adoption (weeks) for each pet",
color ="Species",hover_name="Name")
scatter_pet2.show()
###Output
_____no_output_____
###Markdown
Although it was relatively easy to create two separate plots, it is worth asking: could we compare both age and weight, relative to how long it took for the pets to be adopted? Yes! In the next section, we will learn how to use `scatter_3d()` within Plotly to do this. Passing three variables. Run the code below to create a 3D scatter plot of the pets' weight, age and time to adoption, where we will use three variables `x, y, z` such that`x = 'Weight (lbs)'``y = 'Age (years)'``z = 'Time to Adoption (weeks)'`
###Code
# Create 3D scatter plot
fig = px.scatter_3d(pets,
x='Weight (lbs)',
y='Age (years)',
z='Time to Adoption (weeks)',
hover_name="Name",title='Age, weight and time to adoption of pets')
fig.show()
###Output
_____no_output_____
###Markdown
Hovering over the plots let us see that Kujo, an 8-year old pet that weighs 172 lbs was adopted after 30 weeks is the pet that took the longest to be adopted. Read more about 3D scatter plots here https://plotly.com/python/3d-scatter-plots/. 4D+ PlotsWhile we cannot visualize more than three dimensions, we can incorporate more than three variables by incorporating different symbols and colours. Let's suppose for instance, that we want to identify the gender of the pet in addition to their age, weight and time to adoption.
###Code
fig = px.scatter_3d(pets,
x='Weight (lbs)',
y='Age (years)',
z='Time to Adoption (weeks)',
color='Gender',
hover_name="Name", title='Age, weight, gender and time to adoption of pets')
fig.show()
###Output
_____no_output_____
###Markdown
We can now see that Kujo is a male pet. Let's add one more dimension by incorporating symbols and let's categorize by species.
###Code
fig = px.scatter_3d(pets,
x='Weight (lbs)',
y='Age (years)',
z='Time to Adoption (weeks)',
color='Gender',
symbol='Species',
opacity=0.5,
hover_name="Name", title='Species, age, weight, gender and time to adoption of pets')
fig.show()
# Save to HTML file
# pio.write_html(fig,"3D_plus_Scatter_plot_species.html", auto_open=True)
###Output
_____no_output_____
###Markdown
We can then see that Kujo is an 8-year old male dog, that weighs 172 lbs, and that it took Kujo 30 weeks to be adopted. Surface PlotsAnother way we can represent three variables in a plot is by using surface plots. As before, we will pass `x,y,z` which contain arrays with datapoints to create a 3D surface.
###Code
import numpy as np
x = np.outer(np.linspace(-2, 2, 30), np.ones(30))
y = x.copy().T # transpose
z = np.cos(x ** 2 + y ** 2)
###Output
_____no_output_____
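###Markdown
The `np.outer` construction above is just one way to build the coordinate grid; `np.meshgrid` gives the same surface and may be more familiar. A quick equivalence check (same grid up to a transpose):
###Code
# Equivalent grid built with meshgrid
lin = np.linspace(-2, 2, 30)
xm, ym = np.meshgrid(lin, lin)
zm = np.cos(xm ** 2 + ym ** 2)
print(np.allclose(x, ym), np.allclose(z, zm))
###Output
_____no_output_____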
###Markdown
Exercise: Explore `x,y,z` by printing their contents.
###Code
print(x)
print(y)
print(z)
###Output
_____no_output_____
###Markdown
We can then plot them using the `Surface` function from `plotly.graph_objects`.
###Code
import plotly.graph_objects as go
trace = go.Surface(x = x, y = y, z =z )
data = [trace]
layout = go.Layout(title = '3D Surface plot')
fig = go.Figure(data = data, layout = layout)  # pass the layout so the title is actually applied
fig.show()
# Write to HTML file
#pio.write_html(fig,"surface_plot3d.html")
###Output
_____no_output_____ |
notebooks/soho.ipynb | ###Markdown
filter transmission profiles here: http://www.ias.u-psud.fr/virgo/virgo%20new/
###Code
import numpy as np
import matplotlib.pyplot as pl
%matplotlib inline
tr, tg, tb = (np.loadtxt('soho/virspmred.dat').T,
np.loadtxt('soho/virspmgrn.dat').T,
np.loadtxt('soho/virspmblu.dat').T)
pl.plot(tr[0]/1e3, tr[1], 'r')
pl.plot(tg[0]/1e3, tg[1], 'g')
pl.plot(tb[0]/1e3, tb[1], 'b')
###Output
_____no_output_____
###Markdown
Grab some Phoenix spectra from the JexoSim archive:
###Code
from astropy.io import fits
spec_mean = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte058.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_cold = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte058.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_hot = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte062.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
wlc = spec_cold[1].data.field('wavelength')
wlh = spec_hot[1].data.field('wavelength')
fc = spec_cold[1].data.field('flux')
fh = spec_hot[1].data.field('flux')
from scipy.interpolate import interp1d
interp_transmission_r = interp1d(tr[0]/1e3, tr[1])
interp_transmission_g = interp1d(tg[0]/1e3, tg[1])
interp_transmission_b = interp1d(tb[0]/1e3, tb[1])
interp_spec_hot = interp1d(wlh, fh)
interp_spec_cold = interp1d(wlc, fc)
int_hot_r = lambda x: interp_transmission_r(x)*interp_spec_hot(x)
int_cold_r = lambda x: interp_transmission_r(x)*interp_spec_cold(x)
int_hot_g = lambda x: interp_transmission_g(x)*interp_spec_hot(x)
int_cold_g = lambda x: interp_transmission_g(x)*interp_spec_cold(x)
int_hot_b = lambda x: interp_transmission_b(x)*interp_spec_hot(x)
int_cold_b = lambda x: interp_transmission_b(x)*interp_spec_cold(x)
fig, ax = pl.subplots(1, 3, figsize=(15, 5))
x_r, x_g, x_b = (np.linspace(np.min(tr[0]/1e3), np.max(tr[0]/1e3), 500),
np.linspace(np.min(tg[0]/1e3), np.max(tg[0]/1e3), 500),
np.linspace(np.min(tb[0]/1e3), np.max(tb[0]/1e3), 500))
ax[0].plot(x_r, interp_spec_hot(x_r)/np.max(interp_spec_hot(x_r)), 'b')
ax[0].plot(x_r, interp_spec_cold(x_r)/np.max(interp_spec_hot(x_r)), 'r')
ax[0].plot(x_r, interp_transmission_r(x_r), 'k', linewidth=3)
ax[1].plot(x_g, interp_spec_hot(x_g)/np.max(interp_spec_hot(x_g)), 'b')
ax[1].plot(x_g, interp_spec_cold(x_g)/np.max(interp_spec_hot(x_g)), 'r')
ax[1].plot(x_g, interp_transmission_g(x_g), 'k', linewidth=3)
ax[2].plot(x_b, interp_spec_hot(x_b)/np.max(interp_spec_hot(x_b)), 'b')
ax[2].plot(x_b, interp_spec_cold(x_b)/np.max(interp_spec_hot(x_b)), 'r')
ax[2].plot(x_b, interp_transmission_b(x_b), 'k', linewidth=3)
from scipy.integrate import quad
flux_hot_r = quad(int_hot_r, np.min(tr[0])/1e3, np.max(tr[0])/1e3)
flux_cold_r = quad(int_cold_r, np.min(tr[0])/1e3, np.max(tr[0])/1e3)
flux_hot_g = quad(int_hot_g, np.min(tg[0])/1e3, np.max(tg[0])/1e3)
flux_cold_g = quad(int_cold_g, np.min(tg[0])/1e3, np.max(tg[0])/1e3)
flux_hot_b = quad(int_hot_b, np.min(tb[0])/1e3, np.max(tb[0])/1e3)
flux_cold_b = quad(int_cold_b, np.min(tb[0])/1e3, np.max(tb[0])/1e3)
alpha_1 = (flux_hot_r[0] - flux_cold_r[0]) / flux_cold_r[0]
alpha_2 = (flux_hot_g[0] - flux_cold_g[0]) / flux_cold_g[0]
alpha_3 = (flux_hot_b[0] - flux_cold_b[0]) / flux_cold_b[0]
###Output
_____no_output_____
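###Markdown
For reference, each `alpha` above is the fractional flux contrast between the hot and cold spectra, integrated through one filter's transmission curve (my reading of the code, since the notebook states no formula): $$\alpha_b = \frac{\int T_b(\lambda)\,F_\mathrm{hot}(\lambda)\,d\lambda - \int T_b(\lambda)\,F_\mathrm{cold}(\lambda)\,d\lambda}{\int T_b(\lambda)\,F_\mathrm{cold}(\lambda)\,d\lambda}$$ where $T_b$ is the transmission of band $b$ (red, green, or blue).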
###Markdown
Let's take a look at the SOHO data
###Code
from astropy.time import Time
blue = fits.open('soho/blue.fits')
green = fits.open('soho/green.fits')
red = fits.open('soho/red.fits')
rgb = red, green, blue
rgb = [f[0].data for f in rgb]
mask = np.all([np.isfinite(f) for f in rgb], axis=0)
start = blue[0].header['DATES'][0:9]
end = blue[0].header['DATES'][14:]
start, end = Time([start, end]).jd
t = np.linspace(start, end, np.shape(rgb)[1]) - start
t = t[mask]
rgb = [f[mask].astype('float64') for f in rgb]
flux = np.sum(rgb, axis=0)/np.shape(rgb)[0]
# choose an arbitrary starting index and number of points to
# select a segment of the (very large) SOHO timeseries
i = 18273
n = 2000
t = t[i:i+n] - np.mean(t[i:i+n])
# in parts per part
rgb = [f[i:i+n]/1e6 for f in rgb]
fig, ax = pl.subplots(3, 1)
ax[0].plot(t, rgb[0], 'r')
ax[1].plot(t, rgb[1], 'g')
ax[2].plot(t, rgb[2], 'b')
#[x.set_ylim(-0.5, 0.5) for x in ax]
np.std(np.sum(rgb, axis=0) / 3) * 1e6
(1 / (886 * 2 * np.pi)) * 60 * 60 * 24
(1 - rgb[0]/flux_cold_r[0])/alpha_1
fig, ax = pl.subplots(3, 1)
ax[0].plot(t, rgb[0]/alpha_1, 'r')
ax[1].plot(t, rgb[1]/alpha_2, 'r')
ax[2].plot(t, rgb[2]/alpha_3, 'r')
#[x.set_ylim(-2, 2) for x in ax]
xr, xg, xb = rgb  # unpack all three bands (xg and xb are used in the plots below)
pl.plot(t, xr - np.mean(xr), 'r.', alpha=0.3)
pl.plot(t, xg - np.mean(xg), 'g.', alpha=0.3)
pl.plot(t, xb - np.mean(xb), 'b.', alpha=0.3)
###Output
_____no_output_____
###Markdown
Looks good I guess! Let's fit a GP to the covering fraction so that we can use that to make up some variability for our targets.
###Code
from scipy.optimize import minimize
import celerite2
from celerite2 import terms
x = xb
granulation_term = terms.SHOTerm(S0=5e-10, w0=1e3, Q=1/np.sqrt(2))
gp = celerite2.GaussianProcess(granulation_term, mean=0.0)
yerr = 20 * 1e-6
def set_params(params, gp):
gp.mean = params[0]
theta = np.exp(params[1:])
gp.kernel = terms.SHOTerm(S0=theta[0], w0=theta[1], Q=1/np.sqrt(2))
gp.compute(t, diag = yerr ** 2 + theta[2], quiet=True)
return gp
def neg_log_like(params, gp):
gp = set_params(params, gp)
return -gp.log_likelihood(np.array(x))
initial_params = [0.0, np.log(5e-10), np.log(1e3), np.log(1e-6)]
print(neg_log_like(initial_params, gp))
soln = minimize(neg_log_like, initial_params, method="L-BFGS-B", args=(gp,))
opt_gp = set_params(soln.x, gp)
print(soln)
print(np.exp(soln.x[1:]))
f = np.fft.rfftfreq(len(x), t[1] - t[0])
fft = np.fft.rfft(x)
fft = fft*np.conj(fft)
powerfft = fft.real / len(t)**2
ampfft = np.sqrt(powerfft * (60 * 60 * 24) / (2*np.pi)) * 1e6
psd = opt_gp.kernel.terms[0].get_psd(2*np.pi*f)
psd_amp = np.sqrt(psd * (60*60*24) / (2*np.pi)) * 1e6
pl.figure(figsize=(12, 6))
pl.loglog(f, psd_amp, '-')
pl.loglog(f, ampfft, 'k.', alpha=0.5)
#pl.ylim(1, 1e4)
t = np.linspace(0, 50000, 50000) / (60*60*24)
mean_temp = 4500
cold_temp = 4700
hot_temp = 4900
spec_num = lambda x: str(int(np.round(x/100)))  # np.int was removed from NumPy; use the builtin int
# generate a realization of the covering fraction GP
params = [0.0, -2.26671849e+01, 6.82128886e+00, -np.inf]
term = terms.SHOTerm(S0=0, w0=0, Q=0)
gp = celerite2.GaussianProcess(term, mean=0.0)
gp = set_params(params, gp)
xc = gp.dot_tril(y = np.random.randn(len(t))) + 0.5
# get the spectra
spec_mean = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte0' + spec_num(mean_temp) + '.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_cold = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte0' + spec_num(cold_temp) + '.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_hot = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte0' + spec_num(hot_temp) + '.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
wlm = spec_mean[1].data.field('wavelength')
wlc = spec_cold[1].data.field('wavelength')
wlh = spec_hot[1].data.field('wavelength')
fm = spec_mean[1].data.field('flux')
fc = spec_cold[1].data.field('flux')
fh = spec_hot[1].data.field('flux')
st = np.where(np.isclose(wlm, 0.6))[0][0]
end = np.where(np.isclose(wlm, 5.3))[0][1]
wl = np.linspace(wlm[st], wlm[end], 1000)
fm_interp = interp1d(wlm, fm)
fc_interp = interp1d(wlc, fc)
fh_interp = interp1d(wlh, fh)
fc_norm = fc_interp(wl) / fm_interp(wl)
fh_norm = fh_interp(wl) / fm_interp(wl)
data = (fc_norm[:, None] * xc) + (fh_norm[:, None] * (1-xc))
pl.figure(figsize=(12, 6))
pl.plot(wl, data[:,49107], '-')
#pl.xlim(0, 5)
#pl.ylim(0, 10)
###Output
_____no_output_____
###Markdown
filter transmission profiles here: http://www.ias.u-psud.fr/virgo/virgo%20new/
###Code
import numpy as np
import matplotlib.pyplot as pl
%matplotlib inline
tr, tg, tb = (np.loadtxt('soho/virspmred.dat').T,
np.loadtxt('soho/virspmgrn.dat').T,
np.loadtxt('soho/virspmblu.dat').T)
pl.plot(tr[0]/1e3, tr[1], 'r')
pl.plot(tg[0]/1e3, tg[1], 'g')
pl.plot(tb[0]/1e3, tb[1], 'b')
###Output
_____no_output_____
###Markdown
Grab some Phoenix spectra from the JexoSim archive:
###Code
from astropy.io import fits
spec_mean = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte058.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_cold = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte058.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_hot = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte062.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
wlc = spec_cold[1].data.field('wavelength')
wlh = spec_hot[1].data.field('wavelength')
fc = spec_cold[1].data.field('flux')
fh = spec_hot[1].data.field('flux')
from scipy.interpolate import interp1d
interp_transmission_r = interp1d(tr[0]/1e3, tr[1])
interp_transmission_g = interp1d(tg[0]/1e3, tg[1])
interp_transmission_b = interp1d(tb[0]/1e3, tb[1])
interp_spec_hot = interp1d(wlh, fh)
interp_spec_cold = interp1d(wlc, fc)
int_hot_r = lambda x: interp_transmission_r(x)*interp_spec_hot(x)
int_cold_r = lambda x: interp_transmission_r(x)*interp_spec_cold(x)
int_hot_g = lambda x: interp_transmission_g(x)*interp_spec_hot(x)
int_cold_g = lambda x: interp_transmission_g(x)*interp_spec_cold(x)
int_hot_b = lambda x: interp_transmission_b(x)*interp_spec_hot(x)
int_cold_b = lambda x: interp_transmission_b(x)*interp_spec_cold(x)
fig, ax = pl.subplots(1, 3, figsize=(15, 5))
x_r, x_g, x_b = (np.linspace(np.min(tr[0]/1e3), np.max(tr[0]/1e3), 500),
np.linspace(np.min(tg[0]/1e3), np.max(tg[0]/1e3), 500),
np.linspace(np.min(tb[0]/1e3), np.max(tb[0]/1e3), 500))
ax[0].plot(x_r, interp_spec_hot(x_r)/np.max(interp_spec_hot(x_r)), 'b')
ax[0].plot(x_r, interp_spec_cold(x_r)/np.max(interp_spec_hot(x_r)), 'r')
ax[0].plot(x_r, interp_transmission_r(x_r), 'k', linewidth=3)
ax[1].plot(x_g, interp_spec_hot(x_g)/np.max(interp_spec_hot(x_g)), 'b')
ax[1].plot(x_g, interp_spec_cold(x_g)/np.max(interp_spec_hot(x_g)), 'r')
ax[1].plot(x_g, interp_transmission_g(x_g), 'k', linewidth=3)
ax[2].plot(x_b, interp_spec_hot(x_b)/np.max(interp_spec_hot(x_b)), 'b')
ax[2].plot(x_b, interp_spec_cold(x_b)/np.max(interp_spec_hot(x_b)), 'r')
ax[2].plot(x_b, interp_transmission_b(x_b), 'k', linewidth=3)
from scipy.integrate import quad
flux_hot_r = quad(int_hot_r, np.min(tr[0])/1e3, np.max(tr[0])/1e3)
flux_cold_r = quad(int_cold_r, np.min(tr[0])/1e3, np.max(tr[0])/1e3)
flux_hot_g = quad(int_hot_g, np.min(tg[0])/1e3, np.max(tg[0])/1e3)
flux_cold_g = quad(int_cold_g, np.min(tg[0])/1e3, np.max(tg[0])/1e3)
flux_hot_b = quad(int_hot_b, np.min(tb[0])/1e3, np.max(tb[0])/1e3)
flux_cold_b = quad(int_cold_b, np.min(tb[0])/1e3, np.max(tb[0])/1e3)
alpha_1 = (flux_hot_r[0] - flux_cold_r[0]) / flux_cold_r[0]
alpha_2 = (flux_hot_g[0] - flux_cold_g[0]) / flux_cold_g[0]
alpha_3 = (flux_hot_b[0] - flux_cold_b[0]) / flux_cold_b[0]
###Output
_____no_output_____
###Markdown
Let's take a look at the SOHO data
###Code
from astropy.time import Time
blue = fits.open('soho/blue.fits')
green = fits.open('soho/green.fits')
red = fits.open('soho/red.fits')
rgb = red, green, blue
rgb = [f[0].data for f in rgb]
mask = np.all([np.isfinite(f) for f in rgb], axis=0)
start = blue[0].header['DATES'][0:9]
end = blue[0].header['DATES'][14:]
start, end = Time([start, end]).jd
t = np.linspace(start, end, np.shape(rgb)[1]) - start
t = t[mask]
rgb = [f[mask].astype('float64') for f in rgb]
flux = np.sum(rgb, axis=0)/np.shape(rgb)[0]
# choose an arbitrary starting index and number of points to
# select a segment of the (very large) SOHO timeseries
i = 18273
n = 2000
t = t[i:i+n] - np.mean(t[i:i+n])
# in parts per part
rgb = [f[i:i+n]/1e6 for f in rgb]
fig, ax = pl.subplots(3, 1)
ax[0].plot(t, rgb[0], 'r')
ax[1].plot(t, rgb[1], 'g')
ax[2].plot(t, rgb[2], 'b')
#[x.set_ylim(-0.5, 0.5) for x in ax]
(1 - rgb[0]/flux_cold_r[0])/alpha_1
fig, ax = pl.subplots(3, 1)
ax[0].plot(t, rgb[0]/alpha_1, 'r')
ax[1].plot(t, rgb[1]/alpha_2, 'r')
ax[2].plot(t, rgb[2]/alpha_3, 'r')
#[x.set_ylim(-2, 2) for x in ax]
xr, xg, xb = rgb  # unpack all three bands (xg and xb are used in the plots below)
pl.plot(t, xr - np.mean(xr), 'r.', alpha=0.3)
pl.plot(t, xg - np.mean(xg), 'g.', alpha=0.3)
pl.plot(t, xb - np.mean(xb), 'b.', alpha=0.3)
###Output
_____no_output_____
###Markdown
Looks good I guess! Let's fit a GP to the covering fraction so that we can use that to make up some variability for our targets.
###Code
from scipy.optimize import minimize
import celerite2
from celerite2 import terms
x = xb
granulation_term = terms.SHOTerm(S0=5e-10, w0=1e3, Q=1/np.sqrt(2))
gp = celerite2.GaussianProcess(granulation_term, mean=0.0)
yerr = 20 * 1e-6
def set_params(params, gp):
gp.mean = params[0]
theta = np.exp(params[1:])
gp.kernel = terms.SHOTerm(S0=theta[0], w0=theta[1], Q=1/np.sqrt(2))
gp.compute(t, diag = yerr ** 2 + theta[2], quiet=True)
return gp
def neg_log_like(params, gp):
gp = set_params(params, gp)
return -gp.log_likelihood(np.array(x))
initial_params = [0.0, np.log(5e-10), np.log(1e3), np.log(1e-6)]
print(neg_log_like(initial_params, gp))
soln = minimize(neg_log_like, initial_params, method="L-BFGS-B", args=(gp,))
opt_gp = set_params(soln.x, gp)
print(soln)
print(np.exp(soln.x[1:]))
f = np.fft.rfftfreq(len(x), t[1] - t[0])
fft = np.fft.rfft(x)
fft = fft*np.conj(fft)
powerfft = fft.real / len(t)**2
ampfft = np.sqrt(powerfft * (60 * 60 * 24) / (2*np.pi)) * 1e6
psd = opt_gp.kernel.terms[0].get_psd(2*np.pi*f)
psd_amp = np.sqrt(psd * (60*60*24) / (2*np.pi)) * 1e6
pl.figure(figsize=(12, 6))
pl.loglog(f, psd_amp, '-')
pl.loglog(f, ampfft, 'k.', alpha=0.5)
#pl.ylim(1, 1e4)
t = np.linspace(0, 50000, 50000) / (60*60*24)
mean_temp = 4500
cold_temp = 4700
hot_temp = 4900
spec_num = lambda x: str(int(np.round(x/100)))  # np.int was removed from NumPy; use the builtin int
# generate a realization of the covering fraction GP
params = [0.0, -2.26671849e+01, 6.82128886e+00, -np.inf]
term = terms.SHOTerm(S0=0, w0=0, Q=0)
gp = celerite2.GaussianProcess(term, mean=0.0)
gp = set_params(params, gp)
xc = gp.dot_tril(y = np.random.randn(len(t))) + 0.5
# get the spectra
spec_mean = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte0' + spec_num(mean_temp) + '.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_cold = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte0' + spec_num(cold_temp) + '.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
spec_hot = fits.open('../JexoSim/archive/BT-Settl_M-0.0a+0.0/lte0' + spec_num(hot_temp) + '.0-4.5-0.0a+0.0.BT-Settl.spec.fits.gz')
wlm = spec_mean[1].data.field('wavelength')
wlc = spec_cold[1].data.field('wavelength')
wlh = spec_hot[1].data.field('wavelength')
fm = spec_mean[1].data.field('flux')
fc = spec_cold[1].data.field('flux')
fh = spec_hot[1].data.field('flux')
st = np.where(np.isclose(wlm, 0.6))[0][0]
end = np.where(np.isclose(wlm, 5.3))[0][1]
wl = np.linspace(wlm[st], wlm[end], 1000)
fm_interp = interp1d(wlm, fm)
fc_interp = interp1d(wlc, fc)
fh_interp = interp1d(wlh, fh)
fc_norm = fc_interp(wl) / fm_interp(wl)
fh_norm = fh_interp(wl) / fm_interp(wl)
data = (fc_norm[:, None] * xc) + (fh_norm[:, None] * (1-xc))
pl.figure(figsize=(12, 6))
pl.plot(wl, data[:,49107], '-')
#pl.xlim(0, 5)
#pl.ylim(0, 10)
pl.imshow(data, aspect='auto')
b1 = np.sum(data[1:100], axis=0)
b2 = np.sum(data[100:300], axis=0)
b3 = np.sum(data[300:], axis=0)
wn = np.random.randn(len(b1)) * 60 * 1e-6
pl.figure(figsize=(10, 5))
pl.plot(t, b1/np.mean(b1) + wn - 0.001, '-')
pl.plot(t, b2/np.mean(b2) + wn, '-')
pl.plot(t, b3/np.mean(b3) + wn + 0.001, '-')
len(data)
###Output
_____no_output_____ |
docs/tutorials/analysis/3D/simulate_3d.ipynb | ###Markdown
3D map simulation Prerequisites- Knowledge of 3D extraction and datasets used in gammapy, see for instance the [first analysis tutorial](../../starting/analysis_1.ipynb) ContextTo simulate a specific observation, it is not always necessary to simulate the full photon list. For many use cases, simulating directly a reduced binned dataset is enough: the IRFs reduced in the correct geometry are combined with a source model to predict an actual number of counts per bin. The latter is then used to simulate a reduced dataset using the Poisson probability distribution.This can be done to check the feasibility of a measurement (performance / sensitivity study), to test whether fitted parameters really provide a good fit to the data etc.Here we will see how to perform a 3D simulation of a CTA observation, assuming both the spectral and spatial morphology of an observed source.**Objective: simulate a 3D observation of a source with CTA using the CTA 1DC response and fit it with the assumed source model.** Proposed approach:Here we can't use the regular observation objects that are connected to a `DataStore`. Instead we will create a fake `~gammapy.data.Observation` that contains some pointing information and the CTA 1DC IRFs (that are loaded with `~gammapy.irf.load_cta_irfs`).Then we will create a `~gammapy.datasets.MapDataset` geometry and fill it with the `~gammapy.makers.MapDatasetMaker`.Then we will be able to define a model consisting of a `~gammapy.modeling.models.PowerLawSpectralModel` and a `~gammapy.modeling.models.GaussianSpatialModel`. We will assign it to the dataset and fake the count data. Imports and versions
###Code
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
PowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.makers import MapDatasetMaker, SafeMaskMaker
from gammapy.modeling import Fit
from gammapy.data import Observation
from gammapy.datasets import MapDataset
!gammapy info --no-envvar --no-dependencies --no-system
###Output
_____no_output_____
###Markdown
Simulation We will simulate using the CTA-1DC IRFs shipped with gammapy. Note that for dedicated CTA simulations, you can simply use [`Observation.from_caldb()`]() without having to externally load the IRFs.
###Code
# Loading IRFs
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
# Define the observation parameters (typically the observation duration and the pointing position):
livetime = 2.0 * u.hr
pointing = SkyCoord(0, 0, unit="deg", frame="galactic")
# Define map geometry for binned simulation
energy_reco = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0),
binsz=0.02,
width=(6, 6),
frame="galactic",
axes=[energy_reco],
)
# It is usually useful to have a separate binning for the true energy axis
energy_true = MapAxis.from_edges(
    np.logspace(-1.5, 1.5, 30), unit="TeV", name="energy_true", interp="log"
)
empty = MapDataset.create(
    geom, name="dataset-simu", energy_axis_true=energy_true
)
# Define the sky model used to simulate the data.
# Here we use a Gaussian spatial model and a Power Law spectral model.
spatial_model = GaussianSpatialModel(
lon_0="0.2 deg", lat_0="0.1 deg", sigma="0.3 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=3, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
model_simu = SkyModel(
spatial_model=spatial_model,
spectral_model=spectral_model,
name="model-simu",
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-simu")
models = Models([model_simu, bkg_model])
print(models)
###Output
_____no_output_____
###Markdown
Now comes the main part of dataset simulation. We create an in-memory observation and an empty dataset. We then predict the number of counts for the given model, and Poisson fluctuate it using `fake()` to make a simulated counts map. Keep in mind that it is important to specify the `selection` of the maps that you want to produce.
###Code
# Create an in-memory observation
obs = Observation.create(pointing=pointing, livetime=livetime, irfs=irfs)
print(obs)
# Make the MapDataset
maker = MapDatasetMaker(selection=["exposure", "background", "psf", "edisp"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=4.0 * u.deg)
dataset = maker.run(empty, obs)
dataset = maker_safe_mask.run(dataset, obs)
print(dataset)
# Add the model to the dataset and Poisson fluctuate
dataset.models = models
dataset.fake()
# Do a print on the dataset - there is now a counts maps
print(dataset)
###Output
_____no_output_____
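###Markdown
Under the hood, `fake()` draws every counts bin from a Poisson distribution whose mean is the predicted counts map. A minimal NumPy sketch of the same idea (`dataset.npred()` is the gammapy method returning the predicted counts map; the seed is an arbitrary choice):
###Code
# Conceptual equivalent of dataset.fake(): Poisson-fluctuate the predicted counts
npred = dataset.npred()
rng = np.random.default_rng(42)
fake_counts = rng.poisson(npred.data)
print(fake_counts.sum(), npred.data.sum())
###Output
_____no_output_____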
###Markdown
Now use this dataset as you would in all standard analysis. You can plot the maps, or proceed with your custom analysis. In the next section, we show the standard 3D fitting as in [analysis_3d](analysis_3d.ipynb).
###Code
# To plot, eg, counts:
dataset.counts.smooth(0.05 * u.deg).plot_interactive(
add_cbar=True, stretch="linear"
)
###Output
_____no_output_____
###Markdown
FitIn this section, we do a usual 3D fit with the same model used to simulate the data and see the stability of the simulations. Often, it is useful to simulate many such datasets and look at the distribution of the reconstructed parameters.
###Code
models_fit = models.copy()
# We do not want to fit the background in this case, so we will freeze the parameters
models_fit["dataset-simu-bkg"].spectral_model.norm.frozen = True
models_fit["dataset-simu-bkg"].spectral_model.tilt.frozen = True
dataset.models = models_fit
print(dataset.models)
%%time
fit = Fit([dataset])
result = fit.run(optimize_opts={"print_level": 1})
dataset.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5)
###Output
_____no_output_____
###Markdown
Compare the injected and fitted models:
###Code
print(
"True model: \n",
model_simu,
"\n\n Fitted model: \n",
models_fit["model-simu"],
)
###Output
_____no_output_____
###Markdown
Get the errors on the fitted parameters from the parameter table
###Code
result.parameters.to_table()
###Output
_____no_output_____
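###Markdown
To actually look at the distribution of the reconstructed parameters, one can repeat the fake-and-fit loop. A minimal sketch under the same setup (the five iterations are an arbitrary choice to keep the runtime short):
###Code
# Simulate and refit a few times, collecting the fitted amplitude each time
amplitudes = []
for _ in range(5):
    dataset.models = models.copy()
    dataset.fake()
    dataset.models["dataset-simu-bkg"].spectral_model.norm.frozen = True
    dataset.models["dataset-simu-bkg"].spectral_model.tilt.frozen = True
    Fit([dataset]).run()
    amplitudes.append(
        dataset.models["model-simu"].spectral_model.amplitude.value
    )
print(amplitudes)
###Output
_____no_output_____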
###Markdown
3D map simulation Prerequisites- Knowledge of 3D extraction and datasets used in gammapy, see for instance the [first analysis tutorial](../../starting/analysis_1.ipynb) ContextTo simulate a specific observation, it is not always necessary to simulate the full photon list. For many use cases, simulating directly a reduced binned dataset is enough: the IRFs reduced in the correct geometry are combined with a source model to predict an actual number of counts per bin. The latter is then used to simulate a reduced dataset using the Poisson probability distribution.This can be done to check the feasibility of a measurement (performance / sensitivity study), to test whether fitted parameters really provide a good fit to the data etc.Here we will see how to perform a 3D simulation of a CTA observation, assuming both the spectral and spatial morphology of an observed source.**Objective: simulate a 3D observation of a source with CTA using the CTA 1DC response and fit it with the assumed source model.** Proposed approachHere we can't use the regular observation objects that are connected to a `DataStore`. Instead we will create a fake `~gammapy.data.Observation` that contains some pointing information and the CTA 1DC IRFs (that are loaded with `~gammapy.irf.load_cta_irfs`).Then we will create a `~gammapy.datasets.MapDataset` geometry and fill it with the `~gammapy.makers.MapDatasetMaker`.Then we will be able to define a model consisting of a `~gammapy.modeling.models.PowerLawSpectralModel` and a `~gammapy.modeling.models.GaussianSpatialModel`. We will assign it to the dataset and fake the count data. Imports and versions
###Code
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
PowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.makers import MapDatasetMaker, SafeMaskMaker
from gammapy.modeling import Fit
from gammapy.data import Observation, observatory_locations
from gammapy.datasets import MapDataset
!gammapy info --no-envvar --no-dependencies --no-system
###Output
_____no_output_____
###Markdown
Simulation We will simulate using the CTA-1DC IRFs shipped with gammapy. Note that for dedicated CTA simulations, you can simply use [`Observation.from_caldb()`]() without having to externally load the IRFs.
###Code
# Loading IRFs
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
# Define the observation parameters (typically the observation duration and the pointing position):
livetime = 2.0 * u.hr
pointing = SkyCoord(0, 0, unit="deg", frame="galactic")
# Define map geometry for binned simulation
energy_reco = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0),
binsz=0.02,
width=(6, 6),
frame="galactic",
axes=[energy_reco],
)
# It is usually useful to have a separate binning for the true energy axis
energy_true = MapAxis.from_edges(
np.logspace(-1.5, 1.5, 30), unit="TeV", name="energy_true", interp="log"
)
empty = MapDataset.create(
geom, name="dataset-simu", energy_axis_true=energy_true
)
# Define the sky model used to simulate the data.
# Here we use a Gaussian spatial model and a Power Law spectral model.
spatial_model = GaussianSpatialModel(
lon_0="0.2 deg", lat_0="0.1 deg", sigma="0.3 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=3, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
model_simu = SkyModel(
spatial_model=spatial_model,
spectral_model=spectral_model,
name="model-simu",
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-simu")
models = Models([model_simu, bkg_model])
print(models)
###Output
_____no_output_____
###Markdown
Now comes the main part of dataset simulation. We create an in-memory observation and an empty dataset. We then predict the number of counts for the given model, and Poisson fluctuate it using `fake()` to make a simulated counts map. Keep in mind that it is important to specify the `selection` of the maps that you want to produce.
###Code
# Create an in-memory observation
location = observatory_locations['cta_south']
obs = Observation.create(pointing=pointing, livetime=livetime, irfs=irfs, location=location)
print(obs)
# Make the MapDataset
maker = MapDatasetMaker(selection=["exposure", "background", "psf", "edisp"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=4.0 * u.deg)
dataset = maker.run(empty, obs)
dataset = maker_safe_mask.run(dataset, obs)
print(dataset)
# Add the model to the dataset and Poisson fluctuate
dataset.models = models
dataset.fake()
# Do a print on the dataset - there is now a counts maps
print(dataset)
###Output
_____no_output_____
###Markdown
Now use this dataset as you would in all standard analysis. You can plot the maps, or proceed with your custom analysis. In the next section, we show the standard 3D fitting as in [analysis_3d](analysis_3d.ipynb).
###Code
# To plot, eg, counts:
dataset.counts.smooth(0.05 * u.deg).plot_interactive(
add_cbar=True, stretch="linear"
)
###Output
_____no_output_____
###Markdown
FitIn this section, we do a usual 3D fit with the same model used to simulate the data and see the stability of the simulations. Often, it is useful to simulate many such datasets and look at the distribution of the reconstructed parameters.
###Code
models_fit = models.copy()
# We do not want to fit the background in this case, so we will freeze the parameters
models_fit["dataset-simu-bkg"].spectral_model.norm.frozen = True
models_fit["dataset-simu-bkg"].spectral_model.tilt.frozen = True
dataset.models = models_fit
print(dataset.models)
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run(datasets=[dataset])
dataset.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5)
###Output
_____no_output_____
###Markdown
Compare the injected and fitted models:
###Code
print(
"True model: \n",
model_simu,
"\n\n Fitted model: \n",
models_fit["model-simu"],
)
###Output
_____no_output_____
###Markdown
Get the errors on the fitted parameters from the parameter table
###Code
result.parameters.to_table()
###Output
_____no_output_____
###Markdown
3D map simulation Prerequisites- Knowledge of 3D extraction and datasets used in gammapy, see for instance the [first analysis tutorial](../../starting/analysis_1.ipynb) ContextTo simulate a specific observation, it is not always necessary to simulate the full photon list. For many use cases, simulating directly a reduced binned dataset is enough: the IRFs reduced in the correct geometry are combined with a source model to predict an actual number of counts per bin. The latter is then used to simulate a reduced dataset using the Poisson probability distribution.This can be done to check the feasibility of a measurement (performance / sensitivity study), to test whether fitted parameters really provide a good fit to the data etc.Here we will see how to perform a 3D simulation of a CTA observation, assuming both the spectral and spatial morphology of an observed source.**Objective: simulate a 3D observation of a source with CTA using the CTA 1DC response and fit it with the assumed source model.** Proposed approach:Here we can't use the regular observation objects that are connected to a `DataStore`. Instead we will create a fake `~gammapy.data.Observation` that contains some pointing information and the CTA 1DC IRFs (that are loaded with `~gammapy.irf.load_cta_irfs`).Then we will create a `~gammapy.datasets.MapDataset` geometry and fill it with the `~gammapy.makers.MapDatasetMaker`.Then we will be able to define a model consisting of a `~gammapy.modeling.models.PowerLawSpectralModel` and a `~gammapy.modeling.models.GaussianSpatialModel`. We will assign it to the dataset and fake the count data. Imports and versions
###Code
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
PowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.makers import MapDatasetMaker, SafeMaskMaker
from gammapy.modeling import Fit
from gammapy.data import Observation
from gammapy.datasets import MapDataset
!gammapy info --no-envvar --no-dependencies --no-system
###Output
_____no_output_____
###Markdown
Simulation We will simulate using the CTA-1DC IRFs shipped with gammapy. Note that for dedicated CTA simulations, you can simply use [`Observation.from_caldb()`]() without having to externally load the IRFs.
###Code
# Loading IRFs
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
# Define the observation parameters (typically the observation duration and the pointing position):
livetime = 2.0 * u.hr
pointing = SkyCoord(0, 0, unit="deg", frame="galactic")
# Define map geometry for binned simulation
energy_reco = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0),
binsz=0.02,
width=(6, 6),
frame="galactic",
axes=[energy_reco],
)
# It is usually useful to have a separate binning for the true energy axis
energy_true = MapAxis.from_edges(
    np.logspace(-1.5, 1.5, 30), unit="TeV", name="energy_true", interp="log"
)
empty = MapDataset.create(
    geom, name="dataset-simu", energy_axis_true=energy_true
)
# Define the sky model used to simulate the data.
# Here we use a Gaussian spatial model and a Power Law spectral model.
spatial_model = GaussianSpatialModel(
lon_0="0.2 deg", lat_0="0.1 deg", sigma="0.3 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=3, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
model_simu = SkyModel(
spatial_model=spatial_model,
spectral_model=spectral_model,
name="model-simu",
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-simu")
models = Models([model_simu, bkg_model])
print(models)
###Output
_____no_output_____
###Markdown
Now comes the main part of the dataset simulation. We create an in-memory observation and an empty dataset. We then predict the number of counts for the given model, and Poisson fluctuate it using `fake()` to make a simulated counts map. Keep in mind that it is important to specify the `selection` of the maps that you want to produce.
###Code
# Create an in-memory observation
obs = Observation.create(pointing=pointing, livetime=livetime, irfs=irfs)
print(obs)
# Make the MapDataset
maker = MapDatasetMaker(selection=["exposure", "background", "psf", "edisp"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=4.0 * u.deg)
dataset = maker.run(empty, obs)
dataset = maker_safe_mask.run(dataset, obs)
print(dataset)
# Add the model to the dataset and Poisson fluctuate
dataset.models = models
dataset.fake()
# Do a print on the dataset - there is now a counts map
print(dataset)
###Output
_____no_output_____
###Markdown
Now use this dataset as you would in all standard analysis. You can plot the maps, or proceed with your custom analysis. In the next section, we show the standard 3D fitting as in [analysis_3d](analysis_3d.ipynb).
###Code
# To plot, eg, counts:
dataset.counts.smooth(0.05 * u.deg).plot_interactive(
add_cbar=True, stretch="linear"
)
###Output
_____no_output_____
###Markdown
FitIn this section, we do a usual 3D fit with the same model used to simulate the data and check the stability of the simulations. Often, it is useful to simulate many such datasets and look at the distribution of the reconstructed parameters.
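As a rough sketch of that many-realisations idea (not part of the original tutorial; kept to a handful of realisations because each fit takes a while, and reusing only objects defined above):

```python
# Repeat the fake-and-fit cycle and collect one fitted parameter
# (the amplitude) to inspect its spread across realisations.
amplitudes = []
for _ in range(5):
    dataset.models = models          # attach the true model
    dataset.fake()                   # draw a fresh Poisson realisation of the counts
    fit_models = models.copy()       # the fit starts from a fresh copy of the model
    fit_models["dataset-simu-bkg"].spectral_model.norm.frozen = True
    fit_models["dataset-simu-bkg"].spectral_model.tilt.frozen = True
    dataset.models = fit_models
    Fit().run(datasets=[dataset])
    amplitudes.append(fit_models["model-simu"].spectral_model.amplitude.value)
print(amplitudes)
```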
###Code
models_fit = models.copy()
# We do not want to fit the background in this case, so we will freeze the parameters
models_fit["dataset-simu-bkg"].spectral_model.norm.frozen = True
models_fit["dataset-simu-bkg"].spectral_model.tilt.frozen = True
dataset.models = models_fit
print(dataset.models)
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run(datasets=[dataset])
dataset.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5)
###Output
_____no_output_____
###Markdown
Compare the injected and fitted models:
###Code
print(
"True model: \n",
model_simu,
"\n\n Fitted model: \n",
models_fit["model-simu"],
)
###Output
_____no_output_____
###Markdown
Get the errors on the fitted parameters from the parameter table
###Code
result.parameters.to_table()
###Output
_____no_output_____
###Markdown
3D map simulation Prerequisites- Knowledge of 3D extraction and datasets used in gammapy, see for instance the [first analysis tutorial](../../starting/analysis_1.ipynb) ContextTo simulate a specific observation, it is not always necessary to simulate the full photon list. For many use cases, directly simulating a reduced binned dataset is enough: the IRFs reduced in the correct geometry are combined with a source model to predict an actual number of counts per bin. The latter is then used to simulate a reduced dataset using a Poisson probability distribution.This can be done to check the feasibility of a measurement (performance / sensitivity study), to test whether fitted parameters really provide a good fit to the data, etc.Here we will see how to perform a 3D simulation of a CTA observation, assuming both the spectral and spatial morphology of an observed source.**Objective: simulate a 3D observation of a source with CTA using the CTA 1DC response and fit it with the assumed source model.** Proposed approach:Here we can't use the regular observation objects that are connected to a `DataStore`. Instead we will create a fake `~gammapy.data.Observation` that contains some pointing information and the CTA 1DC IRFs (which are loaded with `~gammapy.irf.load_cta_irfs`).Then we will define a `~gammapy.datasets.MapDataset` geometry and create the dataset with the `~gammapy.makers.MapDatasetMaker`.Then we will be able to define a model consisting of a `~gammapy.modeling.models.PowerLawSpectralModel` and a `~gammapy.modeling.models.GaussianSpatialModel`. We will assign it to the dataset and fake the count data. Imports and versions
###Code
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
PowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.makers import MapDatasetMaker, SafeMaskMaker
from gammapy.modeling import Fit
from gammapy.data import Observation
from gammapy.datasets import MapDataset
!gammapy info --no-envvar --no-dependencies --no-system
###Output
_____no_output_____
###Markdown
Simulation We will simulate using the CTA-1DC IRFs shipped with gammapy. Note that for dedicated CTA simulations, you can simply use `Observation.from_caldb()` without having to load the IRFs externally.
###Code
# Loading IRFs
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
# Define the observation parameters (typically the observation duration and the pointing position):
livetime = 2.0 * u.hr
pointing = SkyCoord(0, 0, unit="deg", frame="galactic")
# Define map geometry for binned simulation
energy_reco = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0),
binsz=0.02,
width=(6, 6),
frame="galactic",
axes=[energy_reco],
)
# It is usually useful to have a separate binning for the true energy axis
energy_true = MapAxis.from_edges(
np.logspace(-1.5, 1.5, 30), unit="TeV", name="energy_true", interp="log"
)
empty = MapDataset.create(
geom, name="dataset-simu", energy_axis_true=energy_true
)
# Define the sky model used to simulate the data.
# Here we use a Gaussian spatial model and a Power Law spectral model.
spatial_model = GaussianSpatialModel(
lon_0="0.2 deg", lat_0="0.1 deg", sigma="0.3 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=3, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
model_simu = SkyModel(
spatial_model=spatial_model,
spectral_model=spectral_model,
name="model-simu",
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-simu")
models = Models([model_simu, bkg_model])
print(models)
###Output
_____no_output_____
###Markdown
Now comes the main part of the dataset simulation. We create an in-memory observation and an empty dataset. We then predict the number of counts for the given model, and Poisson fluctuate it using `fake()` to make a simulated counts map. Keep in mind that it is important to specify the `selection` of the maps that you want to produce.
###Code
# Create an in-memory observation
obs = Observation.create(pointing=pointing, livetime=livetime, irfs=irfs)
print(obs)
# Make the MapDataset
maker = MapDatasetMaker(selection=["exposure", "background", "psf", "edisp"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=4.0 * u.deg)
dataset = maker.run(empty, obs)
dataset = maker_safe_mask.run(dataset, obs)
print(dataset)
# Add the model to the dataset and Poisson fluctuate
dataset.models = models
dataset.fake()
# Do a print on the dataset - there is now a counts map
print(dataset)
###Output
_____no_output_____
###Markdown
Now use this dataset as you would in all standard analysis. You can plot the maps, or proceed with your custom analysis. In the next section, we show the standard 3D fitting as in [analysis_3d](analysis_3d.ipynb).
###Code
# To plot, eg, counts:
dataset.counts.smooth(0.05 * u.deg).plot_interactive(
add_cbar=True, stretch="linear"
)
###Output
_____no_output_____
###Markdown
FitIn this section, we do a usual 3D fit with the same model used to simulate the data and check the stability of the simulations. Often, it is useful to simulate many such datasets and look at the distribution of the reconstructed parameters.
###Code
models_fit = models.copy()
# We do not want to fit the background in this case, so we will freeze the parameters
models_fit["dataset-simu-bkg"].spectral_model.norm.frozen = True
models_fit["dataset-simu-bkg"].spectral_model.tilt.frozen = True
dataset.models = models_fit
print(dataset.models)
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run(datasets=[dataset])
dataset.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5)
###Output
_____no_output_____
###Markdown
Compare the injected and fitted models:
###Code
print(
"True model: \n",
model_simu,
"\n\n Fitted model: \n",
models_fit["model-simu"],
)
###Output
_____no_output_____
###Markdown
Get the errors on the fitted parameters from the parameter table
###Code
result.parameters.to_table()
###Output
_____no_output_____
###Markdown
3D map simulation Prerequisites- Knowledge of 3D extraction and datasets used in gammapy, see for instance the [first analysis tutorial](../../starting/analysis_1.ipynb) ContextTo simulate a specific observation, it is not always necessary to simulate the full photon list. For many use cases, directly simulating a reduced binned dataset is enough: the IRFs reduced in the correct geometry are combined with a source model to predict an actual number of counts per bin. The latter is then used to simulate a reduced dataset using a Poisson probability distribution.This can be done to check the feasibility of a measurement (performance / sensitivity study), to test whether fitted parameters really provide a good fit to the data, etc.Here we will see how to perform a 3D simulation of a CTA observation, assuming both the spectral and spatial morphology of an observed source.**Objective: simulate a 3D observation of a source with CTA using the CTA 1DC response and fit it with the assumed source model.** Proposed approach:Here we can't use the regular observation objects that are connected to a `DataStore`. Instead we will create a fake `~gammapy.data.Observation` that contains some pointing information and the CTA 1DC IRFs (which are loaded with `~gammapy.irf.load_cta_irfs`).Then we will define a `~gammapy.datasets.MapDataset` geometry and create the dataset with the `~gammapy.makers.MapDatasetMaker`.Then we will be able to define a model consisting of a `~gammapy.modeling.models.PowerLawSpectralModel` and a `~gammapy.modeling.models.GaussianSpatialModel`. We will assign it to the dataset and fake the count data. Imports and versions
###Code
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
PowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.makers import MapDatasetMaker, SafeMaskMaker
from gammapy.modeling import Fit
from gammapy.data import Observation
from gammapy.datasets import MapDataset
!gammapy info --no-envvar --no-dependencies --no-system
###Output
_____no_output_____
###Markdown
Simulation We will simulate using the CTA-1DC IRFs shipped with gammapy. Note that for dedicated CTA simulations, you can simply use `Observation.from_caldb()` without having to load the IRFs externally.
###Code
# Loading IRFs
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
# Define the observation parameters (typically the observation duration and the pointing position):
livetime = 2.0 * u.hr
pointing = SkyCoord(0, 0, unit="deg", frame="galactic")
# Define map geometry for binned simulation
energy_reco = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0),
binsz=0.02,
width=(6, 6),
frame="galactic",
axes=[energy_reco],
)
# It is usually useful to have a separate binning for the true energy axis
energy_true = MapAxis.from_edges(
    np.logspace(-1.5, 1.5, 30), unit="TeV", name="energy_true", interp="log"
)
empty = MapDataset.create(
    geom, name="dataset-simu", energy_axis_true=energy_true
)
# Define the sky model used to simulate the data.
# Here we use a Gaussian spatial model and a Power Law spectral model.
spatial_model = GaussianSpatialModel(
lon_0="0.2 deg", lat_0="0.1 deg", sigma="0.3 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=3, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
model_simu = SkyModel(
spatial_model=spatial_model,
spectral_model=spectral_model,
name="model-simu",
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-simu")
models = Models([model_simu, bkg_model])
print(models)
###Output
_____no_output_____
###Markdown
Now comes the main part of the dataset simulation. We create an in-memory observation and an empty dataset. We then predict the number of counts for the given model, and Poisson fluctuate it using `fake()` to make a simulated counts map. Keep in mind that it is important to specify the `selection` of the maps that you want to produce.
###Code
# Create an in-memory observation
obs = Observation.create(pointing=pointing, livetime=livetime, irfs=irfs)
print(obs)
# Make the MapDataset
maker = MapDatasetMaker(selection=["exposure", "background", "psf", "edisp"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=4.0 * u.deg)
dataset = maker.run(empty, obs)
dataset = maker_safe_mask.run(dataset, obs)
print(dataset)
# Add the model to the dataset and Poisson fluctuate
dataset.models = models
dataset.fake()
# Do a print on the dataset - there is now a counts map
print(dataset)
###Output
_____no_output_____
###Markdown
Now use this dataset as you would in all standard analysis. You can plot the maps, or proceed with your custom analysis. In the next section, we show the standard 3D fitting as in [analysis_3d](analysis_3d.ipynb).
###Code
# To plot, eg, counts:
dataset.counts.smooth(0.05 * u.deg).plot_interactive(
add_cbar=True, stretch="linear"
)
###Output
_____no_output_____
###Markdown
FitIn this section, we do a usual 3D fit with the same model used to simulate the data and check the stability of the simulations. Often, it is useful to simulate many such datasets and look at the distribution of the reconstructed parameters.
###Code
models_fit = models.copy()
# We do not want to fit the background in this case, so we will freeze the parameters
models_fit["dataset-simu-bkg"].spectral_model.norm.frozen = True
models_fit["dataset-simu-bkg"].spectral_model.tilt.frozen = True
dataset.models = models_fit
print(dataset.models)
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run(datasets=[dataset])
dataset.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5)
###Output
_____no_output_____
###Markdown
Compare the injected and fitted models:
###Code
print(
"True model: \n",
model_simu,
"\n\n Fitted model: \n",
models_fit["model-simu"],
)
###Output
_____no_output_____
###Markdown
Get the errors on the fitted parameters from the parameter table
###Code
result["optimize_result"].parameters.to_table()
###Output
_____no_output_____
###Markdown
3D map simulation Prerequisites- Knowledge of 3D extraction and datasets used in gammapy, see for instance the [first analysis tutorial](../../starting/analysis_1.ipynb) ContextTo simulate a specific observation, it is not always necessary to simulate the full photon list. For many use cases, directly simulating a reduced binned dataset is enough: the IRFs reduced in the correct geometry are combined with a source model to predict an actual number of counts per bin. The latter is then used to simulate a reduced dataset using a Poisson probability distribution.This can be done to check the feasibility of a measurement (performance / sensitivity study), to test whether fitted parameters really provide a good fit to the data, etc.Here we will see how to perform a 3D simulation of a CTA observation, assuming both the spectral and spatial morphology of an observed source.**Objective: simulate a 3D observation of a source with CTA using the CTA 1DC response and fit it with the assumed source model.** Proposed approach:Here we can't use the regular observation objects that are connected to a `DataStore`. Instead we will create a fake `~gammapy.data.Observation` that contains some pointing information and the CTA 1DC IRFs (which are loaded with `~gammapy.irf.load_cta_irfs`).Then we will define a `~gammapy.datasets.MapDataset` geometry and create the dataset with the `~gammapy.makers.MapDatasetMaker`.Then we will be able to define a model consisting of a `~gammapy.modeling.models.PowerLawSpectralModel` and a `~gammapy.modeling.models.GaussianSpatialModel`. We will assign it to the dataset and fake the count data. Imports and versions
###Code
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
PowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.makers import MapDatasetMaker, SafeMaskMaker
from gammapy.modeling import Fit
from gammapy.data import Observation
from gammapy.datasets import MapDataset
!gammapy info --no-envvar --no-dependencies --no-system
###Output
_____no_output_____
###Markdown
Simulation We will simulate using the CTA-1DC IRFs shipped with gammapy. Note that for dedicated CTA simulations, you can simply use `Observation.from_caldb()` without having to load the IRFs externally.
###Code
# Loading IRFs
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
# Define the observation parameters (typically the observation duration and the pointing position):
livetime = 2.0 * u.hr
pointing = SkyCoord(0, 0, unit="deg", frame="galactic")
# Define map geometry for binned simulation
energy_reco = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0),
binsz=0.02,
width=(6, 6),
frame="galactic",
axes=[energy_reco],
)
# It is usually useful to have a separate binning for the true energy axis
energy_true = MapAxis.from_edges(
np.logspace(-1.5, 1.5, 30), unit="TeV", name="energy_true", interp="log"
)
empty = MapDataset.create(
geom, name="dataset-simu", energy_axis_true=energy_true
)
# Define the sky model used to simulate the data.
# Here we use a Gaussian spatial model and a Power Law spectral model.
spatial_model = GaussianSpatialModel(
lon_0="0.2 deg", lat_0="0.1 deg", sigma="0.3 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=3, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
model_simu = SkyModel(
spatial_model=spatial_model,
spectral_model=spectral_model,
name="model-simu",
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-simu")
models = Models([model_simu, bkg_model])
print(models)
###Output
_____no_output_____
###Markdown
Now comes the main part of the dataset simulation. We create an in-memory observation and an empty dataset. We then predict the number of counts for the given model, and Poisson fluctuate it using `fake()` to make a simulated counts map. Keep in mind that it is important to specify the `selection` of the maps that you want to produce.
###Code
# Create an in-memory observation
obs = Observation.create(pointing=pointing, livetime=livetime, irfs=irfs)
print(obs)
# Make the MapDataset
maker = MapDatasetMaker(selection=["exposure", "background", "psf", "edisp"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=4.0 * u.deg)
dataset = maker.run(empty, obs)
dataset = maker_safe_mask.run(dataset, obs)
print(dataset)
# Add the model to the dataset and Poisson fluctuate
dataset.models = models
dataset.fake()
# Do a print on the dataset - there is now a counts map
print(dataset)
###Output
_____no_output_____
###Markdown
Now use this dataset as you would in all standard analysis. You can plot the maps, or proceed with your custom analysis. In the next section, we show the standard 3D fitting as in [analysis_3d](analysis_3d.ipynb).
###Code
# To plot, eg, counts:
dataset.counts.smooth(0.05 * u.deg).plot_interactive(
add_cbar=True, stretch="linear"
)
###Output
_____no_output_____
###Markdown
FitIn this section, we do a usual 3D fit with the same model used to simulate the data and check the stability of the simulations. Often, it is useful to simulate many such datasets and look at the distribution of the reconstructed parameters.
###Code
models_fit = models.copy()
# We do not want to fit the background in this case, so we will freeze the parameters
models_fit["dataset-simu-bkg"].spectral_model.norm.frozen = True
models_fit["dataset-simu-bkg"].spectral_model.tilt.frozen = True
dataset.models = models_fit
print(dataset.models)
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run(datasets=[dataset])
dataset.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5)
###Output
_____no_output_____
###Markdown
Compare the injected and fitted models:
###Code
print(
"True model: \n",
model_simu,
"\n\n Fitted model: \n",
models_fit["model-simu"],
)
###Output
_____no_output_____
###Markdown
Get the errors on the fitted parameters from the parameter table
###Code
result.parameters.to_table()
###Output
_____no_output_____
###Markdown
3D map simulation Prerequisites- Knowledge of 3D extraction and datasets used in gammapy, see for instance the [first analysis tutorial](../../starting/analysis_1.ipynb) ContextTo simulate a specific observation, it is not always necessary to simulate the full photon list. For many use cases, directly simulating a reduced binned dataset is enough: the IRFs reduced in the correct geometry are combined with a source model to predict an actual number of counts per bin. The latter is then used to simulate a reduced dataset using a Poisson probability distribution.This can be done to check the feasibility of a measurement (performance / sensitivity study), to test whether fitted parameters really provide a good fit to the data, etc.Here we will see how to perform a 3D simulation of a CTA observation, assuming both the spectral and spatial morphology of an observed source.**Objective: simulate a 3D observation of a source with CTA using the CTA 1DC response and fit it with the assumed source model.** Proposed approach:Here we can't use the regular observation objects that are connected to a `DataStore`. Instead we will create a fake `~gammapy.data.Observation` that contains some pointing information and the CTA 1DC IRFs (which are loaded with `~gammapy.irf.load_cta_irfs`).Then we will define a `~gammapy.datasets.MapDataset` geometry and create the dataset with the `~gammapy.makers.MapDatasetMaker`.Then we will be able to define a model consisting of a `~gammapy.modeling.models.PowerLawSpectralModel` and a `~gammapy.modeling.models.GaussianSpatialModel`. We will assign it to the dataset and fake the count data. Imports and versions
###Code
%matplotlib inline
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from gammapy.irf import load_cta_irfs
from gammapy.maps import WcsGeom, MapAxis
from gammapy.modeling.models import (
PowerLawSpectralModel,
GaussianSpatialModel,
SkyModel,
Models,
FoVBackgroundModel,
)
from gammapy.makers import MapDatasetMaker, SafeMaskMaker
from gammapy.modeling import Fit
from gammapy.data import Observation
from gammapy.datasets import MapDataset
!gammapy info --no-envvar --no-dependencies --no-system
###Output
_____no_output_____
###Markdown
Simulation We will simulate using the CTA-1DC IRFs shipped with gammapy. Note that for dedicated CTA simulations, you can simply use `Observation.from_caldb()` without having to load the IRFs externally.
###Code
# Loading IRFs
irfs = load_cta_irfs(
"$GAMMAPY_DATA/cta-1dc/caldb/data/cta/1dc/bcf/South_z20_50h/irf_file.fits"
)
# Define the observation parameters (typically the observation duration and the pointing position):
livetime = 2.0 * u.hr
pointing = SkyCoord(0, 0, unit="deg", frame="galactic")
# Define map geometry for binned simulation
energy_reco = MapAxis.from_edges(
np.logspace(-1.0, 1.0, 10), unit="TeV", name="energy", interp="log"
)
geom = WcsGeom.create(
skydir=(0, 0),
binsz=0.02,
width=(6, 6),
frame="galactic",
axes=[energy_reco],
)
# It is usually useful to have a separate binning for the true energy axis
energy_true = MapAxis.from_edges(
    np.logspace(-1.5, 1.5, 30), unit="TeV", name="energy_true", interp="log"
)
empty = MapDataset.create(
    geom, name="dataset-simu", energy_axis_true=energy_true
)
# Define the sky model used to simulate the data.
# Here we use a Gaussian spatial model and a Power Law spectral model.
spatial_model = GaussianSpatialModel(
lon_0="0.2 deg", lat_0="0.1 deg", sigma="0.3 deg", frame="galactic"
)
spectral_model = PowerLawSpectralModel(
index=3, amplitude="1e-11 cm-2 s-1 TeV-1", reference="1 TeV"
)
model_simu = SkyModel(
spatial_model=spatial_model,
spectral_model=spectral_model,
name="model-simu",
)
bkg_model = FoVBackgroundModel(dataset_name="dataset-simu")
models = Models([model_simu, bkg_model])
print(models)
###Output
_____no_output_____
###Markdown
Now comes the main part of the dataset simulation. We create an in-memory observation and an empty dataset. We then predict the number of counts for the given model, and Poisson fluctuate it using `fake()` to make a simulated counts map. Keep in mind that it is important to specify the `selection` of the maps that you want to produce.
###Code
# Create an in-memory observation
obs = Observation.create(pointing=pointing, livetime=livetime, irfs=irfs)
print(obs)
# Make the MapDataset
maker = MapDatasetMaker(selection=["exposure", "background", "psf", "edisp"])
maker_safe_mask = SafeMaskMaker(methods=["offset-max"], offset_max=4.0 * u.deg)
dataset = maker.run(empty, obs)
dataset = maker_safe_mask.run(dataset, obs)
print(dataset)
# Add the model to the dataset and Poisson fluctuate
dataset.models = models
dataset.fake()
# Do a print on the dataset - there is now a counts map
print(dataset)
###Output
_____no_output_____
###Markdown
Now use this dataset as you would in all standard analysis. You can plot the maps, or proceed with your custom analysis. In the next section, we show the standard 3D fitting as in [analysis_3d](analysis_3d.ipynb).
###Code
# To plot, eg, counts:
dataset.counts.smooth(0.05 * u.deg).plot_interactive(
add_cbar=True, stretch="linear"
)
###Output
_____no_output_____
###Markdown
FitIn this section, we do a usual 3D fit with the same model used to simulate the data and check the stability of the simulations. Often, it is useful to simulate many such datasets and look at the distribution of the reconstructed parameters.
###Code
models_fit = models.copy()
# We do not want to fit the background in this case, so we will freeze the parameters
models_fit["dataset-simu-bkg"].spectral_model.norm.frozen = True
models_fit["dataset-simu-bkg"].spectral_model.tilt.frozen = True
dataset.models = models_fit
print(dataset.models)
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run(datasets=[dataset])
dataset.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5)
###Output
_____no_output_____
###Markdown
Compare the injected and fitted models:
###Code
print(
"True model: \n",
model_simu,
"\n\n Fitted model: \n",
models_fit["model-simu"],
)
###Output
_____no_output_____
###Markdown
Get the errors on the fitted parameters from the parameter table
###Code
result.parameters.to_table()
###Output
_____no_output_____ |
Tune hyperparameters with the Keras Tuner.ipynb | ###Markdown
Introduction to the Keras Tuner OverviewKeras Tuner is a library that helps you pick the optimal set of hyperparameters for your TensorFlow program. The process of selecting the right set of hyperparameters for a machine learning (ML) application is called *hyperparameter tuning* or *hypertuning*.Hyperparameters are the variables that govern the training process and the topology of an ML model. These variables remain constant over the training process and directly impact the performance of your ML program. Hyperparameters are of two types:1. **Model hyperparameters**, which influence model selection, such as the number and width of hidden layers2. **Algorithm hyperparameters**, which influence the speed and quality of the learning algorithm, such as the learning rate for Stochastic Gradient Descent (SGD) and the number of nearest neighbors for a k Nearest Neighbors (KNN) classifier Setup
###Code
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
Install and import the Keras Tuner.
###Code
!pip install -q -U keras-tuner
import keras_tuner as kt
###Output
_____no_output_____
###Markdown
Download and prepare the datasetUse the Keras Tuner to find the best hyperparameters for a machine learning model that classifies images of clothing from the [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist). Load the data.
###Code
(img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data()
# Normalize pixel values between 0 and 1
img_train = img_train.astype('float32') / 255.0
img_test = img_test.astype('float32') / 255.0
###Output
_____no_output_____
###Markdown
Define the modelWhen building a model for hypertuning, you define the hyperparameter search space in addition to the model architecture. The model you set up for hypertuning is called a *hypermodel*.You can define a hypermodel through two approaches:* By using a model builder function (as done below)* By subclassing the `HyperModel` class of the Keras Tuner API (sketched next)
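As a rough illustration of the second approach (a sketch only; `MyHyperModel` is a hypothetical name, and the rest of this tutorial uses the model builder function instead):

```python
# Minimal sketch: the same search space as the model builder function below,
# expressed by subclassing the Keras Tuner HyperModel class.
class MyHyperModel(kt.HyperModel):
    def build(self, hp):
        model = keras.Sequential()
        model.add(keras.layers.Flatten(input_shape=(28, 28)))
        hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
        model.add(keras.layers.Dense(units=hp_units, activation='relu'))
        model.add(keras.layers.Dense(10))
        model.compile(optimizer='adam',
                      loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])
        return model

# A tuner accepts an instance just like a builder function, e.g.:
# tuner = kt.Hyperband(MyHyperModel(), objective='val_accuracy', max_epochs=10)
```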
###Code
def model_builder(hp):
model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(28, 28)))
# Tune the number of units in the first Dense layer
# Choose an optimal value between 32-512
hp_units = hp.Int('units', min_value=32, max_value=512, step=32)
model.add(keras.layers.Dense(units=hp_units, activation='relu'))
model.add(keras.layers.Dense(10))
# Tune the learning rate for the optimizer
# Choose an optimal value from 0.01, 0.001, or 0.0001
hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Instantiate the tuner and perform hypertuningInstantiate the tuner to perform the hypertuning. The Keras Tuner has four tuners available - `RandomSearch`, `Hyperband`, `BayesianOptimization`, and `Sklearn`. This tutorial uses the [Hyperband](https://arxiv.org/pdf/1603.06560.pdf) tuner.To instantiate the Hyperband tuner, specify the hypermodel, the `objective` to optimize, and the maximum number of epochs to train (`max_epochs`).
###Code
tuner = kt.Hyperband(model_builder,
objective='val_accuracy',
max_epochs=10,
factor=3,
directory='my_dir',
project_name='intro_to_kt')
###Output
INFO:tensorflow:Reloading Oracle from existing project my_dir\intro_to_kt\oracle.json
INFO:tensorflow:Reloading Tuner from my_dir\intro_to_kt\tuner0.json
###Markdown
The Hyperband tuning algorithm uses adaptive resource allocation and early stopping to quickly converge on a high-performing model. This is done using a sports-championship-style bracket. The algorithm trains a large number of models for a few epochs and carries forward only the top-performing half of models to the next round. Hyperband determines the number of models to train in a bracket by computing 1 + log_`factor`(`max_epochs`) and rounding it up to the nearest integer.
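As a quick sanity check of that formula (a sketch, plugging in the `max_epochs=10` and `factor=3` values used above):

```python
import math

max_epochs, factor = 10, 3
num_brackets = math.ceil(1 + math.log(max_epochs) / math.log(factor))
print(num_brackets)  # 4 brackets for these settings
```

Create a callback to stop training early after reaching a certain value for the validation loss.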
###Code
stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
###Output
_____no_output_____
###Markdown
Run the hyperparameter search. The arguments for the search method are the same as those used for `tf.keras.Model.fit`, in addition to the callback above.
###Code
tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early])
# Get the optimal hyperparameters
best_hps=tuner.get_best_hyperparameters(num_trials=1)[0]
print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('units')} and the optimal learning rate for the optimizer
is {best_hps.get('learning_rate')}.
""")
###Output
Trial 10 Complete [00h 00m 55s]
val_accuracy: 0.8882499933242798
Best val_accuracy So Far: 0.8882499933242798
Total elapsed time: 00h 06m 25s
INFO:tensorflow:Oracle triggered exit
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is 256 and the optimal learning rate for the optimizer
is 0.001.
###Markdown
Train the modelFind the optimal number of epochs to train the model with the hyperparameters obtained from the search.
###Code
# Build the model with the optimal hyperparameters and train it on the data for 50 epochs
model = tuner.hypermodel.build(best_hps)
history = model.fit(img_train, label_train, epochs=50, validation_split=0.2)
val_acc_per_epoch = history.history['val_accuracy']
best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1
print('Best epoch: %d' % (best_epoch,))
###Output
Epoch 1/50
1500/1500 [==============================] - 7s 3ms/step - loss: 0.5021 - accuracy: 0.8242 - val_loss: 0.4242 - val_accuracy: 0.8500
Epoch 2/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.3735 - accuracy: 0.8638 - val_loss: 0.3483 - val_accuracy: 0.8748
Epoch 3/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.3341 - accuracy: 0.8774 - val_loss: 0.3343 - val_accuracy: 0.8772
Epoch 4/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.3064 - accuracy: 0.8868 - val_loss: 0.3290 - val_accuracy: 0.8811
Epoch 5/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2900 - accuracy: 0.8922 - val_loss: 0.3541 - val_accuracy: 0.8742
Epoch 6/50
1500/1500 [==============================] - 6s 4ms/step - loss: 0.2745 - accuracy: 0.8995 - val_loss: 0.3128 - val_accuracy: 0.8871
Epoch 7/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2637 - accuracy: 0.9023 - val_loss: 0.3385 - val_accuracy: 0.8812
Epoch 8/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.2496 - accuracy: 0.9064 - val_loss: 0.3230 - val_accuracy: 0.8832
Epoch 9/50
1500/1500 [==============================] - 6s 4ms/step - loss: 0.2393 - accuracy: 0.9094 - val_loss: 0.3130 - val_accuracy: 0.8936
Epoch 10/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2281 - accuracy: 0.9137 - val_loss: 0.3228 - val_accuracy: 0.8890
Epoch 11/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2215 - accuracy: 0.9167 - val_loss: 0.3249 - val_accuracy: 0.8872
Epoch 12/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2113 - accuracy: 0.9220 - val_loss: 0.3388 - val_accuracy: 0.8839
Epoch 13/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2054 - accuracy: 0.9224 - val_loss: 0.3169 - val_accuracy: 0.8928
Epoch 14/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1965 - accuracy: 0.9264 - val_loss: 0.3270 - val_accuracy: 0.8915
Epoch 15/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1902 - accuracy: 0.9287 - val_loss: 0.3271 - val_accuracy: 0.8903
Epoch 16/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1832 - accuracy: 0.9312 - val_loss: 0.3398 - val_accuracy: 0.8921
Epoch 17/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1786 - accuracy: 0.9328 - val_loss: 0.3577 - val_accuracy: 0.8882
Epoch 18/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1707 - accuracy: 0.9356 - val_loss: 0.3288 - val_accuracy: 0.8978
Epoch 19/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1657 - accuracy: 0.9371 - val_loss: 0.3573 - val_accuracy: 0.8888
Epoch 20/50
1500/1500 [==============================] - 6s 4ms/step - loss: 0.1633 - accuracy: 0.9390 - val_loss: 0.3490 - val_accuracy: 0.8892
Epoch 21/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1550 - accuracy: 0.9420 - val_loss: 0.3605 - val_accuracy: 0.8934
Epoch 22/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1534 - accuracy: 0.9423 - val_loss: 0.3674 - val_accuracy: 0.8931
Epoch 23/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1479 - accuracy: 0.9441 - val_loss: 0.3653 - val_accuracy: 0.8918
Epoch 24/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1416 - accuracy: 0.9464 - val_loss: 0.3652 - val_accuracy: 0.8957
Epoch 25/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1381 - accuracy: 0.9474 - val_loss: 0.3652 - val_accuracy: 0.8959
Epoch 26/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1357 - accuracy: 0.9489 - val_loss: 0.3877 - val_accuracy: 0.8904
Epoch 27/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1337 - accuracy: 0.9495 - val_loss: 0.3788 - val_accuracy: 0.8953
Epoch 28/50
1500/1500 [==============================] - 6s 4ms/step - loss: 0.1274 - accuracy: 0.9528 - val_loss: 0.3985 - val_accuracy: 0.8926
Epoch 29/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1249 - accuracy: 0.9538 - val_loss: 0.3941 - val_accuracy: 0.8950
Epoch 30/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1218 - accuracy: 0.9540 - val_loss: 0.4078 - val_accuracy: 0.8966
Epoch 31/50
1500/1500 [==============================] - 3s 2ms/step - loss: 0.1181 - accuracy: 0.9555 - val_loss: 0.4070 - val_accuracy: 0.8947
Epoch 32/50
1500/1500 [==============================] - 6s 4ms/step - loss: 0.1160 - accuracy: 0.9566 - val_loss: 0.4260 - val_accuracy: 0.8964
Epoch 33/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1111 - accuracy: 0.9588 - val_loss: 0.4102 - val_accuracy: 0.8934
Epoch 34/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1109 - accuracy: 0.9585 - val_loss: 0.4353 - val_accuracy: 0.8921
Epoch 35/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1052 - accuracy: 0.9611 - val_loss: 0.4406 - val_accuracy: 0.8929
Epoch 36/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1037 - accuracy: 0.9604 - val_loss: 0.4316 - val_accuracy: 0.8946
Epoch 37/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1033 - accuracy: 0.9610 - val_loss: 0.4397 - val_accuracy: 0.8957
Epoch 38/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1000 - accuracy: 0.9629 - val_loss: 0.4516 - val_accuracy: 0.8901
Epoch 39/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0960 - accuracy: 0.9634 - val_loss: 0.4641 - val_accuracy: 0.8925
Epoch 40/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0944 - accuracy: 0.9648 - val_loss: 0.4413 - val_accuracy: 0.8960
Epoch 41/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0946 - accuracy: 0.9659 - val_loss: 0.4957 - val_accuracy: 0.8844
Epoch 42/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0915 - accuracy: 0.9654 - val_loss: 0.4689 - val_accuracy: 0.8915
Epoch 43/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0856 - accuracy: 0.9681 - val_loss: 0.4663 - val_accuracy: 0.8963
Epoch 44/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0876 - accuracy: 0.9677 - val_loss: 0.4781 - val_accuracy: 0.8992
Epoch 45/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0841 - accuracy: 0.9685 - val_loss: 0.4943 - val_accuracy: 0.8951
Epoch 46/50
1500/1500 [==============================] - 6s 3ms/step - loss: 0.0848 - accuracy: 0.9681 - val_loss: 0.4819 - val_accuracy: 0.8943
Epoch 47/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0848 - accuracy: 0.9688 - val_loss: 0.5313 - val_accuracy: 0.8873
Epoch 48/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0817 - accuracy: 0.9692 - val_loss: 0.4855 - val_accuracy: 0.8960
Epoch 49/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0768 - accuracy: 0.9713 - val_loss: 0.5495 - val_accuracy: 0.8910
Epoch 50/50
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0795 - accuracy: 0.9699 - val_loss: 0.5283 - val_accuracy: 0.8906
Best epoch: 44
###Markdown
Re-instantiate the hypermodel and train it with the optimal number of epochs from above.
###Code
hypermodel = tuner.hypermodel.build(best_hps)
# Retrain the model
hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)
###Output
Epoch 1/44
1500/1500 [==============================] - 7s 4ms/step - loss: 0.5090 - accuracy: 0.8203 - val_loss: 0.4184 - val_accuracy: 0.8499
Epoch 2/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.3775 - accuracy: 0.8652 - val_loss: 0.3936 - val_accuracy: 0.8552
Epoch 3/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.3368 - accuracy: 0.8775 - val_loss: 0.3588 - val_accuracy: 0.8694
Epoch 4/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.3118 - accuracy: 0.8845 - val_loss: 0.3420 - val_accuracy: 0.8781
Epoch 5/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2939 - accuracy: 0.8910 - val_loss: 0.3495 - val_accuracy: 0.8734
Epoch 6/44
1500/1500 [==============================] - 6s 4ms/step - loss: 0.2768 - accuracy: 0.8961 - val_loss: 0.3098 - val_accuracy: 0.8892
Epoch 7/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2633 - accuracy: 0.9015 - val_loss: 0.3111 - val_accuracy: 0.8882
Epoch 8/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2523 - accuracy: 0.9062 - val_loss: 0.3315 - val_accuracy: 0.8816
Epoch 9/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2383 - accuracy: 0.9101 - val_loss: 0.3240 - val_accuracy: 0.8878
Epoch 10/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2306 - accuracy: 0.9134 - val_loss: 0.3053 - val_accuracy: 0.8928
Epoch 11/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2211 - accuracy: 0.9167 - val_loss: 0.3107 - val_accuracy: 0.8908
Epoch 12/44
1500/1500 [==============================] - 6s 4ms/step - loss: 0.2135 - accuracy: 0.9191 - val_loss: 0.3211 - val_accuracy: 0.8921
Epoch 13/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.2035 - accuracy: 0.9234 - val_loss: 0.3002 - val_accuracy: 0.8963
Epoch 14/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1976 - accuracy: 0.9267 - val_loss: 0.3201 - val_accuracy: 0.8913
Epoch 15/44
1500/1500 [==============================] - 7s 4ms/step - loss: 0.1917 - accuracy: 0.9277 - val_loss: 0.3272 - val_accuracy: 0.8894
Epoch 16/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1852 - accuracy: 0.9300 - val_loss: 0.3457 - val_accuracy: 0.8849
Epoch 17/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1781 - accuracy: 0.9326 - val_loss: 0.3270 - val_accuracy: 0.8959
Epoch 18/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1742 - accuracy: 0.9349 - val_loss: 0.3392 - val_accuracy: 0.8927
Epoch 19/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1674 - accuracy: 0.9376 - val_loss: 0.3463 - val_accuracy: 0.8934
Epoch 20/44
1500/1500 [==============================] - 6s 4ms/step - loss: 0.1643 - accuracy: 0.9381 - val_loss: 0.3508 - val_accuracy: 0.8916
Epoch 21/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1608 - accuracy: 0.9388 - val_loss: 0.3620 - val_accuracy: 0.8906
Epoch 22/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1529 - accuracy: 0.9423 - val_loss: 0.3384 - val_accuracy: 0.8933
Epoch 23/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1469 - accuracy: 0.9444 - val_loss: 0.3828 - val_accuracy: 0.8898
Epoch 24/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1442 - accuracy: 0.9459 - val_loss: 0.3661 - val_accuracy: 0.8913
Epoch 25/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1393 - accuracy: 0.9470 - val_loss: 0.3844 - val_accuracy: 0.8890
Epoch 26/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1389 - accuracy: 0.9474 - val_loss: 0.3702 - val_accuracy: 0.8951
Epoch 27/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1336 - accuracy: 0.9495 - val_loss: 0.3640 - val_accuracy: 0.8917
Epoch 28/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1279 - accuracy: 0.9529 - val_loss: 0.3799 - val_accuracy: 0.8945
Epoch 29/44
1500/1500 [==============================] - 6s 3ms/step - loss: 0.1242 - accuracy: 0.9528 - val_loss: 0.3882 - val_accuracy: 0.8910
Epoch 30/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1236 - accuracy: 0.9536 - val_loss: 0.3873 - val_accuracy: 0.8932
Epoch 31/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1185 - accuracy: 0.9544 - val_loss: 0.4191 - val_accuracy: 0.8908
Epoch 32/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1173 - accuracy: 0.9560 - val_loss: 0.4525 - val_accuracy: 0.8839
Epoch 33/44
1500/1500 [==============================] - 7s 4ms/step - loss: 0.1157 - accuracy: 0.9559 - val_loss: 0.4051 - val_accuracy: 0.8890
Epoch 34/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1102 - accuracy: 0.9582 - val_loss: 0.4285 - val_accuracy: 0.8918
Epoch 35/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1073 - accuracy: 0.9597 - val_loss: 0.4190 - val_accuracy: 0.8885
Epoch 36/44
1500/1500 [==============================] - 6s 4ms/step - loss: 0.1050 - accuracy: 0.9604 - val_loss: 0.4548 - val_accuracy: 0.8894
Epoch 37/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.1021 - accuracy: 0.9619 - val_loss: 0.4609 - val_accuracy: 0.8868
Epoch 38/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0989 - accuracy: 0.9628 - val_loss: 0.4640 - val_accuracy: 0.8840
Epoch 39/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0992 - accuracy: 0.9629 - val_loss: 0.4440 - val_accuracy: 0.8923
Epoch 40/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0951 - accuracy: 0.9640 - val_loss: 0.4530 - val_accuracy: 0.8888
Epoch 41/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0954 - accuracy: 0.9645 - val_loss: 0.4747 - val_accuracy: 0.8863
Epoch 42/44
1500/1500 [==============================] - 7s 4ms/step - loss: 0.0911 - accuracy: 0.9663 - val_loss: 0.4762 - val_accuracy: 0.8914
Epoch 43/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0911 - accuracy: 0.9653 - val_loss: 0.5152 - val_accuracy: 0.8880
Epoch 44/44
1500/1500 [==============================] - 5s 3ms/step - loss: 0.0875 - accuracy: 0.9673 - val_loss: 0.4565 - val_accuracy: 0.8943
###Markdown
To finish this tutorial, evaluate the hypermodel on the test data.
###Code
eval_result = hypermodel.evaluate(img_test, label_test)
print("[test loss, test accuracy]:", eval_result)
###Output
313/313 [==============================] - 2s 6ms/step - loss: 0.5062 - accuracy: 0.8881
[test loss, test accuracy]: [0.5062196254730225, 0.8881000280380249]
|
lab9_answered.ipynb | ###Markdown
Simulating Language, Lab 9, Gene-culture co-evolution We're going to use the same code as the last lab to do something similar to Smith & Kirby (2008) and discover what types of prior and learning strategy combinations are evolutionarily stable. You may be surprised to find that we really don't need much more than the code we already have to do this! Code from Lab 8Here's the code from Lab 8, with no changes.
###Code
import random
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf')
from math import log, log1p, exp
from scipy.special import logsumexp
from numpy import mean # This is a handy function that calculates the average of a list
###Output
_____no_output_____
###Markdown
Parameters for language
###Code
variables = 2 # The number of different variables in the language
variants = 2 # The number of different variants each variable can take
###Output
_____no_output_____
###Markdown
Log probability functions
###Code
def log_subtract(x,y):
return x + log1p(-exp(y - x))
def normalize_logprobs(logprobs):
logtotal = logsumexp(logprobs) #calculates the summed log probabilities
normedlogs = []
for logp in logprobs:
normedlogs.append(logp - logtotal) #normalise - subtracting in the log domain
#equivalent to dividing in the normal domain
return normedlogs
def log_roulette_wheel(normedlogs):
    r = log(random.random()) #generate a random number in [0,1), then convert to log
    accumulator = normedlogs[0]
    for i in range(len(normedlogs) - 1):
        if r < accumulator:
            return i
        accumulator = logsumexp([accumulator, normedlogs[i + 1]])
    return len(normedlogs) - 1 #if no earlier slice matched, r falls in the last one
def wta(probs):
maxprob = max(probs) # Find the maximum probability (works if these are logs or not)
candidates = []
for i in range(len(probs)):
if probs[i] == maxprob:
candidates.append(i) # Make a list of all the indices with that maximum probability
return random.choice(candidates)
###Output
_____no_output_____
###Markdown
Production of data
###Code
def produce(language, log_error_probability):
variable = random.randrange(len(language)) # Pick a variant to produce
correct_variant = language[variable]
if log(random.random()) > log_error_probability:
return variable, correct_variant # Return the variable, variant pair
else:
possible_error_variants = list(range(variants))
possible_error_variants.remove(correct_variant)
error_variant = random.choice(possible_error_variants)
return variable, error_variant
###Output
_____no_output_____
###Markdown
Function to check if language is regular
###Code
def regular(language):
first_variant = language[0]
for variant in language:
if variant != first_variant:
return False # The language can only be regular if every variant is the same as the first
return True
###Output
_____no_output_____
###Markdown
Prior
###Code
def logprior(language, log_bias):
if regular(language):
number_of_regular_languages = variants
return log_bias - log(number_of_regular_languages) #subtracting logs = dividing
else:
number_of_irregular_languages = variants ** variables - variants # the double star here means raise to the power
# e.g. 4 ** 2 is four squared
return log_subtract(0, log_bias) - log(number_of_irregular_languages)
# log(1) is 0, so log_subtract(0, bias) is equivalent to (1 - bias) in the
# non-log domain
###Output
_____no_output_____
###Markdown
Likelihood
###Code
def loglikelihood(data, language, log_error_probability):
loglikelihoods = []
logp_correct = log_subtract(0, log_error_probability) #probability of producing correct form
logp_incorrect = log_error_probability - log(variants - 1) #logprob of each incorrect variant
for utterance in data:
variable = utterance[0]
variant = utterance[1]
if variant == language[variable]:
loglikelihoods.append(logp_correct)
else:
loglikelihoods.append(logp_incorrect)
return sum(loglikelihoods) #summing log likelihoods = multiplying likelihoods
###Output
_____no_output_____
###Markdown
Learning
###Code
def all_languages(variables, variants):
if variables == 0:
return [[]] # The list of all languages with zero variables is just one language, and that's empty
else:
result = [] # If we are looking for a list of languages with more than zero variables,
# then we'll need to build a list
smaller_langs = all_languages(variables - 1, variants) # Let's first find all the languages with one
# fewer variables
for language in smaller_langs: # For each of these smaller languages, we're going to have to create a more
# complex language by adding each of the possible variants
for variant in range(variants):
result.append(language + [variant])
return result
def learn(data, log_bias, log_error_probability, learning_type):
list_of_all_languages = all_languages(variables, variants) # uses the parameters we set above
list_of_posteriors = []
for language in list_of_all_languages:
this_language_posterior = loglikelihood(data, language, log_error_probability) + logprior(language, log_bias)
list_of_posteriors.append(this_language_posterior)
if learning_type == 'map':
map_language_index = wta(list_of_posteriors) # For MAP learning, we pick the best language
map_language = list_of_all_languages[map_language_index]
return map_language
if learning_type == 'sample':
normalized_posteriors = normalize_logprobs(list_of_posteriors)
sampled_language_index = log_roulette_wheel(normalized_posteriors) # For sampling, we use the roulette wheel
sampled_language = list_of_all_languages[sampled_language_index]
return sampled_language
###Output
_____no_output_____
###Markdown
Iterated learning
###Code
def iterate(generations, bottleneck, log_bias, log_error_probability, learning_type):
language = random.choice(all_languages(variables, variants))
if regular(language):
accumulator = [1]
else:
accumulator = [0]
language_accumulator = [language]
for generation in range(generations):
data = []
for i in range(bottleneck):
data.append(produce(language, log_error_probability))
language = learn(data, log_bias, log_error_probability, learning_type)
if regular(language):
accumulator.append(1)
else:
accumulator.append(0)
language_accumulator.append(language)
return accumulator, language_accumulator
###Output
_____no_output_____
###Markdown
New codeImagine we have a population of individuals who share a cognitive bias and a learning strategy (i.e., sampling or map) that they are born with. In other words, it is encoded in their genes. These individuals transmit their linguistic behaviour culturally through iterated learning, eventually leading to a particular distribution over languages emerging. We can find that distribution for a particular combination of prior bias and learning strategy by running a long iterated learning chain, just like we were doing in the last lab.Now, imagine that there is some genetic mutation in this population and we have an individual who has a different prior and/or learning strategy. We can ask the question: will this mutation have an evolutionary advantage? In other words, will it spread through the population, or will it die out?To answer this question, we need first to think about what it means to have a survival advantage? One obvious answer is that you might have a survival advantage if you are able to learn the language of the population well. Presumably, if you learn the language of the population poorly you won't be able to communicate as well and will be at a disadvantage.The function `learning_success` allows us to estimate how well a particular type of learner will do when attempting to learn any one of a set of languages we input. The function takes the usual parameters you might expect: the bottleneck, the bias, the error probability, and the type of learner (`sample` or `map`). However, it also takes a list of different languages, and a number of test trials. Each test trial involves:1. picking at random one of the languages in the list, 2. producing a number of utterances from that language (using the `bottleneck` parameter)3. learning a new language from that list of utterances4. checking whether the new language is identical to the one we originally picked (in which case we count this as a learning success)At the end it gives us the proportion of trials which were successful.
###Code
def learning_success(bottleneck, log_bias, log_error_probability, learning_type, languages, trials):
success = 0
for i in range(trials):
input_language = random.choice(languages)
data = []
for i in range(bottleneck):
data.append(produce(input_language, log_error_probability))
output_language = learn(data, log_bias, log_error_probability, learning_type)
if output_language == input_language:
success = success + 1
return success / trials
###Output
_____no_output_____
###Markdown
We can use this function in combination with the iterate function to see how well a particular type of learner will learn languages that emerge from cultural evolution. For example, try the following:```languages = iterate(100000, 5, log(0.6), log(0.05), 'map')[1]print(learning_success(5, log(0.6), log(0.05), 'map', languages, 100000))```This will run an iterated learning simulation for 100,000 generations with a MAP learner and a bias of 0.6. Then it will test how well the same kind of learner learns the languages that emerge from that simulation. To get an accurate result, it runs the learning test for 100,000 trials. These two numbers (the generations and the test trials) don't need to be the same, but should ideally be quite large so that we can get accurate estimates. You can try running them with lower numbers a bunch of times and see how variable the results are to get a rough and ready idea of how accurate the samples are.
###Code
languages = iterate(100000, 5, log(0.6), log(0.05), 'map')[1]
print(learning_success(5, log(0.6), log(0.05), 'map', languages, 100000))
###Output
0.963
###Markdown
OK, but how does this help us tell what kind of biases and learning strategies will evolve? As I discussed above, we want to see if a mutation will have an advantage (and therefore is likely to spread through a population) or not. So, really, we want to know how well a learner who *isn't* the same as the one that created the languages will do at learning them. Try this:```print(learning_success(5, log(0.6), log(0.05), 'sample', languages, 100000))```The original list of languages was created by a population of MAP learners. Now we're testing what the expected success of a learner with a sampling strategy would be if exposed to one of these languages. If this number is higher than the number we got above, then the mutation could spread through the population. If this number is lower than the number we got above, we can expect it to die out. You may find that these numbers are quite similar (which is why we need large numbers for learning trials and generations to get an accurate estimate). This suggests that in some cases the selection pressure on the evolution of these genes might not be enormous, but small differences in fitness can nevertheless lead to big changes over time.
###Code
print(learning_success(5, log(0.6), log(0.05), 'sample', languages, 100000))
###Output
0.91062
###Markdown
QuestionThere's only one question for this lab, because I want you to think about how best you can explore it with the tools I've given you here! You could answer this question just by typing in a bunch of commands like the examples above, or you could try and come up with a way of looping through different combinations. If you want, you could try and come up with a measure quantifying how big an advantage (or disadvantage) a mutation has in a particular population. If you want to be really fancy, you could then visualise these results in a graph somehow (hint: you can use `plt.imshow` to visualise a 2-dimensional list of numbers).1. Which mutations will spread in different populations of learners, which mutations will die out, and which are selectively neutral (i.e. are neither better nor worse)? *My approach to this is going to be to try three different prior biases, from very weak to very strong, plus the two types of learner (sample vs. map). So first up, for each of these combinations we'll run a long simulation to gather the set of languages that would emerge in a population with that learning strategy/bias combination. Just to keep things neat, let's write a function to do that.*
###Code
def generate_stationary_distributions(bias_learning_type_pairs):
stationary_distributions = []
for bias, learning_type in bias_learning_type_pairs:
print(bias, learning_type)
languages = iterate(100000, 5, log(bias), log(0.05), learning_type)[1]
stationary_distributions.append(languages)
return stationary_distributions
###Output
_____no_output_____
###Markdown
*This function I've just defined takes a list of bias, learning type pairs and runs a long simulation for each of them. You can think of a combination of a learning bias and a learning type (i.e. hypothesis selection strategy) as characterising a learner - it's what we assume is innate, and therefore provided by evolution. Let's choose a range of biases in favour of regularity from relatively weak (near 0.5) to relatively strong (near 1.0) and run these for both sample and map. This list below gives these different possible learners.*
###Code
learners = [(0.6, 'sample'), (0.7, 'sample'), (0.8, 'sample'),
(0.6, 'map'), (0.7, 'map'), (0.8, 'map')]
###Output
_____no_output_____
###Markdown
*Now we use this list and the function I defined to generate a list of stationary distributions (i.e. a list of languages) for each of these. **Strictly speaking, these aren't exactly the stationary distributions** since it should take some time for the culturally evolving system to settle into the stationary distribution. In other words, it'll take some time for the influence of the first language to be "washed out". However, since we're running for 100,000 generations, we can probably ignore this. (But maybe it would be better to change this to look only at the second half of the run?). For some values of bias (very high or very low), you may need to run longer simulations (both here and when evaluating learning in the next step) before you get accurate values, so please do bear that in mind!*
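*A hedged sketch of that burn-in tweak (my suggestion; the code below does not apply it) would be to keep only the second half of each run, e.g. ```languages = iterate(100000, 5, log(bias), log(0.05), learning_type)[1]languages = languages[len(languages) // 2:]```*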
###Code
stationary_distributions = generate_stationary_distributions(learners)
###Output
0.6 sample
0.7 sample
0.8 sample
0.6 map
0.7 map
0.8 map
###Markdown
*Now we need to test each of our six learners on each of these six distributions. This corresponds to how well a "mutant" learner will fare in a majority learner's culture. Here's a function to do this, which will give the result as a table (actually a list of lists). Each row of the table will correspond to the mutant learner, and each column will be the stationary distribution (i.e. the majority learner).*
###Code
def table_of_success(bias_learning_type_pairs, stationary_distributions):
table = []
for bias, learning_type in bias_learning_type_pairs:
print(bias, learning_type)
table_row = []
for languages in stationary_distributions:
success = learning_success(5, log(bias), log(0.05), learning_type, languages, 100000)
table_row.append(success)
table.append(table_row)
return table
results = table_of_success(learners, stationary_distributions)
###Output
0.6 sample
0.7 sample
0.8 sample
0.6 map
0.7 map
0.8 map
###Markdown
*Let's look at those results... we'll start by just printing the table out, then trying to print it a bit more neatly!*
###Code
print(results)
for row in results:
for cell in row:
print(cell, end='\t') # this prints with a tab instead of a new line
print('\n') # this prints a newline
###Output
0.89853 0.90327 0.90714 0.91037 0.91037 0.90928
0.89818 0.90649 0.91742 0.92222 0.92156 0.92237
0.89445 0.90784 0.92282 0.93332 0.93339 0.93432
0.92808 0.94137 0.95254 0.9619 0.96154 0.96123
0.9272 0.93935 0.95323 0.96234 0.96197 0.96215
0.92731 0.94078 0.95305 0.96371 0.96151 0.96259
###Markdown
*Let's try and visualise these a bit better. Here's my first attempt, with `plt.imshow`*
###Code
plt.imshow(results)
###Output
_____no_output_____
###Markdown
*If I get a graph that looks useful, I then go to the matplotlib website and try and figure out how to make it more useful... This was a bit fiddly, but here's what I came up with after reading that website and googling around a bit :-)*
###Code
fig, ax = plt.subplots(1, 1)
fig = ax.imshow(results, extent=[0,6,6,0], cmap='coolwarm')
labels = ['.6 S', '.7 S', '.8 S', '.6 M', '.7 M', '.8 M']
ax.set_xticks([.5,1.5,2.5,3.5,4.5,5.5])
ax.set_xticklabels(labels)
ax.set_yticks([.5,1.5,2.5,3.5,4.5,5.5])
ax.set_yticklabels(labels)
ax.set_ylabel("Mutant")
ax.set_xlabel("Majority")
plt.colorbar(fig)
###Output
_____no_output_____
###Markdown
*So, it looks like there are general differences in strategy, with MAP learners learning better than samplers. But really, what we want to know is not the overall learning success, but whether a mutant learner is better than the majority learner in the population into which it is born. If it is better, then it has a chance of taking over the population. To figure this out we need to know how well the learner will do if born into a population of other learners who are the same and then compare a mutant to this. If you think about it, this is the diagonal of the table above (i.e. when the mutant *is* the learner that created the stationary distribution). We can extract this as follows:*
###Code
self_learning = []
for i in range(6):
self_learning.append(results[i][i])
print(self_learning)
###Output
[0.89853, 0.90649, 0.92282, 0.9619, 0.96197, 0.96259]
###Markdown
*Now we can compare each cell in the table and see if the learning success for the mutant is higher than the non-mutant, lower or the same.*
###Code
for minority in range(6):
for majority in range(6):
if results[minority][majority] > self_learning[majority]:
print(learners[minority], end=' ')
print('invades a population of', end=' ')
print(learners[majority])
elif results[minority][majority] < self_learning[majority]:
print(learners[minority], end=' ')
print('dies out in a population of', end=' ')
print(learners[majority])
###Output
(0.6, 'sample') dies out in a population of (0.7, 'sample')
(0.6, 'sample') dies out in a population of (0.8, 'sample')
(0.6, 'sample') dies out in a population of (0.6, 'map')
(0.6, 'sample') dies out in a population of (0.7, 'map')
(0.6, 'sample') dies out in a population of (0.8, 'map')
(0.7, 'sample') dies out in a population of (0.6, 'sample')
(0.7, 'sample') dies out in a population of (0.8, 'sample')
(0.7, 'sample') dies out in a population of (0.6, 'map')
(0.7, 'sample') dies out in a population of (0.7, 'map')
(0.7, 'sample') dies out in a population of (0.8, 'map')
(0.8, 'sample') dies out in a population of (0.6, 'sample')
(0.8, 'sample') invades a population of (0.7, 'sample')
(0.8, 'sample') dies out in a population of (0.6, 'map')
(0.8, 'sample') dies out in a population of (0.7, 'map')
(0.8, 'sample') dies out in a population of (0.8, 'map')
(0.6, 'map') invades a population of (0.6, 'sample')
(0.6, 'map') invades a population of (0.7, 'sample')
(0.6, 'map') invades a population of (0.8, 'sample')
(0.6, 'map') dies out in a population of (0.7, 'map')
(0.6, 'map') dies out in a population of (0.8, 'map')
(0.7, 'map') invades a population of (0.6, 'sample')
(0.7, 'map') invades a population of (0.7, 'sample')
(0.7, 'map') invades a population of (0.8, 'sample')
(0.7, 'map') invades a population of (0.6, 'map')
(0.7, 'map') dies out in a population of (0.8, 'map')
(0.8, 'map') invades a population of (0.6, 'sample')
(0.8, 'map') invades a population of (0.7, 'sample')
(0.8, 'map') invades a population of (0.8, 'sample')
(0.8, 'map') invades a population of (0.6, 'map')
(0.8, 'map') dies out in a population of (0.7, 'map')
###Markdown
*So, it looks like MAP learners invade populations of samplers often, but never the other way around. Also, it looks like samplers that don't match the specific bias of the population die out, whereas that's not so clearly the case with MAP. However, there's a problem with this way of looking at things. This doesn't show us how big an advantage one type of learner has over another, and because these are simulation runs, the results are going to be quite variable and we might have a tiny difference showing up just by chance. Because of this, let's instead plot the results, but using a ratio of mutant success to majority success. This will give us an estimate of the **selective advantage** the mutant has. We'll make a new table of ratios and plot this.*
###Code
new_results = []
for minority in range(6):
new_row = []
for majority in range(6):
new_row.append(results[minority][majority] / self_learning[majority])
new_results.append(new_row)
fig, ax = plt.subplots(1, 1)
fig = ax.imshow(new_results, extent=[0,6,6,0], cmap='coolwarm')
labels = ['.6 S', '.7 S', '.8 S', '.6 M', '.7 M', '.8 M']
ax.set_xticks([.5,1.5,2.5,3.5,4.5,5.5])
ax.set_xticklabels(labels)
ax.set_yticks([.5,1.5,2.5,3.5,4.5,5.5])
ax.set_yticklabels(labels)
ax.set_ylabel("Mutant")
ax.set_xlabel("Majority")
plt.colorbar(fig)
###Output
_____no_output_____ |
tareas/Tarea Transformada Laplace_v2.ipynb | ###Markdown
AssignmentNames: **Put your full names here, separated by commas** Suppose you have a continuous-time system that is excited by an input $x(t)$ and responds with a signal $y(t)$, as shown in the figure:Analyze the system model in each of the following cases: |Case | Equation ||------|----------------------------------------------------------------------------------------|| A | \begin{equation} \frac{dy}{dt} + 5y(t) = 5x(t) \end{equation} || B | \begin{equation} \frac{dy}{dt} - 5y(t) = 5x(t) \end{equation} || C | \begin{equation} \frac{d^{2}y}{dt^{2}} + 5\frac{dy}{dt} + y(t) = x(t) \end{equation} || D | \begin{equation} \frac{d^{2}y}{dt^{2}} + y(t) = x(t) \end{equation} | Analysis- Take the differential equation to the frequency domain using the Laplace transform.\begin{equation}WriteTheTransformedEquationHere\end{equation}- Find the transfer function of the system.\begin{equation}F_A(s) = WriteTheTransferFunctionHere\end{equation}- Plot the pole-zero map- Plot the step response
###Code
## The code to generate the requested plots goes here. Run the code to generate the plots.
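# --- Hedged sketch (an assumed scipy.signal approach, not the official solution) ---
# Worked for Case A, where dy/dt + 5 y(t) = 5 x(t) transforms to F_A(s) = 5 / (s + 5).
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

sys_a = signal.TransferFunction([5], [1, 5]) # numerator and denominator coefficients of F_A(s)

# pole-zero map: poles as 'x', zeros as 'o' in the complex s-plane
plt.scatter(np.real(sys_a.poles), np.imag(sys_a.poles), marker='x', color='r', label='poles')
plt.scatter(np.real(sys_a.zeros), np.imag(sys_a.zeros), marker='o', color='b', label='zeros')
plt.axvline(0, color='gray', linewidth=0.5)
plt.xlabel('Re(s)')
plt.ylabel('Im(s)')
plt.legend()
plt.title('Pole-zero map, case A')
plt.show()

# step response
t, y = signal.step(sys_a)
plt.plot(t, y)
plt.xlabel('t')
plt.ylabel('y(t)')
plt.title('Step response, case A')
plt.show()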
###Output
_____no_output_____
###Markdown
- Analyze the plots obtained, write your analysis, and determine the stability of the system and the type of damping. - Write your discussion here. - You may use bullet points or paragraphs. - Keep the indentation to make reading easier.
###Code
## Any extra code you may need to answer the questions goes here.
###Output
_____no_output_____
###Markdown
Suppose that systems $B$ and $C$ interact in such a way that the output of $B$ is the input of $C$.- What is the equivalent transfer function for these connected systems?- Plot the pole-zero map- Plot the step response
###Code
## The code to generate the requested plots goes here. Run the code to generate the plots.
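# --- Hedged sketch (assumed approach, not the official solution) ---
# Systems in series multiply in the s-domain: with F_B(s) = 5/(s-5) and
# F_C(s) = 1/(s^2 + 5s + 1), the equivalent is F_BC(s) = 5 / ((s-5)(s^2 + 5s + 1)).
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

den = np.polymul([1, -5], [1, 5, 1]) # expand (s - 5)(s^2 + 5s + 1)
sys_bc = signal.TransferFunction([5], den)
print(sys_bc.poles) # the pole at s = +5 already signals an unstable equivalent system

t, y = signal.step(sys_bc)
plt.plot(t, y)
plt.title('Step response of the B-C series connection (diverges)')
plt.show()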
###Output
_____no_output_____ |
compare_pan_genomes/report_template.ipynb | ###Markdown
Pan-genome comparison report
###Code
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib_venn import venn2, venn3
import plotly.graph_objects as go
import plotly.express as px
import plotly.figure_factory as ff
# inputs
# PG names
pg1_name = "<PG1_NAME>"
pg2_name = "<PG2_NAME>"
# PAV
pg1_pav = "<PG1_PAV>"
pg2_pav = "<PG2_PAV>"
true_pg_pav = "<TRUE_PAV>"
# Non-ref matches TSV
pg1_vs_pg2_matches = "<PG1_VS_PG2_NON_REF_MATCHES>"
pg1_vs_true_matches = "<PG1_VS_TRUE_NON_REF_MATCHES>"
pg2_vs_true_matches = "<PG2_VS_TRUE_NON_REF_MATCHES>"
# read in PAV and matches tables
pg1_pav_df = pd.read_csv(pg1_pav, sep='\t', index_col = 0)
pg2_pav_df = pd.read_csv(pg2_pav, sep='\t', index_col = 0)
# assuming same sample names, adjust order
pg2_pav_df = pg2_pav_df[list(pg1_pav_df.columns)]
pg1_vs_pg2_matches_df = pd.read_csv(pg1_vs_pg2_matches, sep='\t')
# number of samples in PGs
n_samples = pg1_pav_df.shape[1]
# convert PG1 and PG2 to common naming, according to true PG
def match_name(row, matches_df, pg_name, other_pg_name, rename):
if row.name.startswith('PanGene'):
if row.name in matches_df[pg_name].values:
if rename:
return matches_df.loc[matches_df[pg_name] == row.name][other_pg_name].iloc[0]
else:
return row.name
else:
return row.name + '__' + pg_name + "_unmatched"
else:
return re.sub(r'[^0-9a-zA-Z\-\._]+','_',row.name)
# create rename series
pg1_rename = pg1_pav_df.apply(match_name, args=(pg1_vs_pg2_matches_df, pg1_name, pg2_name, False), axis=1)
pg2_rename = pg2_pav_df.apply(match_name, args=(pg1_vs_pg2_matches_df, pg2_name, pg1_name, True), axis=1)
# rename
pg1_pav_df.index = pg1_pav_df.index.map(pg1_rename)
pg2_pav_df.index = pg2_pav_df.index.map(pg2_rename)
# calculate pan-gene occupancies
pg1_occupancy = pg1_pav_df.sum(axis=1)
pg1_occupancy = pg1_occupancy.loc[pg1_occupancy > 0]
pg2_occupancy = pg2_pav_df.sum(axis=1)
pg2_occupancy = pg2_occupancy.loc[pg2_occupancy > 0]
###Output
_____no_output_____
###Markdown
Basic stats
###Code
def stats_from_pav_df(df):
total_pangenes = df.shape[0]
non_ref_pangenes = df.loc[df.index.str.startswith('PanGene')].shape[0]
ref_pangenes = total_pangenes - non_ref_pangenes
non_ref_unmatched = df.loc[(df.index.str.startswith('PanGene')) & (df.index.str.endswith('_unmatched'))].shape[0]
non_ref_matched = non_ref_pangenes - non_ref_unmatched
n_samples = df.shape[1]
occup = df.sum(axis=1)
core = (occup == n_samples).sum()
shell = (occup.between(1,n_samples,inclusive=False)).sum()
singletons = (occup == 1).sum()
index = ['Total pan-genes', 'Reference pan-genes', 'Non-reference pan-genes',
'Matched non-reference pan-genes', 'Unmatched non-reference pan-genes',
'Core pan-genes', 'Shell pan-genes', 'Singletons']
values = [total_pangenes, ref_pangenes, non_ref_pangenes, non_ref_matched, non_ref_unmatched, core, shell, singletons]
return pd.Series(values, index = index)
pg1_stats = stats_from_pav_df(pg1_pav_df)
pg2_stats = stats_from_pav_df(pg2_pav_df)
stats_df = pd.concat([pg1_stats,pg2_stats], axis=1)
stats_df.columns = [pg1_name, pg2_name]
stats_df
# plot overlap of non-ref genes
pg1_nonref_genes = set(pg1_pav_df.loc[pg1_pav_df.index.str.startswith('PanGene')].index)
pg2_nonref_genes = set(pg2_pav_df.loc[pg2_pav_df.index.str.startswith('PanGene')].index)
venn2([pg1_nonref_genes, pg2_nonref_genes], set_labels=[pg1_name,pg2_name])
plt.title('Overlap of non-reference genes')
plt.show()
# plot occupancy distributions
pg1_occup_counts = pg1_occupancy.value_counts().sort_index()
pg2_occup_counts = pg2_occupancy.value_counts().sort_index()
fig = go.Figure(data=[
go.Bar(name=pg1_name, x=pg1_occup_counts.index, y=pg1_occup_counts), # use each PG's own occupancy index so the bars align even if the two ranges differ
go.Bar(name=pg2_name, x=pg2_occup_counts.index, y=pg2_occup_counts)]
)
# Change the bar mode
fig.update_layout(barmode='group', title='Occupancy histogram', xaxis_title="Occupancy", yaxis_title="# of pan-genes")
fig.show()
# plot number of genes per accession
pg1_genes_per_acc = pg1_pav_df.sum()
pg2_genes_per_acc = pg2_pav_df.sum()
x = pg1_genes_per_acc.index
fig = go.Figure(data=[
go.Bar(name=pg1_name, x=x, y=pg1_genes_per_acc),
go.Bar(name=pg2_name, x=x, y=pg2_genes_per_acc),
])
# Change the bar mode
fig.update_layout(barmode='group', title='Pan-genes per accession', xaxis_title="Accession", yaxis_title="# of pan-genes")
fig.show()
###Output
_____no_output_____
###Markdown
Discrepancies between pan-genomes
###Code
# Add unmatched pan-genes from each PG to the other PG (as absent in all samples)
# this ensures both PGs have the same set of genes
pg1_unmatched_df = pg1_pav_df.loc[~pg1_pav_df.index.isin(pg2_pav_df.index)]
for col in pg1_unmatched_df.columns:
pg1_unmatched_df[col].values[:] = 0
pg2_unmatched_df = pg2_pav_df.loc[~pg2_pav_df.index.isin(pg1_pav_df.index)]
for col in pg2_unmatched_df.columns:
pg2_unmatched_df[col].values[:] = 0
pg1_pav_df_plus_pg2_unmatched = pg1_pav_df.append(pg2_unmatched_df)
pg2_pav_df_plus_pg1_unmatched = pg2_pav_df.append(pg1_unmatched_df)
# sort columns and gene names in both DFs, so the order is identical
accessions = list(pg1_pav_df_plus_pg2_unmatched.columns.sort_values())
pg1_pav_df_plus_pg2_unmatched = pg1_pav_df_plus_pg2_unmatched[accessions].sort_index()
pg2_pav_df_plus_pg1_unmatched = pg2_pav_df_plus_pg1_unmatched[accessions].sort_index()
# find discrepancies
pav_diff = (pg1_pav_df_plus_pg2_unmatched - pg2_pav_df_plus_pg1_unmatched)
pg1_rename_df = pd.DataFrame(pg1_rename).reset_index()
pg1_rename_df.columns = [pg1_name + '_orig_name', 'new_name']
pg2_rename_df = pd.DataFrame(pg2_rename).reset_index()
pg2_rename_df.columns = [pg2_name + '_orig_name', 'new_name']
# create discrepancies table
discrep_df = pav_diff.reset_index().melt(id_vars='gene', value_vars=pav_diff.columns)
discrep_df.columns = ['gene','sample','type']
discrep_df = discrep_df.loc[discrep_df['type'] != 0]
# add original gene names
discrep_df = discrep_df.merge(pg1_rename_df, how='left', left_on='gene', right_on='new_name')
discrep_df = discrep_df.merge(pg2_rename_df, how='left', left_on='gene', right_on='new_name')
discrep_df = discrep_df[['gene', pg1_name + '_orig_name', pg2_name + '_orig_name', 'sample', 'type']]
# print to file
discrep_df.to_csv('discrepancies.tsv', sep='\t', index=False)
# calculate stats (separate by ref vs. non-ref)
total_cells = pav_diff.count().sum()
total_discrep = (pav_diff != 0).astype(int).sum(axis=1).sum()
in_pg1_not_in_pg2 = (pav_diff == 1).astype(int).sum(axis=1).sum()
in_pg2_not_in_pg1 = (pav_diff == -1).astype(int).sum(axis=1).sum()
pav_diff_ref = pav_diff.loc[~(pav_diff.index.str.startswith('PanGene'))]
pav_diff_nonref = pav_diff.loc[pav_diff.index.str.startswith('PanGene')]
total_ref_cells = pav_diff_ref.count().sum()
total_nonref_cells = pav_diff_nonref.count().sum()
ref_discrep = (pav_diff_ref != 0).astype(int).sum(axis=1).sum()
ref_in_pg1_not_in_pg2 = (pav_diff_ref == 1).astype(int).sum(axis=1).sum()
ref_in_pg2_not_in_pg1 = (pav_diff_ref == -1).astype(int).sum(axis=1).sum()
nonref_discrep = (pav_diff_nonref != 0).astype(int).sum(axis=1).sum()
nonref_in_pg1_not_in_pg2 = (pav_diff_nonref == 1).astype(int).sum(axis=1).sum()
nonref_in_pg2_not_in_pg1 = (pav_diff_nonref == -1).astype(int).sum(axis=1).sum()
# create discrepancies stats table
ind = ['All', 'Ref', 'Non-ref']
cells = [total_cells, total_ref_cells, total_nonref_cells]
discrep = [total_discrep, ref_discrep, nonref_discrep]
pres_in_pg1_abs_in_pg2 = [in_pg1_not_in_pg2, ref_in_pg1_not_in_pg2, nonref_in_pg1_not_in_pg2]
pres_in_pg2_abs_in_pg1 = [in_pg2_not_in_pg1, ref_in_pg2_not_in_pg1, nonref_in_pg2_not_in_pg1]
discrep_stats_df = pd.DataFrame({'Cells': cells,
"Total discrepancies": discrep,
"P in %s and A in %s" %(pg1_name,pg2_name) : pres_in_pg1_abs_in_pg2,
"P in %s and A in %s" %(pg2_name,pg1_name) : pres_in_pg2_abs_in_pg1},
index = ind)
discrep_stats_df
# discrepancies per gene
discrep_per_gene = pav_diff.apply(lambda row: abs(row).sum(), axis=1)
fig = px.histogram(discrep_per_gene, title="Histogram of discrepancies per pan-gene",
labels={'value': '# of discrepancies'})
fig.show()
# discrepancies per gene - non-ref only
discrep_per_nonref_gene = pav_diff_nonref.apply(lambda row: abs(row).sum(), axis=1)
fig = px.histogram(discrep_per_nonref_gene, title="Histogram of discrepancies per non-ref pan-gene",
labels={'value': '# of discrepancies'})
fig.show()
# occupancy diff
pg1_pav_df_plus_pg2_unmatched['occupancy'] = pg1_pav_df_plus_pg2_unmatched.sum(axis=1)
pg2_pav_df_plus_pg1_unmatched['occupancy'] = pg2_pav_df_plus_pg1_unmatched.sum(axis=1)
occup_diff = pg1_pav_df_plus_pg2_unmatched['occupancy'] - pg2_pav_df_plus_pg1_unmatched['occupancy']
fig = px.histogram(occup_diff, title="Histogram of occupancy differences",
labels={'value': 'Occupancy difference'})
fig.show()
# occupancy diff - non-ref only
occup_diff_nonref = pg1_pav_df_plus_pg2_unmatched.loc[pav_diff.index.str.startswith('PanGene')]['occupancy'] - pg2_pav_df_plus_pg1_unmatched.loc[pav_diff.index.str.startswith('PanGene')]['occupancy']
fig = px.histogram(occup_diff_nonref, title="Histogram of occupancy differences of non-reference pan-genes",
labels={'value': 'Occupancy difference'})
fig.show()
# occupancy in PG1 vs. occupancy in PG2
tmp_df = pd.concat([pg1_pav_df_plus_pg2_unmatched['occupancy'], pg2_pav_df_plus_pg1_unmatched['occupancy']], axis=1)
tmp_df['pan-gene'] = tmp_df.index
tmp_df.columns = [pg1_name + ' occupancy', pg2_name + ' occupancy','pan-gene']
tmp_df = tmp_df.groupby([pg1_name + ' occupancy', pg2_name + ' occupancy']).count().unstack(level=0).fillna(0)
tmp_df.columns = tmp_df.columns.droplevel(0)
tmp_df = tmp_df.transpose()
tmp_df.loc[:,0:n_samples] = tmp_df.loc[:,0:n_samples].div(tmp_df.sum(axis=1), axis=0)*100
fig = px.imshow(tmp_df)
fig.show()
# occupancy vs. discrepancies
# use occupancies of true PG
tmp_df = pd.concat([pg1_occupancy, discrep_per_gene], axis=1, join='inner')
tmp_df['pan-gene'] = tmp_df.index
tmp_df.columns = ['occupancy','discrepancies','pan-gene']
tmp_df = tmp_df.groupby(['occupancy', 'discrepancies']).count().unstack(level=0).fillna(0)
tmp_df.columns = tmp_df.columns.droplevel(0)
tmp_df = tmp_df.transpose()
tmp_df.loc[:,0:n_samples] = tmp_df.loc[:,0:n_samples].div(tmp_df.sum(axis=1), axis=0)*100
fig = px.imshow(tmp_df)
fig.show()
###Output
_____no_output_____ |
lab 1.ipynb | ###Markdown
**Welcome to COMP 593!**For your first lab, we will experiment with running a script, and saving our project to our personal github repositories: Installing DependenciesDependencies are routines, objects, and methods that a project requires. We add dependencies to our project in the form of **Libraries** when we want to unlock functionality that already exists; this could be as simple as file IO or as complex as fully fledged Machine Learning libraries. Libraries can be added to our project manually, by downloading them and placing them in our runtime environment, or using a **Package Manager** such as PIP. Run the code below to download the **pyfiglet** library, which we will use to generate some ASCII art.
###Code
pip install pyfiglet
###Output
Collecting pyfiglet
  Downloading pyfiglet-0.8.post1-py2.py3-none-any.whl (865 kB)
Installing collected packages: pyfiglet
Successfully installed pyfiglet-0.8.post1
###Markdown
There are *hundreds of thousands* of python libraries at your disposal. Some may suit your needs better than others depending on the goals of your scripts or applications. The [PyPi Repository](https://pypi.org/) contains a searchable database of packages that are installable via the pip package manager.Run the code below to get an idea of the number of packages that are included for your user within Colab. You don't need to know what all of these do, but it should indicate that Python is a very powerful language.
###Code
pip list
###Output
Package Version
----------------------------- --------------
absl-py 1.0.0
alabaster 0.7.12
albumentations 0.1.12
altair 4.2.0
appdirs 1.4.4
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arviz 0.11.4
astor 0.8.1
astropy 4.3.1
astunparse 1.6.3
atari-py 0.2.9
atomicwrites 1.4.0
attrs 21.4.0
audioread 2.1.9
autograd 1.3
Babel 2.9.1
backcall 0.2.0
beautifulsoup4 4.6.3
bleach 4.1.0
blis 0.4.1
bokeh 2.3.3
Bottleneck 1.3.2
branca 0.4.2
bs4 0.0.1
CacheControl 0.12.10
cached-property 1.5.2
cachetools 4.2.4
catalogue 1.0.0
certifi 2021.10.8
cffi 1.15.0
cftime 1.5.2
chardet 3.0.4
charset-normalizer 2.0.11
click 7.1.2
cloudpickle 1.3.0
cmake 3.12.0
cmdstanpy 0.9.5
colorcet 3.0.0
colorlover 0.3.0
community 1.0.0b1
contextlib2 0.5.5
convertdate 2.4.0
coverage 3.7.1
coveralls 0.5
crcmod 1.7
cufflinks 0.17.3
cvxopt 1.2.7
cvxpy 1.0.31
cycler 0.11.0
cymem 2.0.6
Cython 0.29.27
daft 0.0.4
dask 2.12.0
datascience 0.10.6
debugpy 1.0.0
decorator 4.4.2
defusedxml 0.7.1
descartes 1.1.0
dill 0.3.4
distributed 1.25.3
dlib 19.18.0
dm-tree 0.1.6
docopt 0.6.2
docutils 0.17.1
dopamine-rl 1.0.5
earthengine-api 0.1.296
easydict 1.9
ecos 2.0.10
editdistance 0.5.3
en-core-web-sm 2.2.5
entrypoints 0.4
ephem 4.1.3
et-xmlfile 1.1.0
fa2 0.3.5
fastai 1.0.61
fastdtw 0.3.4
fastprogress 1.0.0
fastrlock 0.8
fbprophet 0.7.1
feather-format 0.4.1
filelock 3.4.2
firebase-admin 4.4.0
fix-yahoo-finance 0.0.22
Flask 1.1.4
flatbuffers 2.0
folium 0.8.3
future 0.16.0
gast 0.4.0
GDAL 2.2.2
gdown 4.2.1
gensim 3.6.0
geographiclib 1.52
geopy 1.17.0
gin-config 0.5.0
glob2 0.7
google 2.0.3
google-api-core 1.26.3
google-api-python-client 1.12.10
google-auth 1.35.0
google-auth-httplib2 0.0.4
google-auth-oauthlib 0.4.6
google-cloud-bigquery 1.21.0
google-cloud-bigquery-storage 1.1.0
google-cloud-core 1.0.3
google-cloud-datastore 1.8.0
google-cloud-firestore 1.7.0
google-cloud-language 1.2.0
google-cloud-storage 1.18.1
google-cloud-translate 1.5.0
google-colab 1.0.0
google-pasta 0.2.0
google-resumable-media 0.4.1
googleapis-common-protos 1.54.0
googledrivedownloader 0.4
graphviz 0.10.1
greenlet 1.1.2
grpcio 1.43.0
gspread 3.4.2
gspread-dataframe 3.0.8
gym 0.17.3
h5py 3.1.0
HeapDict 1.0.1
hijri-converter 2.2.2
holidays 0.10.5.2
holoviews 1.14.7
html5lib 1.0.1
httpimport 0.5.18
httplib2 0.17.4
httplib2shim 0.0.3
humanize 0.5.1
hyperopt 0.1.2
ideep4py 2.0.0.post3
idna 2.10
imageio 2.4.1
imagesize 1.3.0
imbalanced-learn 0.8.1
imblearn 0.0
imgaug 0.2.9
importlib-metadata 4.10.1
importlib-resources 5.4.0
imutils 0.5.4
inflect 2.1.0
iniconfig 1.1.1
intel-openmp 2022.0.2
intervaltree 2.1.0
ipykernel 4.10.1
ipython 5.5.0
ipython-genutils 0.2.0
ipython-sql 0.3.9
ipywidgets 7.6.5
itsdangerous 1.1.0
jax 0.2.25
jaxlib 0.1.71+cuda111
jedi 0.18.1
jieba 0.42.1
Jinja2 2.11.3
joblib 1.1.0
jpeg4py 0.1.4
jsonschema 4.3.3
jupyter 1.0.0
jupyter-client 5.3.5
jupyter-console 5.2.0
jupyter-core 4.9.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.2
kaggle 1.5.12
kapre 0.3.7
keras 2.7.0
Keras-Preprocessing 1.1.2
keras-vis 0.4.1
kiwisolver 1.3.2
korean-lunar-calendar 0.2.1
libclang 13.0.0
librosa 0.8.1
lightgbm 2.2.3
llvmlite 0.34.0
lmdb 0.99
LunarCalendar 0.0.9
lxml 4.2.6
Markdown 3.3.6
MarkupSafe 2.0.1
matplotlib 3.2.2
matplotlib-inline 0.1.3
matplotlib-venn 0.11.6
missingno 0.5.0
mistune 0.8.4
mizani 0.6.0
mkl 2019.0
mlxtend 0.14.0
more-itertools 8.12.0
moviepy 0.2.3.5
mpmath 1.2.1
msgpack 1.0.3
multiprocess 0.70.12.2
multitasking 0.0.10
murmurhash 1.0.6
music21 5.5.0
natsort 5.5.0
nbclient 0.5.10
nbconvert 5.6.1
nbformat 5.1.3
nest-asyncio 1.5.4
netCDF4 1.5.8
networkx 2.6.3
nibabel 3.0.2
nltk 3.2.5
notebook 5.3.1
numba 0.51.2
numexpr 2.8.1
numpy 1.19.5
nvidia-ml-py3 7.352.0
oauth2client 4.1.3
oauthlib 3.2.0
okgrade 0.4.3
opencv-contrib-python 4.1.2.30
opencv-python 4.1.2.30
openpyxl 3.0.9
opt-einsum 3.3.0
osqp 0.6.2.post0
packaging 21.3
palettable 3.3.0
pandas 1.3.5
pandas-datareader 0.9.0
pandas-gbq 0.13.3
pandas-profiling 1.4.1
pandocfilters 1.5.0
panel 0.12.1
param 1.12.0
parso 0.8.3
pathlib 1.0.1
patsy 0.5.2
pep517 0.12.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 7.1.2
pip 21.1.3
pip-tools 6.2.0
plac 1.1.3
plotly 5.5.0
plotnine 0.6.0
pluggy 0.7.1
pooch 1.6.0
portpicker 1.3.9
prefetch-generator 1.0.1
preshed 3.0.6
prettytable 3.0.0
progressbar2 3.38.0
prometheus-client 0.13.1
promise 2.3
prompt-toolkit 1.0.18
protobuf 3.17.3
psutil 5.4.8
psycopg2 2.7.6.1
ptyprocess 0.7.0
py 1.11.0
pyarrow 6.0.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycocotools 2.0.4
pycparser 2.21
pyct 0.4.8
pydata-google-auth 1.3.0
pydot 1.3.0
pydot-ng 2.0.0
pydotplus 2.0.2
PyDrive 1.3.1
pyemd 0.5.1
pyerfa 2.0.0.1
pyfiglet 0.8.post1
pyglet 1.5.0
Pygments 2.6.1
pygobject 3.26.1
pymc3 3.11.4
PyMeeus 0.5.11
pymongo 4.0.1
pymystem3 0.2.0
PyOpenGL 3.1.5
pyparsing 3.0.7
pyrsistent 0.18.1
pysndfile 1.3.8
PySocks 1.7.1
pystan 2.19.1.1
pytest 3.6.4
python-apt 0.0.0
python-chess 0.23.11
python-dateutil 2.8.2
python-louvain 0.16
python-slugify 5.0.2
python-utils 3.1.0
pytz 2018.9
pyviz-comms 2.1.0
PyWavelets 1.2.0
PyYAML 3.13
pyzmq 22.3.0
qdldl 0.1.5.post0
qtconsole 5.2.2
QtPy 2.0.1
regex 2019.12.20
requests 2.23.0
requests-oauthlib 1.3.1
resampy 0.2.2
rpy2 3.4.5
rsa 4.8
scikit-image 0.18.3
scikit-learn 1.0.2
scipy 1.4.1
screen-resolution-extra 0.0.0
scs 3.1.0
seaborn 0.11.2
semver 2.13.0
Send2Trash 1.8.0
setuptools 57.4.0
setuptools-git 1.2
Shapely 1.8.0
simplegeneric 0.8.1
six 1.15.0
sklearn 0.0
sklearn-pandas 1.8.0
smart-open 5.2.1
snowballstemmer 2.2.0
sortedcontainers 2.4.0
SoundFile 0.10.3.post1
spacy 2.2.4
Sphinx 1.8.6
sphinxcontrib-serializinghtml 1.1.5
sphinxcontrib-websupport 1.2.4
SQLAlchemy 1.4.31
sqlparse 0.4.2
srsly 1.0.5
statsmodels 0.10.2
sympy 1.7.1
tables 3.4.4
tabulate 0.8.9
tblib 1.7.0
tenacity 8.0.1
tensorboard 2.7.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.7.0
tensorflow-datasets 4.0.1
tensorflow-estimator 2.7.0
tensorflow-gcs-config 2.7.0
tensorflow-hub 0.12.0
tensorflow-io-gcs-filesystem 0.23.1
tensorflow-metadata 1.6.0
tensorflow-probability 0.15.0
termcolor 1.1.0
terminado 0.13.1
testpath 0.5.0
text-unidecode 1.3
textblob 0.15.3
Theano-PyMC 1.1.2
thinc 7.4.0
threadpoolctl 3.1.0
tifffile 2021.11.2
toml 0.10.2
tomli 2.0.0
toolz 0.11.2
torch 1.10.0+cu111
torchaudio 0.10.0+cu111
torchsummary 1.5.1
torchtext 0.11.0
torchvision 0.11.1+cu111
tornado 5.1.1
tqdm 4.62.3
traitlets 5.1.1
tweepy 3.10.0
typeguard 2.7.1
typing-extensions 3.10.0.2
tzlocal 1.5.1
uritemplate 3.0.1
urllib3 1.24.3
vega-datasets 0.9.0
wasabi 0.9.0
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.37.1
widgetsnbextension 3.5.2
wordcloud 1.5.0
wrapt 1.13.3
xarray 0.18.2
xgboost 0.90
xkit 0.0.0
xlrd 1.1.0
xlwt 1.3.0
yellowbrick 1.3.post1
zict 2.0.0
zipp 3.7.0
###Markdown
If you would like to see if pyfiglet was installed, you could scan the list above, or you could **pipe** the output of `pip list` to a console command known as `grep` that will filter for specific strings. This is an example of **redirecting output,** which you have learned about already.
###Code
pip list | grep pyfiglet
###Output
pyfiglet 0.8.post1
###Markdown
Writing Our Script The intention of this Colab introduction is to get you familiar with using google Colab to accomplish scripting goals. Today, we will be using the `pyfiglet` library we have just installed to output some text. To understand the methods available to us in `pyfiglet` we can look up [the github repository.](https://github.com/pwaller/pyfiglet)***Remember: Since open source packages are at the mercy of their developers or maintainers, comprehensive documentation is never a guarantee.***The help documentation outlines a command line `--help` argument, which means that documentation exists. We can't call command line arguments for imported libraries in colab, but we *can* accomplish the same goal in colab by using the python `help([Object])` function.
###Code
from pyfiglet import Figlet
help(Figlet)
###Output
Help on class Figlet in module pyfiglet:
class Figlet(builtins.object)
| Figlet(font='standard', direction='auto', justify='auto', width=80)
|
| Main figlet class.
|
| Methods defined here:
|
| __init__(self, font='standard', direction='auto', justify='auto', width=80)
| Initialize self. See help(type(self)) for accurate signature.
|
| getDirection(self)
|
| getFonts(self)
|
| getJustify(self)
|
| renderText(self, text)
|
| setFont(self, **kwargs)
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| direction
|
| justify
###Markdown
Using this function we can see that the `Figlet` object has several methods available. This will bring us to your task:**In the editor below, finish a script that accomplishes the following goals:**1. Prompt the user to select from a list of 5 fonts.2. Prompt the user to input the string they would like output in that font.3. Render the text using the selected font.*Hint:* Call the `getFonts()` method to get a list of the available fonts.
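###Markdown
*A hedged aside (my addition, not part of the lab hand-in): per the hint, `getFonts()` returns the full list of valid font names, so you can peek at a few before wiring up the prompt.*
###Code
from pyfiglet import Figlet
print(Figlet().getFonts()[:10]) # show the first ten bundled font names
###Output
_____no_output_____
###Markdown
*One possible completed script follows.*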
###Code
from pyfiglet import Figlet
#Prompt the user to select a font
selectedFont = input("Select a font: \n 1. avatar \n 2. banner3 \n 3. barbwire \n 4. big \n 5. binary \n >>> ")
#Instantiate a Figlet Object using the selected font
f = Figlet(font=selectedFont)
#Prompt the user to type a message
message = input("Type the message to render: \n >>> ")
#Render the message in the selected font
my_art = f.renderText(message)
print(my_art)
###Output
Select a font:
1. avatar
2. banner3
3. barbwire
4. big
5. binary
>>> big
_____ _ _ _
|_ _| |__ ___ _ __ ___ __ _| | _ __ ___ __ _ __ _(_) ___
| | | '_ \ / _ \ | '__/ _ \/ _` | | | '_ ` _ \ / _` |/ _` | |/ __|
| | | | | | __/ | | | __/ (_| | | | | | | | | (_| | (_| | | (__
|_| |_| |_|\___| |_| \___|\__,_|_| |_| |_| |_|\__,_|\__, |_|\___|
|___/
|
code/notebooks/python/showing_best_results/none_box.ipynb | ###Markdown
Shows, for each dataset, everything that can be done with the none_box
###Code
from demo_utils.demo10 import Demo10
from demo_utils.general import SUPPORTED_DATASETS
from IPython.display import Markdown as md
from demo_utils.get_hyper_params import get_hyper_params
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
import warnings
warnings.filterwarnings('ignore')
# d10_data = {
# # 'dts_name': testing_dataset,
# 'dts_size': 1000,
# 'features_range': (500, 501),
# 'rbfsampler_gamma': 'UNUSED',
# 'nystroem_gamma': 'UNUSED',
# 'hparams': {'dt': {'max_depth': None,
# 'min_samples_split': 2,
# 'min_samples_leaf': 1,
# 'min_weight_fraction_leaf': 0.0,
# 'max_leaf_nodes': None,
# 'min_impurity_decrease': 0.0},
# 'logit': {'C': 1000.0},
# 'linear_svc': {'C': 5}}
# }
d10_data = {
# 'dts_name': testing_dataset,
'dts_size': 1000,
'features_range': (500, 501),
}
def get_a_model(model_name, sampler_name, dts_name):
box_type = 'none'
n_estim = None
# later on we will need to support different box types
# {'model_name': model_name,
# 'sampler_name': 'identity',
# 'sampler_gamma': None,
# 'model_params': {},
# # 'box_type': 'none',
# 'box_type': box_type,
# 'n_estim': None,
# 'pca': False,
# 'pca_first': False}
ret_dic = {'model_name': model_name,
# 'sampler_name': 'identity',
'sampler_name': sampler_name,
'sampler_gamma': None,
'model_params': {},
'box_type': box_type,
'n_estim': n_estim,
'pca': False,
'pca_first': False}
hyper_params = get_hyper_params(dts_name=dts_name, box_name=box_type,
model_name=model_name, sampler_name=sampler_name)
gamma = hyper_params.pop('gamma', None)
# ret_dic['sampler_gamma'] = gamma
ret_dic['gamma'] = gamma
# ret_dic['model_params'] = hyper_params
ret_dic['base_model_params'] = hyper_params
if sampler_name == 'rff':
ret_dic['sampler_name'] = 'rbf'
# elif sampler_name == 'nystroem':
# ret_dic['sampler_name'] = 'nystroem'
return ret_dic
def test_dt(d10_data):
d10 = Demo10()
new_data = dict(d10_data)
dts_name = new_data['dts_name']
model_name = 'dt'
# dt alone, dt with rff, and dt with nystroem
m1 = get_a_model(model_name=model_name, sampler_name='identity', dts_name=dts_name)
m2 = get_a_model(model_name=model_name, sampler_name='rff', dts_name=dts_name)
m3 = get_a_model(model_name=model_name, sampler_name='nystroem', dts_name=dts_name)
models = [m1, m2, m3,]
new_data['models'] = models
d10.non_interactive(**new_data)
def test_logit(d10_data):
d10 = Demo10()
new_data = dict(d10_data)
dts_name = new_data['dts_name']
model_name = 'logit'
# logit alone, logit with rff, and logit with nystroem
m1 = get_a_model(model_name=model_name, sampler_name='identity', dts_name=dts_name)
m2 = get_a_model(model_name=model_name, sampler_name='rff', dts_name=dts_name)
m3 = get_a_model(model_name=model_name, sampler_name='nystroem', dts_name=dts_name)
models = [m1, m2, m3,]
new_data['models'] = models
d10.non_interactive(**new_data)
def test_linear_svc(d10_data):
d10 = Demo10()
new_data = dict(d10_data)
dts_name = new_data['dts_name']
model_name = 'linear_svc'
# linear_svc alone, linear_svc with rff, and linear_svc with nystroem
m1 = get_a_model(model_name=model_name, sampler_name='identity', dts_name=dts_name)
m2 = get_a_model(model_name=model_name, sampler_name='rff', dts_name=dts_name)
m3 = get_a_model(model_name=model_name, sampler_name='nystroem', dts_name=dts_name)
models = [m1, m2, m3,]
new_data['models'] = models
d10.non_interactive(**new_data)
def test_dataset(d10_data, dts_name):
new_data = dict(d10_data)
new_data['dts_name'] = dts_name
display(md(f'# {dts_name}'))
test_dt(new_data)
test_logit(new_data)
test_linear_svc(new_data)
def test_everything():
for dts_name in SUPPORTED_DATASETS:
test_dataset(d10_data, dts_name=dts_name)
test_everything()
###Output
_____no_output_____ |
Data_Analysis/EmailSpamClassifier/EmailSpamClassifier.ipynb | ###Markdown
Email Spam Classifier
###Code
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
Loading the Dataset
###Code
df = pd.read_csv('spam.csv')
df.head()
df.describe()
x = df["EmailText"]
y = df["Label"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Extract Features
###Code
cv = CountVectorizer()
features = cv.fit_transform(X_train)
###Output
_____no_output_____
###Markdown
Build the Model
###Code
model = svm.SVC()
model.fit(features, y_train)
###Output
_____no_output_____
###Markdown
Testing Accuracy
###Code
features_test = cv.transform(X_test)
print("Accuracy of the model is",model.score(features_test,y_test))
###Output
Accuracy of the model is 0.9754784688995215
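###Markdown
*A hedged usage sketch (my addition, not part of the original notebook): to classify an unseen message, transform it with the same fitted CountVectorizer before calling the model. The example string below is made up.*
###Code
new_message = ["Congratulations! You have won a free prize, call now"] # hypothetical unseen message
print(model.predict(cv.transform(new_message)))
###Output
_____no_output_____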
|
Section 3/net_present_value.ipynb | ###Markdown
Net Present Value
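As a quick reference (my note, not part of the original notebook), the cells below implement \begin{equation} NPV = \sum_{t=1}^{T} \frac{C_t}{(1+r)^t} - C_0 \end{equation} where $C_t$ are the yearly cashflows, $r$ is the cost of capital, and $C_0$ is the initial investment.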
###Code
import numpy as np
initial_investment = 100000
cost_of_capital = 0.12
years = np.arange(1,6)
cashflows = np.ones(5) * 30000
discounted_cashflows = cashflows / (1 + cost_of_capital) ** years
discounted_cashflows
npv = sum(discounted_cashflows) - initial_investment
npv
np.npv(cost_of_capital,[-100000,30000,30000,30000,30000,30000])
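# note (my addition): np.npv was deprecated in NumPy 1.18 and removed in 1.20;
# in newer environments the same function lives in the numpy-financial package (npf.npv)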
###Output
_____no_output_____ |
Natural-Language-Processing/01-NLP (Natural Language Processing) with Python.ipynb | ###Markdown
___ NLP (Natural Language Processing) with PythonThis is the notebook that goes along with the NLP video lecture!In this lecture we will discuss a higher level overview of the basics of Natural Language Processing, which basically consists of combining machine learning techniques with text, and using math and statistics to get that text in a format that the machine learning algorithms can understand!Once you've completed this lecture you'll have a project using some Yelp Text Data! **Requirements: You will need to have NLTK installed, along with downloading the corpus for stopwords. To download everything with a conda installation, run the cell below. Or reference the full video lecture**
###Code
# ONLY RUN THIS CELL IF YOU NEED
# TO DOWNLOAD NLTK AND HAVE CONDA
# WATCH THE VIDEO FOR FULL INSTRUCTIONS ON THIS STEP
# Uncomment the code below and run:
# !conda install nltk #This installs nltk
# import nltk # Imports the library
# nltk.download() #Download the necessary datasets
###Output
_____no_output_____
###Markdown
Get the Data We'll be using a dataset from the [UCI datasets](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection)! This dataset is already located in the folder for this section. The file we are using contains a collection of more than 5 thousand SMS phone messages. You can check out the **readme** file for more info.Let's go ahead and use rstrip() plus a list comprehension to get a list of all the lines of text messages:
###Code
messages = [line.rstrip() for line in open('smsspamcollection/SMSSpamCollection')]
print(len(messages))
###Output
5574
###Markdown
A collection of texts is also sometimes called "corpus". Let's print the first ten messages and number them using **enumerate**:
###Code
for message_no, message in enumerate(messages[:10]):
print(message_no, message)
print('\n')
###Output
0 ham Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
1 ham Ok lar... Joking wif u oni...
2 spam Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
3 ham U dun say so early hor... U c already then say...
4 ham Nah I don't think he goes to usf, he lives around here though
5 spam FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv
6 ham Even my brother is not like to speak with me. They treat me like aids patent.
7 ham As per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your callertune for all Callers. Press *9 to copy your friends Callertune
8 spam WINNER!! As a valued network customer you have been selected to receivea £900 prize reward! To claim call 09061701461. Claim code KL341. Valid 12 hours only.
9 spam Had your mobile 11 months or more? U R entitled to Update to the latest colour mobiles with camera for Free! Call The Mobile Update Co FREE on 08002986030
###Markdown
Due to the spacing we can tell that this is a [TSV](http://en.wikipedia.org/wiki/Tab-separated_values) ("tab separated values") file, where the first column is a label saying whether the given message is a normal message (commonly known as "ham") or "spam". The second column is the message itself. (Note our numbers aren't part of the file, they are just from the **enumerate** call).Using these labeled ham and spam examples, we'll **train a machine learning model to learn to discriminate between ham/spam automatically**. Then, with a trained model, we'll be able to **classify arbitrary unlabeled messages** as ham or spam.From the official SciKit Learn documentation, we can visualize our process: Instead of parsing TSV manually using Python, we can just take advantage of pandas! Let's go ahead and import it!
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
We'll use **read_csv** and make note of the **sep** argument, we can also specify the desired column names by passing in a list of *names*.
###Code
messages = pd.read_csv('smsspamcollection/SMSSpamCollection', sep='\t',
names=["label", "message"])
messages.head()
###Output
_____no_output_____
###Markdown
Exploratory Data AnalysisLet's check out some of the stats with some plots and the built-in methods in pandas!
###Code
messages.describe()
###Output
_____no_output_____
###Markdown
Let's use **groupby** to use describe by label, this way we can begin to think about the features that separate ham and spam!
###Code
messages.groupby('label').describe()
###Output
_____no_output_____
###Markdown
As we continue our analysis we want to start thinking about the features we are going to be using. This goes along with the general idea of [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering). The better your domain knowledge on the data, the better your ability to engineer more features from it. Feature engineering is a very large part of spam detection in general. I encourage you to read up on the topic!Let's make a new column to detect how long the text messages are:
###Code
messages['length'] = messages['message'].apply(len)
messages.head()
###Output
_____no_output_____
###Markdown
Data VisualizationLet's visualize this! Let's do the imports:
###Code
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
messages['length'].plot(bins=50, kind='hist')
###Output
_____no_output_____
###Markdown
Play around with the bin size! Looks like text length may be a good feature to think about! Let's try to explain why the x-axis goes all the way to 1000ish, this must mean that there is some really long message!
###Code
messages.length.describe()
###Output
_____no_output_____
###Markdown
Woah! 910 characters, let's use masking to find this message:
###Code
messages[messages['length'] == 910]['message'].iloc[0]
###Output
_____no_output_____
###Markdown
Looks like we have some sort of Romeo sending texts! But let's focus back on the idea of trying to see if message length is a distinguishing feature between ham and spam:
###Code
messages.hist(column='length', by='label', bins=50,figsize=(12,4))
###Output
_____no_output_____
###Markdown
Very interesting! Through just basic EDA we've been able to discover a trend that spam messages tend to have more characters. (Sorry Romeo!)Now let's begin to process the data so we can eventually use it with SciKit Learn! Text Pre-processing Our main issue with our data is that it is all in text format (strings). The classification algorithms that we've learned about so far will need some sort of numerical feature vector in order to perform the classification task. There are actually many methods to convert a corpus to a vector format. The simplest is the the [bag-of-words](http://en.wikipedia.org/wiki/Bag-of-words_model) approach, where each unique word in a text will be represented by one number.In this section we'll convert the raw messages (sequence of characters) into vectors (sequences of numbers).As a first step, let's write a function that will split a message into its individual words and return a list. We'll also remove very common words, ('the', 'a', etc..). To do this we will take advantage of the NLTK library. It's pretty much the standard library in Python for processing text and has a lot of useful features. We'll only use some of the basic ones here.Let's create a function that will process the string in the message column, then we can just use **apply()** in pandas do process all the text in the DataFrame.First removing punctuation. We can just take advantage of Python's built-in **string** library to get a quick list of all the possible punctuation:
###Code
import string
mess = 'Sample message! Notice: it has punctuation.'
# Check characters to see if they are in punctuation
nopunc = [char for char in mess if char not in string.punctuation]
# Join the characters again to form the string.
nopunc = ''.join(nopunc)
###Output
_____no_output_____
###Markdown
Now let's see how to remove stopwords. We can import a list of English stopwords from NLTK (check the documentation for more languages and info).
###Code
from nltk.corpus import stopwords
stopwords.words('english')[0:10] # Show some stop words
nopunc.split()
# Now just remove any stopwords
clean_mess = [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
clean_mess
###Output
_____no_output_____
###Markdown
Now let's put both of these together in a function to apply it to our DataFrame later on:
###Code
def text_process(mess):
"""
Takes in a string of text, then performs the following:
1. Remove all punctuation
2. Remove all stopwords
3. Returns a list of the cleaned text
"""
# Check characters to see if they are in punctuation
nopunc = [char for char in mess if char not in string.punctuation]
# Join the characters again to form the string.
nopunc = ''.join(nopunc)
# Now just remove any stopwords
return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
###Output
_____no_output_____
###Markdown
Here is the original DataFrame again:
###Code
messages.head()
###Output
_____no_output_____
###Markdown
Now let's "tokenize" these messages. Tokenization is just the term used to describe the process of converting the normal text strings in to a list of tokens (words that we actually want).Let's see an example output on on column:**Note:**We may get some warnings or errors for symbols we didn't account for or that weren't in Unicode (like a British pound symbol)
###Code
# Check to make sure its working
messages['message'].head(5).apply(text_process)
# Show original dataframe
messages.head()
###Output
_____no_output_____
###Markdown
Continuing NormalizationThere are a lot of ways to continue normalizing this text. Such as [Stemming](https://en.wikipedia.org/wiki/Stemming) or distinguishing by [part of speech](http://www.nltk.org/book/ch05.html).NLTK has lots of built-in tools and great documentation on a lot of these methods. Sometimes they don't work well for text-messages due to the way a lot of people tend to use abbreviations or shorthand, For example: 'Nah dawg, IDK! Wut time u headin to da club?' versus 'No dog, I don't know! What time are you heading to the club?' Some text normalization methods will have trouble with this type of shorthand and so I'll leave you to explore those more advanced methods through the [NLTK book online](http://www.nltk.org/book/).For now we will just focus on using what we have to convert our list of words to an actual vector that SciKit-Learn can use. Vectorization Currently, we have the messages as lists of tokens (also known as [lemmas](http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html)) and now we need to convert each of those messages into a vector the SciKit Learn's algorithm models can work with.Now we'll convert each message, represented as a list of tokens (lemmas) above, into a vector that machine learning models can understand.We'll do that in three steps using the bag-of-words model:1. Count how many times does a word occur in each message (Known as term frequency)2. Weigh the counts, so that frequent tokens get lower weight (inverse document frequency)3. Normalize the vectors to unit length, to abstract from the original text length (L2 norm)Let's begin the first step: Each vector will have as many dimensions as there are unique words in the SMS corpus. We will first use SciKit Learn's **CountVectorizer**. This model will convert a collection of text documents to a matrix of token counts.We can imagine this as a 2-Dimensional matrix. Where the 1-dimension is the entire vocabulary (1 row per word) and the other dimension are the actual documents, in this case a column per text message. For example: Message 1 Message 2 ... Message N Word 1 Count01...0Word 2 Count00...0... 12...0Word N Count 01...1Since there are so many messages, we can expect a lot of zero counts for the presence of that word in that document. Because of this, SciKit Learn will output a [Sparse Matrix](https://en.wikipedia.org/wiki/Sparse_matrix).
###Code
from sklearn.feature_extraction.text import CountVectorizer
###Output
_____no_output_____
###Markdown
There are a lot of arguments and parameters that can be passed to the CountVectorizer. In this case we will just specify the **analyzer** to be our own previously defined function:
###Code
# Might take awhile...
bow_transformer = CountVectorizer(analyzer=text_process).fit(messages['message'])
# Print total number of vocab words
print(len(bow_transformer.vocabulary_))
###Output
11444
###Markdown
Let's take one text message and get its bag-of-words counts as a vector, putting to use our new `bow_transformer`:
###Code
message4 = messages['message'][3]
print(message4)
###Output
U dun say so early hor... U c already then say...
###Markdown
Now let's see its vector representation:
###Code
bow4 = bow_transformer.transform([message4])
print(bow4)
print(bow4.shape)
###Output
(0, 4073) 2
(0, 4638) 1
(0, 5270) 1
(0, 6214) 1
(0, 6232) 1
(0, 7197) 1
(0, 9570) 2
(1, 11444)
###Markdown
This means that there are seven unique words in message number 4 (after removing common stop words). Two of them appear twice, the rest only once. Let's go ahead and confirm which ones appear twice:
###Code
print(bow_transformer.get_feature_names()[4073])
print(bow_transformer.get_feature_names()[9570])
###Output
U
say
###Markdown
Now we can use **.transform** on our Bag-of-Words (bow) transformed object and transform the entire DataFrame of messages. Let's go ahead and check out how the bag-of-words counts for the entire SMS corpus form a large, sparse matrix:
###Code
messages_bow = bow_transformer.transform(messages['message'])
print('Shape of Sparse Matrix: ', messages_bow.shape)
print('Amount of Non-Zero occurences: ', messages_bow.nnz)
sparsity = (100.0 * messages_bow.nnz / (messages_bow.shape[0] * messages_bow.shape[1]))
print('sparsity: {}'.format(round(sparsity)))
###Output
sparsity: 0
###Markdown
After the counting, the term weighting and normalization can be done with [TF-IDF](http://en.wikipedia.org/wiki/Tf%E2%80%93idf), using scikit-learn's `TfidfTransformer`.

____

So what is TF-IDF?

TF-IDF stands for *term frequency-inverse document frequency*, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document's relevance given a user query. One of the simplest ranking functions is computed by summing the tf-idf for each query term; many more sophisticated ranking functions are variants of this simple model.

Typically, the tf-idf weight is composed of two terms: the first computes the normalized Term Frequency (TF), i.e. the number of times a word appears in a document, divided by the total number of words in that document; the second term is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents where the specific term appears.

**TF: Term Frequency**, which measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term will appear many more times in long documents than in short ones. Thus, the term frequency is often divided by the document length (i.e. the total number of terms in the document) as a way of normalization:

*TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document).*

**IDF: Inverse Document Frequency**, which measures how important a term is. While computing TF, all terms are considered equally important. However, it is known that certain terms, such as "is", "of", and "that", may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:

*IDF(t) = log(Total number of documents / Number of documents with term t in it).*

See below for a simple example.

**Example:**

Consider a document containing 100 words wherein the word cat appears 3 times. The term frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4, using a base-10 logarithm. Thus, the tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12.

____
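As a quick sanity check, here is that arithmetic in code. This is a sketch of the textbook formula only: it follows the example's base-10 logarithm, while `TfidfTransformer` uses the natural log plus smoothing, so the numbers it produces below will differ.
###Code
import numpy as np

# worked example from above: 'cat' appears 3 times in a 100-word document,
# and in 1,000 of 10 million documents overall
tf = 3 / 100
idf = np.log10(10000000 / 1000)
print(tf, idf, tf * idf)  # 0.03 4.0 0.12
###Output
_____no_output_____
###Markdown
Let's go ahead and see how we can do this in SciKit Learn: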
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer().fit(messages_bow)
tfidf4 = tfidf_transformer.transform(bow4)
print(tfidf4)
###Output
(0, 9570) 0.538562626293
(0, 7197) 0.438936565338
(0, 6232) 0.318721689295
(0, 6214) 0.299537997237
(0, 5270) 0.297299574059
(0, 4638) 0.266198019061
(0, 4073) 0.408325899334
###Markdown
We'll go ahead and check the IDF (inverse document frequency) of the word `"u"` and of the word `"university"`:
###Code
print(tfidf_transformer.idf_[bow_transformer.vocabulary_['u']])
print(tfidf_transformer.idf_[bow_transformer.vocabulary_['university']])
###Output
3.28005242674
8.5270764989
###Markdown
To transform the entire bag-of-words corpus into a TF-IDF corpus at once:
###Code
messages_tfidf = tfidf_transformer.transform(messages_bow)
print(messages_tfidf.shape)
###Output
(5572, 11444)
###Markdown
There are many ways the data can be preprocessed and vectorized. These steps involve feature engineering and building a "pipeline". I encourage you to check out SciKit Learn's documentation on dealing with text data as well as the expansive collection of available papers and books on the general topic of NLP.

Training a model

With messages represented as vectors, we can finally train our spam/ham classifier. Now we can actually use almost any sort of classification algorithm. For a [variety of reasons](http://www.inf.ed.ac.uk/teaching/courses/inf2b/learnnotes/inf2b-learn-note07-2up.pdf), the Naive Bayes classifier algorithm is a good choice. We'll be using scikit-learn here, choosing the [Naive Bayes](http://en.wikipedia.org/wiki/Naive_Bayes_classifier) classifier to start with:
###Code
from sklearn.naive_bayes import MultinomialNB
spam_detect_model = MultinomialNB().fit(messages_tfidf, messages['label'])
###Output
_____no_output_____
###Markdown
Let's try classifying our single random message and checking how we do:
###Code
print('predicted:', spam_detect_model.predict(tfidf4)[0])
print('expected:', messages.label[3])
###Output
predicted: ham
expected: ham
###Markdown
Fantastic! We've developed a model that can attempt to predict spam vs ham classification!

Part 6: Model Evaluation

Now we want to determine how well our model will do overall on the entire dataset. Let's begin by getting all the predictions:
###Code
all_predictions = spam_detect_model.predict(messages_tfidf)
print(all_predictions)
###Output
['ham' 'ham' 'spam' ..., 'ham' 'ham' 'ham']
###Markdown
We can use SciKit Learn's built-in classification report, which returns [precision, recall,](https://en.wikipedia.org/wiki/Precision_and_recall) [f1-score](https://en.wikipedia.org/wiki/F1_score), and a column for support (the number of true cases of each class). Check out the links for more detailed info on each of these metrics.
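To make the report's columns concrete, here is the arithmetic on a tiny set of made-up counts (hypothetical numbers, not our model's):
###Code
# hypothetical spam-detection counts, for illustration only
tp, fp, fn = 90, 10, 30                # true positives, false positives, false negatives
precision = tp / (tp + fp)             # of messages predicted spam, the fraction that really were
recall = tp / (tp + fn)                # of actual spam messages, the fraction we caught
f1 = 2 * precision * recall / (precision + recall)
support = tp + fn                      # number of actual spam cases
print(precision, recall, f1, support)  # 0.9 0.75 0.818... 120
###Output
_____no_output_____
###Markdown
Now the full report for our model: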
###Code
from sklearn.metrics import classification_report
print (classification_report(messages['label'], all_predictions))
###Output
precision recall f1-score support
ham 0.98 1.00 0.99 4825
spam 1.00 0.85 0.92 747
avg / total 0.98 0.98 0.98 5572
###Markdown
There are quite a few possible metrics for evaluating model performance. Which one is the most important depends on the task and the business effects of decisions based off of the model. For example, the cost of mis-predicting "spam" as "ham" is probably much lower than mis-predicting "ham" as "spam".

In the above "evaluation", we evaluated accuracy on the same data we used for training. **You should never actually evaluate on the same dataset you train on!** Such evaluation tells us nothing about the true predictive power of our model. If we simply remembered each example during training, the accuracy on training data would trivially be 100%, even though we wouldn't be able to classify any new messages.

A proper way is to split the data into a training/test set, where the model only ever sees the **training data** during model fitting and parameter tuning. The **test data** is never used in any way during training, so our final evaluation on the test data is representative of true predictive performance.

Train Test Split
###Code
from sklearn.model_selection import train_test_split
msg_train, msg_test, label_train, label_test = \
train_test_split(messages['message'], messages['label'], test_size=0.2)
print(len(msg_train), len(msg_test), len(msg_train) + len(msg_test))
###Output
4457 1115 5572
###Markdown
The test size is 20% of the entire dataset (1115 messages out of the total 5572), and the training set is the rest (4457 out of 5572). Note that scikit-learn's default would have been a 25/75 split.

Creating a Data Pipeline

Let's run our model again and then predict off the test set. We will use SciKit Learn's [pipeline](http://scikit-learn.org/stable/modules/pipeline.html) capabilities to store a pipeline of our workflow. This will allow us to set up all the transformations that we will do to the data for future use. Let's see an example of how it works:
###Code
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)), # strings to token integer counts
('tfidf', TfidfTransformer()), # integer counts to weighted TF-IDF scores
('classifier', MultinomialNB()), # train on TF-IDF vectors w/ Naive Bayes classifier
])
###Output
_____no_output_____
###Markdown
Now we can directly pass message text data and the pipeline will do our pre-processing for us! We can treat it as a model/estimator API:
###Code
pipeline.fit(msg_train,label_train)
predictions = pipeline.predict(msg_test)
print(classification_report(predictions,label_test))
###Output
precision recall f1-score support
ham 1.00 0.96 0.98 1001
spam 0.75 1.00 0.85 114
avg / total 0.97 0.97 0.97 1115
|
anac_eda-opt.ipynb | ###Markdown
An optimization problem in the Brazilian flight data

1. Introduction

The Brazilian flight data shared by their Civil Aviation Authority (ANAC) brings some airline marketing metrics, and also the variables that enable one to recalculate these metrics. While testing for the consistency of these values, I have arrived at a model optimization problem: what is the average weight for passengers that airlines use for their flight plans? Is it the same for Brazilian and foreign airlines? Let's check it out.

The data used in this notebook may be found at:
- https://www.gov.br/anac/pt-br/assuntos/dados-e-estatisticas/dados-estatisticos/arquivos/resumo_anual_2019.csv
- https://www.gov.br/anac/pt-br/assuntos/dados-e-estatisticas/dados-estatisticos/arquivos/resumo_anual_2020.csv
- https://www.gov.br/anac/pt-br/assuntos/dados-e-estatisticas/dados-estatisticos/arquivos/resumo_anual_2021.csv

2. Importing the libraries and data clean-up

NOTE: this section 2 is exactly the same found in the EDA article below: LINK FOR ARTICLE

If you have already read it, you can skip this section. First of all, let's import the libraries we are going to use:
###Code
import os
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import unidecode
###Output
_____no_output_____
###Markdown
I am using the Seaborn library (on top of matplotlib) for the plots. I am also using the unidecode library to convert the column names to a friendlier format. Now the files are loaded and merged into a single dataframe.
###Code
folder = r'C:\Users\thiag\data\ANAC-transport'
dffiles = ['resumo_anual_2019.csv',
'resumo_anual_2020.csv',
'resumo_anual_2021.csv']
df = pd.concat([pd.read_csv(os.path.join(folder, x),
sep=';', encoding=('ISO-8859-1'))
for x in dffiles])
###Output
_____no_output_____
###Markdown
Let's look at the data.
###Code
print(df.head())
###Output
EMPRESA (SIGLA) EMPRESA (NOME) EMPRESA (NACIONALIDADE) ANO MÊS \
0 AAF AIGLE AZUR ESTRANGEIRA 2019 1
1 AAF AIGLE AZUR ESTRANGEIRA 2019 1
2 AAF AIGLE AZUR ESTRANGEIRA 2019 2
3 AAF AIGLE AZUR ESTRANGEIRA 2019 2
4 AAF AIGLE AZUR ESTRANGEIRA 2019 3
AEROPORTO DE ORIGEM (SIGLA) AEROPORTO DE ORIGEM (NOME) \
0 LFPO ORLY (NEAR PARIS)
1 SBKP CAMPINAS
2 LFPO ORLY (NEAR PARIS)
3 SBKP CAMPINAS
4 LFPO ORLY (NEAR PARIS)
AEROPORTO DE ORIGEM (UF) AEROPORTO DE ORIGEM (REGIÃO) \
0 NaN NaN
1 SP SUDESTE
2 NaN NaN
3 SP SUDESTE
4 NaN NaN
AEROPORTO DE ORIGEM (PAÍS) ... COMBUSTÍVEL (LITROS) DISTÂNCIA VOADA (KM) \
0 FRANÇA ... NaN 149856.0
1 BRASIL ... NaN 149856.0
2 FRANÇA ... NaN 149856.0
3 BRASIL ... NaN 149856.0
4 FRANÇA ... NaN 159222.0
DECOLAGENS CARGA PAGA KM CARGA GRATIS KM CORREIO KM ASSENTOS PAYLOAD \
0 16.0 920725000.0 0.0 0.0 4592.0 770089.0
1 16.0 263700000.0 25232000.0 0.0 4592.0 770089.0
2 16.0 617173000.0 0.0 0.0 4592.0 770089.0
3 16.0 0.0 0.0 0.0 4592.0 770089.0
4 17.0 933032000.0 0.0 0.0 4879.0 1252270.0
HORAS VOADAS BAGAGEM (KG)
0 144,86 NaN
1 227,34 NaN
2 107,35 NaN
3 267,29 NaN
4 134,73 NaN
[5 rows x 38 columns]
###Markdown
The following can be observed about the column names:
- They are written in Portuguese and contain accentuation;
- They are all in upper case letters;
- They contain spaces and parentheses.

To facilitate readability we will modify the column names by:
- Replacing the spaces with underscores "_";
- Removing the parentheses;
- Making all letters lowercase; and
- Removing the accents.

This convention is called snake_case and, even though not standard, it is frequently used. For more information, refer to: https://en.wikipedia.org/wiki/Snake_case
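For instance, the chain of operations below turns one of the raw column names into its snake_case form:
###Code
import unidecode

# one sample column name, processed the same way as the full list below
name = 'AEROPORTO DE ORIGEM (PAÍS)'
print(unidecode.unidecode(name.lower())
      .replace(' ', '_')
      .replace('(', '')
      .replace(')', ''))  # -> aeroporto_de_origem_pais
###Output
_____no_output_____
###Markdown
Now for all the columns: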
###Code
print("Column names before changes:\n")
print(df.columns)
df.columns = [unidecode.unidecode(z.lower())
.replace(' ','_')
.replace('(','')
.replace(')','')
for z in df.columns]
df.to_csv('3years.csv', sep=';', index=False)
print("Column names after changes:\n")
print(df.columns)
###Output
Column names before changes:
Index(['EMPRESA (SIGLA)', 'EMPRESA (NOME)', 'EMPRESA (NACIONALIDADE)', 'ANO',
'MÊS', 'AEROPORTO DE ORIGEM (SIGLA)', 'AEROPORTO DE ORIGEM (NOME)',
'AEROPORTO DE ORIGEM (UF)', 'AEROPORTO DE ORIGEM (REGIÃO)',
'AEROPORTO DE ORIGEM (PAÍS)', 'AEROPORTO DE ORIGEM (CONTINENTE)',
'AEROPORTO DE DESTINO (SIGLA)', 'AEROPORTO DE DESTINO (NOME)',
'AEROPORTO DE DESTINO (UF)', 'AEROPORTO DE DESTINO (REGIÃO)',
'AEROPORTO DE DESTINO (PAÍS)', 'AEROPORTO DE DESTINO (CONTINENTE)',
'NATUREZA', 'GRUPO DE VOO', 'PASSAGEIROS PAGOS', 'PASSAGEIROS GRÁTIS',
'CARGA PAGA (KG)', 'CARGA GRÁTIS (KG)', 'CORREIO (KG)', 'ASK', 'RPK',
'ATK', 'RTK', 'COMBUSTÍVEL (LITROS)', 'DISTÂNCIA VOADA (KM)',
'DECOLAGENS', 'CARGA PAGA KM', 'CARGA GRATIS KM', 'CORREIO KM',
'ASSENTOS', 'PAYLOAD', 'HORAS VOADAS', 'BAGAGEM (KG)'],
dtype='object')
Column names after changes:
Index(['empresa_sigla', 'empresa_nome', 'empresa_nacionalidade', 'ano', 'mes',
'aeroporto_de_origem_sigla', 'aeroporto_de_origem_nome',
'aeroporto_de_origem_uf', 'aeroporto_de_origem_regiao',
'aeroporto_de_origem_pais', 'aeroporto_de_origem_continente',
'aeroporto_de_destino_sigla', 'aeroporto_de_destino_nome',
'aeroporto_de_destino_uf', 'aeroporto_de_destino_regiao',
'aeroporto_de_destino_pais', 'aeroporto_de_destino_continente',
'natureza', 'grupo_de_voo', 'passageiros_pagos', 'passageiros_gratis',
'carga_paga_kg', 'carga_gratis_kg', 'correio_kg', 'ask', 'rpk', 'atk',
'rtk', 'combustivel_litros', 'distancia_voada_km', 'decolagens',
'carga_paga_km', 'carga_gratis_km', 'correio_km', 'assentos', 'payload',
'horas_voadas', 'bagagem_kg'],
dtype='object')
###Markdown
This looks better. Let's add some new columns to this dataframe, to support our analysis:
- Since we are looking for a chronological observation, it is interesting to concatenate the calendar months and years into a single variable called 'data' (Portuguese for date; I am keeping Portuguese names for consistency). Let's also add a column named 'quarter' to group the months of the year 3-by-3.
- We can also infer the routes from the origin and destination airport variables (respectively called aeroporto_de_origem_sigla and aeroporto_de_destino_sigla). A variable named 'rota' (Portuguese for route) will be created to store the 'origin->destination' string. Another variable with the names of the airports (instead of the codes) will be created (called 'rota_nome') for readability (not everyone knows all airport codes).
- Dividing RPK by ASK we get the load factor, which is a very important metric for airline economics. This variable will also be created.
###Code
df['data'] = [str(x['ano']) + '-' + "{:02}".format(x['mes'])
for index, x in df.iterrows()]
df['rota'] = [str(x['aeroporto_de_origem_sigla']) + '->' +
str(x['aeroporto_de_destino_sigla'])
for index, x in df.iterrows()]
df['rota_nome'] = [str(x['aeroporto_de_origem_nome']) + '->' +
str(x['aeroporto_de_destino_nome'])
for index, x in df.iterrows()]
df['load_factor'] = df['rpk']/df['ask']
def quarter(x):
year = x['ano']
mes = x['mes']
if mes in [1, 2, 3]:
quarter = str(year) + '-Q1'
elif mes in [4, 5, 6]:
quarter = str(year) + '-Q2'
elif mes in [7, 8, 9]:
quarter = str(year) + '-Q3'
elif mes in [10, 11, 12]:
quarter = str(year) + '-Q4'
return quarter
df['quarter'] = df.apply(quarter, axis=1)
###Output
_____no_output_____
###Markdown
3. Airline metrics for efficiency and capacity

Since there is no data dictionary, it is now a good time to talk about some interesting variables:
- RPK, meaning "Revenue Passenger Kilometers", is an air transport industry metric that aggregates the number of paying passengers and the quantity of kilometers traveled by them. It is calculated by multiplying the number of paying passengers by the distance traveled in kilometers.
- ASK, meaning "Available Seat Kilometers", is similar to the RPK, but instead of the paying passengers, the passenger capacity (number of seats available in the aircraft) is multiplied by the traveled distance.
- RTK (for "Revenue tonne kilometres") measures the revenue cargo load in tonnes multiplied by the distance flown in kilometers.
- ATK (for "Available tonne kilometres") measures the aircraft capacity of cargo load in tonnes multiplied by the distance flown in kilometers.

The dataframe presents not only the value of these parameters but also the variables that compose their formulas. Therefore, let's make a consistency check, verifying that it is possible to reproduce their values from the variables. The formulas are:

$ RPK = \frac{\sum{PayingPassengers} \ \times \ distance}{\sum{flights}} $

$ ASK = \frac{\sum{Seats} \ \times \ distance}{\sum{flights}} $

$ RTK = \frac{(AvgWeight \ \times \ \sum{PayingPassengers} \ + \ BaggageWeight \ + \ CargoWeight \ + \ MailWeight) \ \times \ distance}{1000 \ \times \ \sum{flights}} $

$ ATK = \frac{\sum{Payload} \ \times \ distance}{1000 \ \times \ \sum{flights}} $

The only variable not given in our data set is AvgWeight. How about we calculate the AvgWeight that gives the smallest difference between the given RTK and the calculated RTK? This is an optimization problem that we will define below:

$$\min_{AvgWeight} \left| RTK_{given} - \frac{(AvgWeight \ \times \ \sum{PayingPassengers} \ + \ BaggageWeight \ + \ CargoWeight \ + \ MailWeight) \ \times \ distance}{1000 \ \times \ \sum{flights}} \right| $$
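To make the RTK formula concrete, here is a quick computation with made-up monthly totals (hypothetical numbers, not a row of the dataset):
###Code
# hypothetical monthly totals for one airline, for illustration only
avg_weight = 75        # kg per paying passenger (the unknown we want to estimate)
passengers = 1600      # paying passengers
baggage_kg = 20000
cargo_kg = 5000
mail_kg = 0
distance_km = 150000   # distance flown
flights = 16           # departures ('decolagens')
rtk = (avg_weight * passengers + baggage_kg + cargo_kg + mail_kg) * distance_km / (1000 * flights)
print(rtk)             # the implied revenue tonne-kilometres
###Output
_____no_output_____
###Markdown
Let's define the optimization function (with some margin of error) and use the library Scipy to optimize this problem. In practice, the objective below counts how many rows match within a tolerance of 1000 and minimizes the reciprocal of that count, which is equivalent to maximizing the number of matching rows.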
###Code
def matching(k):
    # k is the candidate average passenger weight (kg)
    dummy = []
    for index, x in df.iterrows():
        if x['decolagens'] == 0:
            # no departures: just require the reported rtk to be (close to) zero
            dummy.append(abs(x['rtk']) < 1000)
        else:
            # True when the reported rtk matches the recalculated rtk within the tolerance
            dummy.append(abs(x['rtk'] - (k*x['passageiros_pagos']+x['carga_paga_kg']+x['correio_kg']+x['bagagem_kg'])*x['distancia_voada_km']/
                             (1000*x['decolagens'])) < 1000)
    # minimizing the reciprocal of the match count maximizes the number of matches
    return 1/sum(dummy)
from scipy import optimize
res = optimize.minimize_scalar(matching, bounds=(70,150), method='bounded',
options={'maxiter':100})
print(res)
###Output
fun: 2.477700693756194e-05
message: 'Solution found.'
nfev: 25
status: 0
success: True
x: 75.0006857462938
###Markdown
Great, so we have the value 75. Let's apply it and calculate the consistency of this value:
###Code
dummy = []
for index, x in df.iterrows():
if x['decolagens'] == 0:
dummy.append(abs(x['rtk']) < 1000)
else:
dummy.append(abs(x['rtk'] - (75*x['passageiros_pagos']+x['carga_paga_kg']+x['correio_kg']+x['bagagem_kg'] )*
x['distancia_voada_km']/(1000*x['decolagens'])) < 1000)
print('The number of rtk values that correspond to rtk calculation is: {:.2f}%'.format(100*sum(dummy)/len(dummy)))
df['rtk_calc']=(75*df['passageiros_pagos']+df['carga_paga_kg']+df['correio_kg']+df['bagagem_kg']
)*df['distancia_voada_km']/(1000*df['decolagens'])
###Output
The number of rtk values that correspond to rtk calculation is: 56.28%
###Markdown
We can see that the consistency is a little over 50%. One clear disadvantage of the calculated RTK is that the same average weight (75 kg) was used for all passengers of all airlines. This assumption implies that Brazilian and foreign companies use (or have to use) the same value for passenger weight to do their flight planning, which may not be true. Let's observe whether being a Brazilian or foreign airline has an effect on the relationship between reported RTK and calculated RTK:
###Code
sns.scatterplot(x=df['rtk'],y=df['rtk_calc'],hue=df['empresa_nacionalidade'])
###Output
_____no_output_____
###Markdown
We can see clearly that the line y=x has many Brazilian airlines on it, but not foreign ones. Also, there is a second line below the y=x line, suggesting a different tendency for some foreign airlines. Let's improve the optimization problem by considering this fact. The optimization function defined above will be split in two: one to optimize the weight for Brazilian airlines and the other one for foreign airlines.
###Code
def matching_br(k):
dummy = []
for index, x in df[df['empresa_nacionalidade']=='BRASILEIRA'].iterrows():
if x['decolagens'] == 0:
dummy.append(abs(x['rtk']) < 1000)
else:
dummy.append(abs(x['rtk'] - (k*x['passageiros_pagos']+x['carga_paga_kg']+x['correio_kg']+x['bagagem_kg'])*x['distancia_voada_km']/
(1000*x['decolagens'])) < 1000)
return 1/sum(dummy)
def matching_frgn(k):
dummy = []
for index, x in df[df['empresa_nacionalidade']=='ESTRANGEIRA'].iterrows():
if x['decolagens'] == 0:
dummy.append(abs(x['rtk']) < 1000)
else:
dummy.append(abs(x['rtk'] - (k*x['passageiros_pagos']+x['carga_paga_kg']+x['correio_kg']+x['bagagem_kg'])*x['distancia_voada_km']/
(1000*x['decolagens'])) < 1000)
return 1/sum(dummy)
res_br = optimize.minimize_scalar(matching_br, bounds=(70,150), method='bounded',
options={'maxiter':100})
print(res_br)
res_frgn = optimize.minimize_scalar(matching_frgn, bounds=(70,150), method='bounded',
options={'maxiter':100})
print(res_frgn)
###Output
fun: 2.5802456393848696e-05
message: 'Solution found.'
nfev: 27
status: 0
success: True
x: 75.00044845613596
fun: 0.00028669724770642203
message: 'Solution found.'
nfev: 22
status: 0
success: True
x: 90.0005090318264
###Markdown
By optimizing the error between reported RTK and calculated RTK for Brazilian airlines and foreign airlines separately, we arrive at the following values:
- Brazilian airlines have 75 kg as the best average value for passenger weight;
- Foreign airlines have 90 kg as the best average value for passenger weight.

With this knowledge, let's calculate the RTK again:
###Code
dummy = []
rtk_calc = []
for index, x in df.iterrows():
if x['empresa_nacionalidade'] == 'BRASILEIRA':
avgw = 75
elif x['empresa_nacionalidade'] == 'ESTRANGEIRA':
avgw = 90
if x['decolagens'] == 0:
rtk = float('NaN')
dummy.append(abs(x['rtk']) < 1000)
else:
rtk = (avgw*x['passageiros_pagos']+x['carga_paga_kg']+x['correio_kg']+x['bagagem_kg']
)*x['distancia_voada_km']/(1000*x['decolagens'])
dummy.append(abs(x['rtk'] - rtk) < 1000)
rtk_calc.append(rtk)
print('The number of rtk values that correspond to rtk calculation is: {:.2f}%'.format(100*sum(dummy)/len(dummy)))
df['rtk_calc'] = rtk_calc
del dummy, rtk_calc, rtk
###Output
The number of rtk values that correspond to rtk calculation is: 58.90%
###Markdown
We see now that the match of RTK values rose from 56.28% to 58.90%. Let's also replot the previous graphic with the corrected calculated RTK.
###Code
sns.scatterplot(x=df['rtk'],y=df['rtk_calc'],hue=df['empresa_nacionalidade'])
###Output
_____no_output_____
###Markdown
We can see that the second tendency line is gone, since we have taken its behaviour into consideration in our model. It would be very interesting to find other behaviors to use in this optimization problem. The other variables, however, do not show clusters clear enough to justify their use in the model. Out of curiosity, let's check a few examples.
###Code
ax = sns.scatterplot(x=df['rtk'],y=df['rtk_calc'],hue=df['decolagens'])
ax = sns.scatterplot(x=df['rtk'],y=df['rtk_calc'],hue=df['assentos'])
ax = sns.scatterplot(x=df['rtk'],y=df['rtk_calc'],hue=df['payload'])
###Output
_____no_output_____ |
notebooks/compute_nino3_4index.ipynb | ###Markdown
Compute Nino3.4 DJF index for each model, and save to file
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from scipy.signal import detrend
from matplotlib import pyplot as plt
from eofs.xarray import Eof
from scipy import signal
import pandas as pd
import xarray as xr
import xesmf as xe
import pprint
import intake
import util
# choose where to load data from:
load_data_from = 'cloud'
#load_data_from = 'glade'
if load_data_from == 'glade':
col = intake.open_esm_datastore("../catalogs/glade-cmip6.json")
file = 'available_data.txt'
else:
col_url = "https://raw.githubusercontent.com/NCAR/intake-esm-datastore/master/catalogs/pangeo-cmip6.json"
col = intake.open_esm_datastore(col_url)
#col = intake.open_esm_datastore("../catalogs/pangeo-cmip6.json")
file = 'available_data_cloud.txt'
# pick only models with at least 496 yrs in piControl
minyrs_control = 496;
# models with fewer years often missed future scenarios, so they are not so interesting for us
# load table:
data_table = pd.read_table(file,index_col=0)
models_used = data_table['piControl (yrs)'][data_table['piControl (yrs)'] >= minyrs_control].index
print(models_used)
###Output
Index(['BCC-CSM2-MR', 'CanESM5', 'CNRM-CM6-1', 'CNRM-ESM2-1', 'E3SM-1-0',
'EC-Earth3', 'EC-Earth3-Veg', 'MIROC-ES2L', 'MIROC6', 'HadGEM3-GC31-LL',
'HadGEM3-GC31-MM', 'UKESM1-0-LL', 'MRI-ESM2-0', 'GISS-E2-1-G', 'CESM2',
'CESM2-WACCM', 'GFDL-ESM4', 'SAM0-UNICON', 'MCM-UA-1-0'],
dtype='object')
###Markdown
Choose what model to use
###Code
model = models_used[14]
model
data_table.loc[model]
# what experiments does this model have that we want to study?
if any(data_table.loc[model][:6] == 'data problem') == False:
exp_list = [exp[:-11] for exp in data_table.loc[model][:6].index if float(data_table.loc[model][:6][exp]) > 0]
else:
exp_list = []
for exp in (data_table.loc[model][:6].index):
if (data_table.loc[model][:6][exp] != 'data problem'):
exp_list = np.append(exp_list, exp[:-11])
print(exp_list)
exp_keys = {}; datasets = {}
for exp in exp_list:
#for exp in [exp_list[1]]:
print(exp)
#cat = col.search(experiment_id = exp, source_id = model, variable_id='ts', table_id='Amon', member_id = 'r1i1p1f1')
cat = col.search(experiment_id = exp, source_id = model, variable_id='ts', table_id='Amon')
dset_dict = cat.to_dataset_dict(zarr_kwargs={'consolidated': True}, cdf_kwargs={'chunks': {}})
for key in dset_dict.keys():
exp_keys[exp] = key
datasets[key] = dset_dict[key]
exp_keys
# load a dataset for manual calendar check:
exp = exp_list[0]; print(exp)
key = exp_keys[exp]
exp_datasets = datasets[key]
members_sorted = exp_datasets.member_id.sortby(exp_datasets.member_id)
ds = exp_datasets.sel(member_id = members_sorted[0])
#print(ds.time)
# the calendar of each model, found by manual checks like the one above, is hard-coded in this if-test:
if model in ['BCC-CSM2-MR', 'FGOALS-g3', 'CanESM5', 'E3SM-1-0', 'GISS-E2-1-G', 'GISS-E2-1-H', 'CESM2', 'CESM2-WACCM', 'GFDL-CM4', 'SAM0-UNICON', 'GFDL-ESM4', 'MCM-UA-1-0']:
ds_calendar = 'noleap'
elif model in ['CNRM-CM6-1', 'CNRM-ESM2-1', 'IPSL-CM6A-LR', 'MIROC-ES2L', 'MIROC6']:
ds_calendar = 'gregorian'
elif model in ['EC-Earth3', 'EC-Earth3-Veg', 'MRI-ESM2-0']:
ds_calendar = 'proleptic_gregorian'
elif model in ['UKESM1-0-LL', 'HadGEM3-GC31-LL', 'HadGEM3-GC31-MM']:
ds_calendar = '360_day'
print(ds_calendar, 'calendar')
def area_weights(lat_bnds, lon_bnds):
    # computes exact area weights assuming the earth is a perfect sphere
lowerlats = np.radians(lat_bnds[:,0]); upperlats = np.radians(lat_bnds[:,1])
difflon = np.radians(np.diff(lon_bnds[0,:])) # if the differences in longitudes are all the same
areaweights = difflon*(np.sin(upperlats) - np.sin(lowerlats));
areaweights /= areaweights.mean()
return areaweights # list of weights, of same dimension as latitude
# function copied from: http://xarray.pydata.org/en/stable/examples/monthly-means.html
def leap_year(year, calendar='standard'):
"""Determine if year is a leap year"""
leap = False
if ((calendar in ['standard', 'gregorian',
'proleptic_gregorian', 'julian']) and
(year % 4 == 0)):
leap = True
if ((calendar == 'proleptic_gregorian') and
(year % 100 == 0) and
(year % 400 != 0)):
leap = False
elif ((calendar in ['standard', 'gregorian']) and
(year % 100 == 0) and (year % 400 != 0) and
(year < 1583)):
leap = False
return leap
# function copied from: http://xarray.pydata.org/en/stable/examples/monthly-means.html
def get_dpm(time, calendar='standard'):
"""
    return an array of days per month corresponding to the months provided in `time`
"""
    month_length = np.zeros(len(time), dtype=int)  # plain int: np.int is deprecated in newer NumPy
cal_days = dpm[calendar]
for i, (month, year) in enumerate(zip(time.month, time.year)):
month_length[i] = cal_days[month]
if leap_year(year, calendar=calendar) and month == 2: # the feb-test is missing at the website!
month_length[i] += 1
return month_length
# inspiration taken from: http://xarray.pydata.org/en/stable/examples/monthly-means.html
# days per month:
dpm = {'noleap': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'proleptic_gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'360_day': [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]
}
def day_weights(ds, chosen_season = 'DJF', calendar = 'noleap'):
    # use the function arguments (the callers pass in season and ds_calendar) rather than the globals
    month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar=calendar), coords=[ds.time], name='month_length')
    if chosen_season == 'DJF':
        season_months = month_length.where(month_length['time.season'] == chosen_season)
        # repeat last December month, and move it to the beginning
        season_months = xr.concat([season_months[-1], season_months], dim = 'time')
        norm_by_annual = season_months[1:].groupby('time.year').mean('time') # make annual mean
        norm_by_monthly = np.concatenate([np.tile(norm_by_annual.values[i], 12) for i in range(len(norm_by_annual.values))])
        # repeat last December month to give it equal length as season_months. Value of last month will not be used.
        norm_by_monthly = np.concatenate([norm_by_monthly, [norm_by_monthly[-1]]])
        weights = season_months/norm_by_monthly
        # make weights have mean 1 in the chosen season for all years
        # can be checked by running weights.rolling(min_periods=3, center=True, time=3).mean()
        # note that these weights start with a December month
    elif chosen_season == 'all':
        norm_by_annual = month_length.groupby('time.year').mean('time') # make annual mean
        norm_by_monthly = np.concatenate([np.tile(norm_by_annual.values[i], 12) for i in range(len(norm_by_annual.values))])
        weights = month_length/norm_by_monthly
        # normalized to have mean 1
    # if another season is wanted, continue developing this if-test
    # NB: normalized weights do not care what numbers are produced for other seasons
    return weights
latregion = slice(-5,5); lonregion = slice(190, 240) # = 120 W - 170 W
# use larger region before regridding, that adds 5 deg to each border:
larger_latregion = slice(-10,10); larger_lonregion = slice(185, 245)
resolution = 1;
ds_out = xr.Dataset({'lon': (['lon'], np.arange(lonregion.start+resolution/2, lonregion.stop+resolution/2, resolution)),
'lat': (['lat'], np.arange(latregion.start+resolution/2, latregion.stop+resolution/2, resolution))
}
)
regr_lat_bnds = np.array([[upper, upper+resolution] for upper in range(latregion.start,latregion.stop)])
regr_lon_bnds = np.array([[upper, upper+resolution] for upper in range(lonregion.start,lonregion.stop)])
area_w = area_weights(regr_lat_bnds, regr_lon_bnds)
season = 'DJF'
lastD = {}
for exp in exp_list:
#for exp in exp_list[:2]:
key = exp_keys[exp]
exp_datasets = datasets[key]
members_sorted = exp_datasets.member_id.sortby(exp_datasets.member_id)
#for member in [members_sorted.values[0]]: # check for first member only
for member in members_sorted.values:
print(exp, member)
ds = exp_datasets.sel(member_id = member)
# select regional data, perform a regridding, and compute area average
if model == 'MCM-UA-1-0':
ds = ds.rename({'longitude': 'lon','latitude': 'lat'})
regional_data = ds.ts.sel(lat = larger_latregion, lon = larger_lonregion)
regridder = xe.Regridder(regional_data, ds_out, 'bilinear', reuse_weights = True)
regridded_data = regridder(regional_data)
area_avg = (regridded_data.transpose('time', 'lon', 'lat') * area_w).mean(dim=['lon', 'lat'])
yrs = int(area_avg.shape[0]/12)
weights = day_weights(area_avg, chosen_season = season, calendar = ds_calendar)
# double check that weights are 1 for all seasons
meanweights = weights.rolling(min_periods=3, center=True, time=3).mean()
print('years in experiment:', yrs, ' ', 'mean weights all 1?', all(meanweights.dropna(dim = 'time') == 1))
if exp == 'historical':
# save last december month for each member for use in season mean in first year of ssp exps
lastD[member] = area_avg[-1]
weights = weights[1:] # drop first december month
elif exp == 'piControl':
weights = weights[1:] # drop first december month
elif exp not in ['piControl','historical']: # then it must be future scenario
area_avg = xr.concat([lastD[member], area_avg], dim = 'time')
weights = weights.assign_coords(time = area_avg.time)
# average over season
day_weighted_avg = area_avg*weights
ds_season = day_weighted_avg.where(day_weighted_avg['time.season'] == season) # creates nan in all other months
ds_season3 = ds_season.rolling(min_periods=3, center=True, time=3).mean()
if exp not in ['piControl','historical']:
# remove nan-value obtained from inserting last december month from historical
ds_season3 = ds_season3[1:]
seasonmean = ds_season3.groupby('time.year').mean('time') # make annual mean
# no information the first year of piControl and historical, since we are missing the december month before
# day-weighted rolling 3-months mean for all months (with seasonal variations)
#day_weighted_avg_allyear = area_avg*day_weights(yrs, chosen_season = 'all')
#smoothed_allyear = day_weighted_avg_allyear.rolling(min_periods=3, center=True, time=3).mean()
colname = [(exp, member)]
first_member_piControl = 'r1i1p1f1'
if model in ['CNRM-CM6-1', 'CNRM-ESM2-1', 'UKESM1-0-LL', 'MIROC-ES2L']:
first_member_piControl = 'r1i1p1f2'
elif model in ['GISS-E2-1-G']:
first_member_piControl = 'r101i1p1f1'
if exp == 'piControl' and member == first_member_piControl:
# create dataframe for storing all results and make the piControl years the index
df = pd.DataFrame(seasonmean.values, columns = colname)
else:
df_col = pd.DataFrame(seasonmean.values, columns = colname)
df = pd.merge(df, df_col, left_index=True, right_index=True, how='outer')
df.columns = pd.MultiIndex.from_tuples(df.columns, names=['Experiment','Member'])
# check values in last December for historical
[lastD[member].values for member in lastD.keys()]
###Output
_____no_output_____
###Markdown
check first and last rows of ssp exps
###Code
#pd.set_option('display.min_rows', 90)
df.iloc[0]
pd.set_option('display.max_columns', 100)
df.iloc[85:88]
###Output
_____no_output_____
###Markdown
Save data to file
###Code
#df.to_csv('../Processed_data/Nino3_4_DJF/' + model + '_DJF_nino3_4index.txt')
###Output
_____no_output_____
###Markdown
Similar code as above, but for computing 3-month running mean index for all months:
###Code
# For Nino3.4 region:
#latregion = slice(-5,5); lonregion = slice(190, 240) # = 120 W - 170 W
# use larger region before regridding, that adds 5 deg to each border:
#larger_latregion = slice(-10,10); larger_lonregion = slice(185, 245)
# For Nino3 region:
#latregion = slice(-5,5); lonregion = slice(210, 270) # = 150 W - 90 W
#larger_latregion = slice(-10,10); larger_lonregion = slice(205, 275)
# For warm pool:
latregion = slice(-5,5); lonregion = slice(120, 170)
larger_latregion = slice(-10,10); larger_lonregion = slice(115, 175)
resolution = 1;
ds_out = xr.Dataset({'lon': (['lon'], np.arange(lonregion.start+resolution/2, lonregion.stop+resolution/2, resolution)),
'lat': (['lat'], np.arange(latregion.start+resolution/2, latregion.stop+resolution/2, resolution))
}
)
regr_lat_bnds = np.array([[upper, upper+resolution] for upper in range(latregion.start,latregion.stop)])
regr_lon_bnds = np.array([[upper, upper+resolution] for upper in range(lonregion.start,lonregion.stop)])
area_w = area_weights(regr_lat_bnds, regr_lon_bnds)
season = 'all'
lastD = {}; lastW = {}
for exp in exp_list:
#for exp in exp_list[:2]:
key = exp_keys[exp]
exp_datasets = datasets[key]
members_sorted = exp_datasets.member_id.sortby(exp_datasets.member_id)
#for member in [members_sorted.values[0]]: # check for first member only
for member in members_sorted.values:
print(exp, member)
ds = exp_datasets.sel(member_id = member)
# select regional data, perform a regridding, and compute area average
if model == 'MCM-UA-1-0':
ds = ds.rename({'longitude': 'lon','latitude': 'lat'})
regional_data = ds.ts.sel(lat = larger_latregion, lon = larger_lonregion)
regridder = xe.Regridder(regional_data, ds_out, 'bilinear', reuse_weights = True)
regridded_data = regridder(regional_data)
area_avg = (regridded_data.transpose('time', 'lon', 'lat') * area_w).mean(dim=['lon', 'lat'])
yrs = int(area_avg.shape[0]/12)
weights = day_weights(area_avg, chosen_season = season, calendar = ds_calendar)
if exp == 'historical':
# save last december month for each member for use in season mean in first year of ssp exps
lastD[member] = area_avg[-1]
lastW[member] = weights[-1]
elif exp not in ['piControl','historical']: # then it must be future scenario
area_avg = xr.concat([lastD[member], area_avg], dim = 'time')
weights = xr.concat([lastW[member], weights], dim = 'time')
# average over season with area weights of mean 1 within each year
#day_weighted_avg = area_avg*weights
#ds_season3 = day_weighted_avg.rolling(min_periods=3, center=True, time=3).mean()
# convert to numpy array for increased computational speed
weights = np.array(weights); area_avg = np.array(area_avg)
        # do the rolling mean in a for-loop, to give the weights a mean of 1 in each season
        ds_season3 = np.full(len(area_avg), np.nan)
        for t in range(1, len(area_avg)-1):
            season_weights = weights[t-1:t+2]/weights[t-1:t+2].mean()
            ds_season3[t] = np.mean(area_avg[t-1:t+2]*season_weights)
if exp not in ['piControl','historical']:
# remove nan-value obtained from inserting last december month from historical
ds_season3 = ds_season3[1:]
colname = [(exp, member)]
first_member_piControl = 'r1i1p1f1'
if model in ['CNRM-CM6-1', 'CNRM-ESM2-1', 'UKESM1-0-LL', 'MIROC-ES2L']:
first_member_piControl = 'r1i1p1f2'
elif model in ['GISS-E2-1-G']:
first_member_piControl = 'r101i1p1f1'
if exp == 'piControl' and member == first_member_piControl:
# create dataframe for storing all results and make the piControl years the index
#df = pd.DataFrame(ds_season3.values, columns = colname)
df = pd.DataFrame(ds_season3, columns = colname)
else:
#df_col = pd.DataFrame(ds_season3.values, columns = colname)
df_col = pd.DataFrame(ds_season3, columns = colname)
df = pd.merge(df, df_col, left_index=True, right_index=True, how='outer')
df.columns = pd.MultiIndex.from_tuples(df.columns, names=['Experiment','Member'])
df.iloc[:5]
df.iloc[1030:1035]
###Output
_____no_output_____
###Markdown
save data to file:
###Code
#df.to_csv('../Processed_data/Nino3_4_monthly/' + model + '_nino3_4monthlyindex.txt')
#df.to_csv('../Processed_data/Nino3_monthly/' + model + '_nino3_monthlyindex.txt')
#df.to_csv('../Processed_data/WP_monthly/' + model + '_wp_monthlyindex.txt')
###Output
_____no_output_____ |
MicroGrad.ipynb | ###Markdown
MicroGrad

A tiny Autograd engine
###Code
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# The tiniest Autograd engine. It's so cute!
class Value:
""" stores a single scalar value and its gradient """
def __init__(self, data):
self.data = data
self.grad = 0
        self.backward = lambda: None  # leaves have nothing to propagate; ops replace this with a real backward
def __add__(self, other):
other = other if isinstance(other, Value) else Value(other) # attempt to wrap if given an int/float/etc
out = Value(self.data + other.data)
def backward():
self.grad += out.grad
other.grad += out.grad
self.backward()
other.backward()
out.backward = backward
return out
def __radd__(self, other):
return self.__add__(other)
def __mul__(self, other):
other = other if isinstance(other, Value) else Value(other) # attempt to wrap if given an int/float/etc
out = Value(self.data * other.data)
def backward():
self.grad += other.data * out.grad
other.grad += self.data * out.grad
self.backward()
other.backward()
out.backward = backward
return out
def __rmul__(self, other):
return self.__mul__(other)
def relu(self):
out = Value(0 if self.data < 0 else self.data)
def backward():
self.grad += (out.data > 0) * out.grad
self.backward()
out.backward = backward
return out
def __repr__(self):
return f"Value(data={self.data}, grad={self.grad})"
# A neural networks "library" :D on top of it! I'm dying
class Module:
def zero_grad(self):
for p in self.parameters():
p.grad = 0
class Neuron(Module):
def __init__(self, nin, nonlin=True):
self.w = [Value(random.uniform(-1,1)) for _ in range(nin)]
self.b = Value(0)
self.nonlin = nonlin
def __call__(self, x):
act = sum([wi*xi for wi,xi in zip(self.w, x)], self.b)
return act.relu() if self.nonlin else act
def parameters(self):
return self.w + [self.b]
def __repr__(self):
return f"{'ReLU' if self.nonlin else 'Linear'}Neuron({len(self.w)})"
class Layer(Module):
def __init__(self, nin, nout, **kwargs):
self.neurons = [Neuron(nin, **kwargs) for _ in range(nout)]
def __call__(self, x):
out = [n(x) for n in self.neurons]
return out[0] if len(out) == 1 else out
def parameters(self):
return [p for n in self.neurons for p in n.parameters()]
def __repr__(self):
return f"Layer of [{', '.join(str(n) for n in self.neurons)}]"
class MLP(Module):
def __init__(self, nin, nouts):
sz = [nin] + nouts
self.layers = [Layer(sz[i], sz[i+1], nonlin=i!=len(nouts)-1) for i in range(len(nouts))]
def __call__(self, x):
for layer in self.layers:
x = layer(x)
return x
def parameters(self):
return [p for layer in self.layers for p in layer.parameters()]
def __repr__(self):
return f"MLP of [{', '.join(str(layer) for layer in self.layers)}]"
np.random.seed(1337)
random.seed(1337)
# make up a dataset
from sklearn.datasets import make_moons, make_blobs
X, y = make_moons(n_samples=100, noise=0.1)
y = y*2 - 1 # make y be -1 or 1
# visualize in 2D
plt.figure(figsize=(5,5))
plt.scatter(X[:,0], X[:,1], c=y, s=20, cmap='jet')
# initialize a model
#model = MLP(2, [12, 10, 1]) # 2-layer neural network
model = MLP(2, [16, 16, 1]) # 2-layer neural network
print(model)
print("number of parameters", len(model.parameters()))
# loss function
def loss(batch_size=None):
# inline DataLoader :)
if batch_size is None:
Xb, yb = X, y
else:
ri = np.random.permutation(X.shape[0])[:batch_size]
Xb, yb = X[ri], y[ri]
inputs = [list(map(Value, xrow)) for xrow in Xb]
# forward the model to get scores
scores = list(map(model, inputs))
# svm "max-margin" loss
losses = [(Value(yi) * scorei + 1).relu() for yi, scorei in zip(yb, scores)]
data_loss = sum(losses) * (1.0 / len(losses))
# L2 regularization
alpha = 1e-4
reg_loss = alpha * sum((p*p for p in model.parameters()))
total_loss = data_loss + reg_loss
# also get accuracy
accuracy = [yi == (int(scorei.data < 0)*2-1) for yi, scorei in zip(yb, scores)]
return total_loss, sum(accuracy) / len(accuracy)
total_loss, acc = loss()
print(total_loss, acc)
# optimization
learning_rate = 0.001
for k in range(200):
# forward
total_loss, acc = loss()
# backward
model.zero_grad()
total_loss.grad = 1
total_loss.backward()
# update (sgd)
for p in model.parameters():
p.data -= learning_rate * p.grad
if k % 1 == 0:
print(f"step {k} loss {total_loss.data}, accuracy {acc*100}%")
# visualize decision boundary
h = 0.25
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Xmesh = np.c_[xx.ravel(), yy.ravel()]
inputs = [list(map(Value, xrow)) for xrow in Xmesh]
scores = list(map(model, inputs))
Z = np.array([s.data < 0 for s in scores])
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
###Output
_____no_output_____ |
notebook/2018-02-01_slice_and_dice_biomarkers.ipynb | ###Markdown
Slice and Dice Biomarkers

Brian wants the top 10 named biomarkers for each cluster to start a literature review. There are several ways this slicing and dicing can be done, so it will probably be easier to present him with a few tables.
###Code
import sys
import os
from pathlib import Path
import numpy as np
import pandas as pd
sys.path.insert(0, '../lib')
from larval_gonad.x_to_a import CHROMS_CHR
# Constants
REF = os.environ['REFERENCES_DIR']
OUTPUT = '../output/testis_scRNAseq_pilot'
Path(OUTPUT).mkdir(exist_ok=True)
NAME = '2018-02-01_slice_and_dice_biomarkers'
# Create fbgn2symbol and symbol2fbgn map
annot = pd.read_csv(Path(REF, 'dmel/r6-16/fb_annotation/dmel_r6-16.fb_annotation'),
sep='\t', index_col=1)
fbgn2symbol = annot['gene_symbol'].to_dict()
symbol2fbgn = {v: k for k, v in fbgn2symbol.items()}
# Create fbgn2chrom
genes = []
with Path(REF, 'dmel/r6-16/gtf/dmel_r6-16.gtf').open() as fh:
for row in fh:
rows = row.strip().split()
if len(rows) == 0:
continue
if rows[2] == 'gene':
genes.append((rows[0], rows[9].replace('"', '').replace(';', '')))
fbgn2chrom = pd.DataFrame(genes, columns=['chrom', 'FBgn'])
fbgn2chrom.set_index('FBgn', inplace=True)
fbgn2chrom = fbgn2chrom[fbgn2chrom['chrom'].isin(CHROMS_CHR)]
# Get biomarker datas and cleanup
df = pd.read_csv(f'{OUTPUT}/biomarkers.tsv', sep='\t', index_col='gene')
df.index.name = 'FBgn'
df['gene'] = df.index.map(lambda x: fbgn2symbol[x])
df.set_index('gene', append=True, inplace=True)
# Remove CG and CRs
cg = ~df.index.get_level_values('gene').str.startswith('CG')
cr = ~df.index.get_level_values('gene').str.startswith('CR')
pv = df.p_val_adj < .01
df = df[cg & cr & pv]
df.to_csv(f'{OUTPUT}/{NAME}_named_cluster_markers.tsv', sep='\t')
# Sort by adj p-val
clean = df.sort_values(by='p_val_adj').groupby('cluster').head(10).sort_values('cluster').drop(['p_val', 'pct.1', 'pct.2'], axis=1)
clean['link'] = clean.index.get_level_values('FBgn').map(lambda fbgn: '=HYPERLINK("http://flybase.org/reports/{}", "FlyBase")'.format(fbgn))
clean.to_csv(f'{OUTPUT}/{NAME}_top10_adj-pval_cluster_markers.tsv', sep='\t')
# Sort by logFC
df['abs_avg_logFC'] = np.abs(df.avg_logFC)
clean = df.sort_values(by='abs_avg_logFC', ascending=False).groupby('cluster').head(10).sort_values('cluster').drop(['p_val', 'pct.1', 'pct.2'], axis=1)
clean['link'] = clean.index.get_level_values('FBgn').map(lambda fbgn: '=HYPERLINK("http://flybase.org/reports/{}", "FlyBase")'.format(fbgn))
clean.to_csv(f'{OUTPUT}/{NAME}_top10_avg-logFC_cluster_markers.tsv', sep='\t')
# sort by difference pct cells expressed
df['pct_diff'] = np.abs(df['pct.1'] - df['pct.2'])
clean = df.sort_values(by='pct_diff', ascending=False).groupby('cluster').head(10).sort_values('cluster').drop(['p_val'], axis=1)
clean['link'] = clean.index.get_level_values('FBgn').map(lambda fbgn: '=HYPERLINK("http://flybase.org/reports/{}", "FlyBase")'.format(fbgn))
clean.to_csv(f'{OUTPUT}/{NAME}_top10_pct-cells-diff_cluster_markers.tsv', sep='\t')
###Output
_____no_output_____ |
_notebooks/2020-05-07-chapter8.ipynb | ###Markdown
Statistical Rethinking Chapter 8
> Code rewritten in Python for this chapter's practice

- toc: true
- badges: true
- comments: true
- categories: [statistical_rethinking]
###Code
import numpy as np
import pandas as pd
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
###Output
_____no_output_____
###Markdown
8H1 8H2
###Code
d = pd.read_csv(
'https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/tulips.csv', sep=';')
d.head()
d['blooms_std'] = d['blooms'] / d['blooms'].max()
d['water_cent'] = d['water'] - d['water'].mean()
d['shade_cent'] = d['shade'] - d['shade'].mean()
with pm.Model() as model_8_7:
a = pm.Normal('a', mu=0.5, sd=0.25)
bW = pm.Normal('bW', mu=0, sd=0.25)
bS = pm.Normal('bS', mu=0, sd=0.25)
bWS = pm.Normal('bWS', mu=0, sd=0.25)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a + bW * d['water_cent'] + bS * d['shade_cent'] +
bWS * d['water_cent'] * d['shade_cent'])
blooms = pm.Normal('blooms', mu, sigma, observed=d.blooms_std)
trace_8_7 = pm.sample(1000, tune=1000)
# start = {'a':np.mean(d.blooms), 'bW':0, 'bS':0, 'bWS':0, 'sigma':np.std(d.blooms)}
varnames = ['a', 'bW', 'bS', 'bWS', 'sigma']
pm.summary(trace_8_7, varnames, kind='stats').round(3)
with pm.Model() as model_8H1:
a = pm.Normal('a', mu=0.5, sd=0.25)
bB = pm.Normal('bB', 0, 0.1, shape=d['bed'].nunique())
bW = pm.Normal('bW', mu=0, sd=0.25)
bS = pm.Normal('bS', mu=0, sd=0.25)
bWS = pm.Normal('bWS', mu=0, sd=0.25)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu',
a + bB[d['bed'].astype('category').cat.codes.values] + bW * d['water_cent'] +
bS * d['shade_cent'] + bWS * d['water_cent'] * d['shade_cent'])
blooms = pm.Normal('blooms', mu, sigma, observed=d.blooms_std)
trace_8H1 = pm.sample(1000, tune=1000)
varnames = ['a', 'bB', 'bW', 'bS', 'bWS', 'sigma']
pm.summary(trace_8H1, varnames, kind='stats').round(3)
###Output
_____no_output_____
###Markdown
Compare WAIC
###Code
comp_df = pm.compare({'without_bed': trace_8_7, 'with_bed': trace_8H1})
comp_df
###Output
/Users/kani/Documents/KatrinaLand/explain-the-mark-experimentation/venv/lib/python3.7/site-packages/arviz/stats/stats.py:1196: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail.
See http://arxiv.org/abs/1507.04544 for details
"For one or more samples the posterior variance of the log predictive "
###Markdown
- The value of bB indicates a weak relationship, as its credible interval includes zero
- dse is 4.41 and d_waic is 3.64, which means the difference in WAIC between these two models is not significant
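A quick numeric check of that claim, using the numbers quoted above rather than comp_df's columns (whose names vary across pymc3/ArviZ versions):
###Code
d_waic, dse = 3.64, 4.41
print(abs(d_waic) / dse)      # ~0.83 standard errors away from zero
print(abs(d_waic) < 2 * dse)  # True -> not a significant difference
###Output
_____no_output_____
###Markdown
8H3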
###Code
d = pd.read_csv('https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/rugged.csv', sep=';')
d = d.dropna(subset=['rgdppc_2000'])
d['log_gdp_std'] = np.log(d['rgdppc_2000']) / np.log(d['rgdppc_2000']).mean()
d['rugged_std'] = d['rugged'] / d['rugged'].max()
dd = d[d['country'] != 'Seychelles']
###Output
_____no_output_____
###Markdown
With Seychelles
###Code
with pm.Model() as model_8_5:
a = pm.Normal('a', mu=1, sd=0.1, shape=d['cont_africa'].nunique())
b = pm.Normal('b', mu=0, sd=0.3, shape=d['cont_africa'].nunique())
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a[d['cont_africa'].values] + b[d['cont_africa'].values] *
(d.rugged_std - 0.215))
log_gdp = pm.Normal('log_gdp', mu, sigma, observed=d.log_gdp_std)
trace_8_5 = pm.sample(1000, tune=1000)
mean_q = pm.find_MAP()
means = np.concatenate([mean_q[k].reshape(-1) for k in ['a', 'b', 'sigma']])
cov_q = np.linalg.inv(pm.find_hessian(mean_q, vars=[a, b, sigma]))
stds = np.sqrt(np.diagonal(cov_q))
print('means: ', means.round(3))
print('stds: ', stds.round(3))
varnames = ['a', 'b', 'sigma']
pm.summary(trace_8_5, varnames, kind='stats').round(3)
d_a = d[d['cont_africa']==1]
d_na = d[d['cont_africa']==0]
dd_a = dd[dd['cont_africa']==1]
dd_na = dd[dd['cont_africa']==0]
rugged_seq = np.linspace(-0.1, 1.1, 30)
mu_a = np.apply_along_axis(
lambda x: trace_8_5['a'][:, 1] + trace_8_5['b'][:, 1] * x,
axis=1, arr=rugged_seq[:, np.newaxis])
mu_mean_a = mu_a.mean(axis=1)
mu_PI_a = np.quantile(mu_a, [0.055, 0.945], axis=1)
mu_na = np.apply_along_axis(
lambda x: trace_8_5['a'][:, 0] + trace_8_5['b'][:, 0] * x,
axis=1, arr=rugged_seq[:, np.newaxis])
mu_mean_na = mu_na.mean(axis=1)
mu_PI_na = np.quantile(mu_na, [0.055, 0.945], axis=1)
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8,3))
ax1.plot(d_a['rugged_std'], d_a['log_gdp_std'], 'C0o')
ax1.plot(rugged_seq, mu_mean_a, 'C0')
ax1.fill_between(rugged_seq, mu_PI_a[0], mu_PI_a[1], color='C0', alpha=0.5)
ax1.set_title('African Nations')
ax1.set_ylabel('log GDP year 2000', fontsize=14);
ax1.set_xlabel('Terrain Ruggedness Index', fontsize=14)
ax2.plot(d_na['rugged_std'], d_na['log_gdp_std'], 'ko')
ax2.plot(rugged_seq, mu_mean_na, 'k')
ax2.fill_between(rugged_seq, mu_PI_na[0], mu_PI_na[1], color='k', alpha=0.5)
ax2.set_title('Non-African Nations')
ax2.set_ylabel('log GDP year 2000', fontsize=14)
ax2.set_xlabel('Terrain Ruggedness Index', fontsize=14);
###Output
_____no_output_____
###Markdown
Without Seychelles
###Code
with pm.Model() as model_8H3:
a = pm.Normal('a', mu=1, sd=0.1, shape=dd['cont_africa'].nunique())
b = pm.Normal('b', mu=0, sd=0.3, shape=dd['cont_africa'].nunique())
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a[dd['cont_africa'].values] + b[dd['cont_africa'].values] *
(dd.rugged_std - 0.215))
log_gdp = pm.Normal('log_gdp', mu, sigma, observed=dd.log_gdp_std)
trace_8H3 = pm.sample(1000, tune=1000)
mean_q = pm.find_MAP()
means = np.concatenate([mean_q[k].reshape(-1) for k in ['a', 'b', 'sigma']])
cov_q = np.linalg.inv(pm.find_hessian(mean_q, vars=[a, b, sigma]))
stds = np.sqrt(np.diagonal(cov_q))
print('means: ', means.round(3))
print('stds: ', stds.round(3))
varnames = ['a', 'b', 'sigma']
pm.summary(trace_8H3, varnames, kind='stats').round(3)
rugged_seq = np.linspace(-0.1, 1.1, 30)
mu_a = np.apply_along_axis(
lambda x: trace_8H3['a'][:, 1] + trace_8H3['b'][:, 1] * x,
axis=1, arr=rugged_seq[:, np.newaxis])
mu_mean_a = mu_a.mean(axis=1)
mu_PI_a = np.quantile(mu_a, [0.055, 0.945], axis=1)
mu_na = np.apply_along_axis(
lambda x: trace_8H3['a'][:, 0] + trace_8H3['b'][:, 0] * x,
axis=1, arr=rugged_seq[:, np.newaxis])
mu_mean_na = mu_na.mean(axis=1)
mu_PI_na = np.quantile(mu_na, [0.055, 0.945], axis=1)
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8,3))
ax1.plot(dd_a['rugged_std'], dd_a['log_gdp_std'], 'C0o')
ax1.plot(rugged_seq, mu_mean_a, 'C0')
ax1.fill_between(rugged_seq, mu_PI_a[0], mu_PI_a[1], color='C0', alpha=0.5)
ax1.set_title('African Nations')
ax1.set_ylabel('log GDP year 2000', fontsize=14);
ax1.set_xlabel('Terrain Ruggedness Index', fontsize=14)
ax2.plot(dd_na['rugged_std'], dd_na['log_gdp_std'], 'ko')
ax2.plot(rugged_seq, mu_mean_na, 'k')
ax2.fill_between(rugged_seq, mu_PI_na[0], mu_PI_na[1], color='k', alpha=0.5)
ax2.set_title('Non-African Nations')
ax2.set_ylabel('log GDP year 2000', fontsize=14)
ax2.set_xlabel('Terrain Ruggedness Index', fontsize=14);
###Output
_____no_output_____
###Markdown
Compare WAIC
###Code
with pm.Model() as model_1:
a = pm.Normal('a', mu=1, sd=0.1)
b = pm.Normal('b', mu=0, sd=0.3)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic('mu', a + b * (dd.rugged_std - 0.215))
log_gdp = pm.Normal('log_gdp', mu, sigma, observed=dd.log_gdp_std)
trace_1 = pm.sample(1000, tune=1000)
with pm.Model() as model_2:
a = pm.Normal('a', mu=1, sd=0.1, shape=dd['cont_africa'].nunique())
b = pm.Normal('b', mu=0, sd=0.3)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a[dd['cont_africa'].values] + b * (dd.rugged_std - 0.215))
log_gdp = pm.Normal('log_gdp', mu, sigma, observed=dd.log_gdp_std)
trace_2 = pm.sample(1000, tune=1000)
comp_df = pm.compare({'model1': trace_1, 'model2': trace_2, 'model3': trace_8H3})
comp_df
###Output
/Users/kani/Documents/KatrinaLand/explain-the-mark-experimentation/venv/lib/python3.7/site-packages/arviz/stats/stats.py:1196: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail.
See http://arxiv.org/abs/1507.04544 for details
"For one or more samples the posterior variance of the log predictive "
###Markdown
Weighted prediction
###Code
rugged_seq = np.linspace(-0.1, 1.1, 30)
# WAIC-weight (pseudo-BMA) averaged predictions for African nations; assumes
# comp_df rows are ranked model3, model2, model1 as in the compare output above.
mu_a = np.apply_along_axis(
    lambda x: comp_df.weight[0] *
    (trace_8H3['a'][:, 1] + trace_8H3['b'][:, 1] * x) + comp_df.weight[1] *
    (trace_2['a'][:, 1] + trace_2['b'] * x) + comp_df.weight[2] *
    (trace_1['a'] + trace_1['b'] * x),
    axis=1,
    arr=rugged_seq[:, np.newaxis])
mu_mean_a = mu_a.mean(axis=1)
mu_PI_a = np.quantile(mu_a, [0.055, 0.945], axis=1)
mu_na = np.apply_along_axis(
lambda x: comp_df.weight[0] *
(trace_8H3['a'][:, 0] + trace_8H3['b'][:, 0] * x) + comp_df.weight[1] *
(trace_2['a'][:, 0] + trace_2['b'] * x) + comp_df.weight[2] *
(trace_1['a'] + trace_1['b'] * x),
axis=1,
arr=rugged_seq[:, np.newaxis])
mu_mean_na = mu_na.mean(axis=1)
mu_PI_na = np.quantile(mu_na, [0.055, 0.945], axis=1)
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8,3))
ax1.plot(dd_a['rugged_std'], dd_a['log_gdp_std'], 'C0o')
ax1.plot(rugged_seq, mu_mean_a, 'C0')
ax1.fill_between(rugged_seq, mu_PI_a[0], mu_PI_a[1], color='C0', alpha=0.5)
ax1.set_title('African Nations')
ax1.set_ylabel('log GDP year 2000', fontsize=14);
ax1.set_xlabel('Terrain Ruggedness Index', fontsize=14)
ax2.plot(dd_na['rugged_std'], dd_na['log_gdp_std'], 'ko')
ax2.plot(rugged_seq, mu_mean_na, 'k')
ax2.fill_between(rugged_seq, mu_PI_na[0], mu_PI_na[1], color='k', alpha=0.5)
ax2.set_title('Non-African Nations')
ax2.set_ylabel('log GDP year 2000', fontsize=14)
ax2.set_xlabel('Terrain Ruggedness Index', fontsize=14);
###Output
_____no_output_____
###Markdown
8H4
###Code
d = pd.read_csv('https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/nettle.csv', sep=';')
# Response: log languages per capita (note: scaled by its mean rather than mean-subtracted)
d['lang.per.cap.log'] = np.log(d['num.lang'] / d['k.pop'])
d['lang.per.cap.log.cent'] = d['lang.per.cap.log'] / d['lang.per.cap.log'].mean()
# Predictors: min-max scale to [0, 1], then center on the mean
d['area.log'] = np.log(d['area'])
d['area.log.cent'] = (d['area.log'] - d['area.log'].min()) / (
    d['area.log'].max() - d['area.log'].min())
d['area.log.cent'] = d['area.log.cent'] - d['area.log.cent'].mean()
d['mean.growing.season.cent'] = (
    d['mean.growing.season'] - d['mean.growing.season'].min()) / (
    d['mean.growing.season'].max() - d['mean.growing.season'].min())
d['mean.growing.season.cent'] = d['mean.growing.season.cent'] - d['mean.growing.season.cent'].mean()
d['sd.growing.season.cent'] = (
    d['sd.growing.season'] - d['sd.growing.season'].min()) / (
    d['sd.growing.season'].max() - d['sd.growing.season'].min())
d['sd.growing.season.cent'] = d['sd.growing.season.cent'] - d['sd.growing.season.cent'].mean()
with pm.Model() as model_1:
a = pm.Normal('a', mu=1, sd=0.1)
bA = pm.Normal('bA', mu=0, sd=0.3)
bM = pm.Normal('bM', mu=0, sd=0.3)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a + bA * d['area.log.cent'] + bM * d['mean.growing.season.cent'])
y = pm.Normal('y', mu, sigma, observed=d['lang.per.cap.log.cent'])
trace_1 = pm.sample(1000, tune=1000)
pm.summary(trace_1, ['a', 'bA', 'bM'], kind='stats').round(3)
with pm.Model() as model_2:
a = pm.Normal('a', mu=1, sd=0.1)
bA = pm.Normal('bA', mu=0, sd=0.3)
bS = pm.Normal('bS', mu=0, sd=0.3)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a + bA * d['area.log.cent'] + bS * d['sd.growing.season.cent'])
y = pm.Normal('y', mu, sigma, observed=d['lang.per.cap.log.cent'])
trace_2 = pm.sample(1000, tune=1000)
pm.summary(trace_2, ['a', 'bA', 'bS'], kind='stats').round(3)
with pm.Model() as model_3:
a = pm.Normal('a', mu=1, sd=0.1)
bA = pm.Normal('bA', mu=0, sd=0.3)
bM = pm.Normal('bM', mu=0, sd=0.3)
bS = pm.Normal('bS', mu=0, sd=0.3)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a + bA * d['area.log.cent'] +
bM * d['mean.growing.season.cent'] + bS * d['sd.growing.season.cent'])
y = pm.Normal('y', mu, sigma, observed=d['lang.per.cap.log.cent'])
trace_3 = pm.sample(1000, tune=1000)
pm.summary(trace_3, ['a', 'bA', 'bM', 'bS'], kind='stats').round(3)
with pm.Model() as model_4:
a = pm.Normal('a', mu=1, sd=0.1)
bA = pm.Normal('bA', mu=0, sd=0.3)
bM = pm.Normal('bM', mu=0, sd=0.3)
bS = pm.Normal('bS', mu=0, sd=0.3)
bMS = pm.Normal('bMS', mu=0, sd=0.3)
sigma = pm.Exponential('sigma', 1)
mu = pm.Deterministic(
'mu', a + bA * d['area.log.cent'] +
bM * d['mean.growing.season.cent'] + bS * d['sd.growing.season.cent'] +
bMS * d['mean.growing.season.cent'] * d['sd.growing.season.cent'])
y = pm.Normal('y', mu, sigma, observed=d['lang.per.cap.log.cent'])
trace_4 = pm.sample(1000, tune=1000)
pm.summary(trace_4, ['a', 'bA', 'bM', 'bS', 'bMS'], kind='stats').round(3)
###Output
_____no_output_____
###Markdown
Compare WAIC
###Code
comp_df = pm.compare({'mean': trace_1, 'sd': trace_2, 'mean + sd': trace_3, 'mean * sd': trace_4})
comp_df
###Output
/Users/kani/Documents/KatrinaLand/explain-the-mark-experimentation/venv/lib/python3.7/site-packages/arviz/stats/stats.py:1196: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail.
See http://arxiv.org/abs/1507.04544 for details
"For one or more samples the posterior variance of the log predictive "
###Markdown
Plot posterior with interaction
###Code
d['mean.growing.season.cent'].hist()
d['sd.growing.season.cent'].hist()
seq_s = np.linspace(-0.3, 0.7, 25)
f, axs = plt.subplots(1, 3, sharey=True, figsize=(12, 3))
for ax, m in zip(axs.flat, [-0.4, 0, 0.4]):
mu = np.apply_along_axis(lambda x: trace_4['a'] + trace_4['bM'] * m +
trace_4['bS'] * x + trace_4['bMS'] * m * x,
axis=1,
arr=seq_s[:, np.newaxis])
mu_mean = mu.mean(1)
mu_PI = np.quantile(mu, [0.055, 0.945], axis=1)
ax.plot(seq_s, mu_mean, 'k')
ax.plot(seq_s, mu_PI[0], 'k--')
ax.plot(seq_s, mu_PI[1], 'k--')
    ax.set_ylabel('lang.per.cap.log.cent')
ax.set_xlabel('sd.growing.season')
ax.set_title(f'mean.growing.season = {m}')
###Output
_____no_output_____ |
Data-Analysis/Data_Analysis.ipynb | ###Markdown
Number of petitions per category with at least i recommendations
###Code
i = 0
# fied_names_num is defined in a later cell; run that cell before this one.
fied_names_num(df[df["count"]>=i],names)
alls = {}
for name in names:
alls[name]=[]
alls
###Output
_____no_output_____
###Markdown
Saving the number of petitions for each recommendation-count threshold
###Code
def fied_names_num(df, names):
    # Count petitions per category; the final entry is the total across all categories.
    lists = []
    for name in names:
        if(name=="all"):
            break
        #print(name, " : ", len(df[df["category"]==name]["category"]))
        lists.append(len(df[df["category"]==name]["category"]))
    #print("all : ",len(df["category"]))
    lists.append(len(df["category"]))
    return lists
for i in range(400000):
lists = fied_names_num(df[df["count"]>=i],names)
for j,name in enumerate(names):
alls[name].append(lists[j])
len(alls["농산어촌"])
df2 = pd.DataFrame(alls)
df2
###Output
_____no_output_____
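###Markdown
Recomputing the counts for every threshold with a full DataFrame filter is slow. Below is a vectorized equivalent, given as a sketch: it assumes the same `df` (with 'count' and 'category' columns) and the same `names` list, whose final entry "all" stands for the total across categories.
###Code
import numpy as np
# For each threshold i, count petitions with at least i recommendations,
# using one sort and one binary search per category instead of 400,000 filters.
thresholds = np.arange(400000)
fast = {}
for name in names:
    if name == "all":
        counts = np.sort(df["count"].values)
    else:
        counts = np.sort(df.loc[df["category"] == name, "count"].values)
    # searchsorted gives the number of values < i, so len - that = values >= i
    fast[name] = len(counts) - np.searchsorted(counts, thresholds, side="left")
###Output
_____no_output_____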
###Markdown
Saving the results to file
###Code
df2.to_csv("participants.csv",index=False)
df2.to_csv("participants_index.csv")
###Output
_____no_output_____
###Markdown
Old exploratory code
###Code
datas
# Legacy exploration (kept as-is): wrap the flat list of counts into per-category lists.
classified_data = []
for data in datas[0]:
classified_data.append([data])
for i,data in enumerate(datas[1:]):
while(1):
if i < len(datas[0]):
break
if i >= len(datas[0]):
i-=17
classified_data[i].extend([data[i]])
len(datas[0])
classified_data
classified_data[16].append(0)
len(classified_data[13])
len(a)
len(classified_data[12])
dflist = {idx:val for idx, val in zip(names,classified_data)}
dflist["정치개혁"]
dflist
for dd in dflist:
print(dd)
for dd in dflist.values():
print(len(dd))
dd[17]
dflist["미래"]
df = pd.DataFrame({idx:val for idx, val in zip(names,classified_data)})
df
df.to_csv("allss.csv", index=False)
df = pd.DataFrame({"미래":dflist["미래"]})
df
df = pd.DataFrame(dflist)
df
alls = fied_names_num(df, names)
num30 = fied_names_num(df[df["count"]>30],names)
num100 = fied_names_num(df[df["count"]>100],names)
num100 = fied_names_num(df[df["count"]>=999],names)
num30[0]/num100[0]
num100[1]/num30[1]
num30[0]/alls[0]
num30[1]/alls[1]
(num30[0]-num100[0])/alls[0]
(num30[1]-num100[1])/alls[1]
###Output
_____no_output_____ |
001_Python_Functions.ipynb | ###Markdown
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions)**

Python Functions: In this class, you'll learn about functions: what a function is, the syntax, components, and types of functions. Also, you'll learn to create a function in Python.

What is a function in Python? In Python, a **function is a block of organized, reusable (DRY: Don’t Repeat Yourself) code with a name** that is used to perform a single, specific task. It can take arguments and return a value. Functions help break our program into smaller, modular chunks. As our program grows larger and larger, functions make it more organized and manageable. Furthermore, they improve efficiency and reduce errors because of the reusability of code.

Types of Functions: Python supports two types of functions:
1. **[Built-in](https://github.com/milaan9/04_Python_Functions/tree/main/002_Python_Functions_Built_in)** function
2. **[User-defined](https://github.com/milaan9/04_Python_Functions/blob/main/Python_User_defined_Functions.ipynb)** function

1. **Built-in function**: The functions which come along with Python itself are called built-in or predefined functions. Some of them are: **`range()`**, **`print()`**, **`input()`**, **`type()`**, **`id()`**, **`eval()`** etc.
**Example:** The Python **`range()`** function generates the immutable sequence of numbers starting from the given start integer up to the stop integer.
```python
>>> for i in range(1, 10):
>>>     print(i, end=' ')
1 2 3 4 5 6 7 8 9
```
2. **User-defined function**: Functions which are created by the programmer explicitly according to the requirement are called user-defined functions.

**Syntax:**
```python
def function_name(parameter1, parameter2):
    """docstring"""
    # function body: write some action
    return value
```

Defining a Function:
1. **`def`** is a keyword that marks the start of the function header.
2. **`function_name`** uniquely identifies the function. Function naming follows the same **[rules of writing identifiers in Python](https://github.com/milaan9/01_Python_Introduction/blob/main/005_Python_Keywords_and_Identifiers.ipynb)**.
3. **`parameter`** is the value passed to the function. Parameters are optional.
4. **`:`** (colon) marks the end of the function header.
5. **`function body`** is a block of code that performs some task, and all the statements in the **`function body`** must have the same **indentation** level (usually 4 spaces).
6. **"""docstring"""** is a documentation string used to describe what the function does.
7. **`return`** is a keyword used to return a value from the function. A return statement with no arguments is the same as returning **`None`**.

>**Note:** While defining a function, we use two keywords, **`def`** (mandatory) and **`return`** (optional).

**Example:**
```python
>>> def add(num1,num2):           # Function name: 'add', Parameters: 'num1', 'num2'
>>>     print("Number 1: ", num1) # Function body
>>>     print("Number 2: ", num2) # Function body
>>>     addition = num1 + num2    # Function body
>>>     return addition           # return value
>>> res = add(2, 4)               # Function call
>>> print("Result: ", res)
```

Defining a function without any parameters
###Code
# Example 1:
def greet():
print("Welcome to Data Science")
# call function using its name
greet()
###Output
Welcome to Data Science
###Markdown
Defining a function with parameters
###Code
# Example 1:
def course(name, course_name):
print("Hello", name, "Welcome to Data Science")
print("Your course name is", course_name)
course('Arthur', 'Python') # call function
# Example 2: Greeting
def greet(name):
"""
This function greets to the person passed in as a parameter
"""
print("Hello, " + name + ". Good morning!") # No output!
###Output
_____no_output_____
###Markdown
Defining a function with parameters and `return` value
###Code
# Example 1:
def calculator(a, b):
add = a + b
return add # return the addition
result = calculator(30, 6) # call function & take return value in variable
print("Addition :", result) # Output Addition : 36
###Output
Addition : 36
###Markdown
How to call a function in Python? Once we have defined a function, we can call it from another function, a program, or even the Python prompt. To call a function we simply type the function name with appropriate parameters.
###Code
greet('Alan')
###Output
Hello, Alan. Good morning!
###Markdown
>**Note:** Try running the above code in the Python program with the function definition to see the output.
###Code
# Example 1:
def wish(name):
"""
This function wishes to the person passed in as a parameter
"""
print("Happy birthday, " + name + ". Hope you have a wonderful day!")
wish('Bill')
# Example 2:
def swap(x, y):
"""
This function swaps the value of two variables
"""
temp = x; # value of x will go inside temp
x = y; # value of y will go inside x
y = temp; # value of temp will go inside y
print("value of x is:", x)
print("value of y is:", y)
return # "return" is optional
x = 6
y = 9
swap(x, y) #call function
# Example 3:
def even_odd(n):
    if n % 2 == 0: # check whether the number is even or odd
        print(n, 'is an even number')
    else:
        print(n, 'is an odd number')
even_odd(9) # calling function by its name
###Output
9 is an odd number
###Markdown
Docstrings: The first string after the function header is called the **docstring** and is short for documentation string. It is a descriptive text (like a comment) written by a programmer to let others know what a block of code does. Although **optional**, documentation is a good programming practice. Unless you can remember what you had for dinner last week, always document your code. It is declared using triple single quotes **`''' '''`** or triple double quotes **`""" """`** so that a docstring can extend over multiple lines. We can access a docstring using the attribute **`__doc__`** of any object like a list, tuple, dict, or user-defined function, etc. In the above example, we have a docstring immediately below the function header.
###Code
print(greet.__doc__)
###Output
This function greets to the person passed in as a parameter
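###Markdown
As mentioned above, the same **`__doc__`** attribute also works for built-in objects; a quick illustration:
###Code
# Docstrings of built-ins are accessed the same way as user-defined ones.
print(len.__doc__)
print(list.append.__doc__)
###Output
_____no_output_____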
###Markdown
To learn more about docstrings in Python, visit **[Python Docstrings](https://github.com/milaan9/04_Python_Functions/blob/main/Python_Docstrings.ipynb)**.

Function `return` Statement: In Python, to return a value from a function, a **`return`** statement is used. It returns the value of the expression following the `return` keyword.

**Syntax:**
```python
def fun():
    statement-1
    statement-2
    statement-3
    .
    .
    return [expression]
```

The **`return`** value is simply the outcome of the function.
* The **`return`** statement ends the function execution.
* For a function, it is not mandatory to return a value.
* If a **`return`** statement is used without any expression, then **`None`** is returned.
* The **`return`** statement should be inside of the function block.

Return Single Value
###Code
print(greet("Cory"))
###Output
Hello, Cory. Good morning!
None
###Markdown
Here, **`None`** is the returned value since **`greet()`** directly prints the name and no **`return`** statement is used.
###Code
# Example 1:
def absolute_value(num):
"""This function returns the absolute
value of the entered number"""
if num >= 0:
return num
else:
return -num
print(absolute_value(2))
print(absolute_value(-4))
# Example 2:
def sum(a,b): # Function 1
print("Adding the two values")
print("Printing within Function")
print(a+b)
return a+b
def msg(): # Function 2
print("Hello")
return
total=sum(10,20)
print('total : ',total)
msg()
print("Rest of code")
# Example 3:
def is_even(list1):
even_num = []
for n in list1:
if n % 2 == 0:
even_num.append(n)
# return a list
return even_num
# Pass list to the function
even_num = is_even([2, 3, 46, 63, 72, 83, 90, 19])
print("Even numbers are:", even_num)
###Output
Even numbers are: [2, 46, 72, 90]
###Markdown
Return Multiple Values: You can also return multiple values from a function. Use the return statement, separating each expression with a comma.
###Code
# Example 1:
def arithmetic(num1, num2):
add = num1 + num2
sub = num1 - num2
multiply = num1 * num2
division = num1 / num2
# return four values
return add, sub, multiply, division
a, b, c, d = arithmetic(10, 2) # read four return values in four variables
print("Addition: ", a)
print("Subtraction: ", b)
print("Multiplication: ", c)
print("Division: ", d)
###Output
Addition: 12
Subtraction: 8
Multiplication: 20
Division: 5.0
###Markdown
Function `pass` Statement: In Python, **`pass`** is a keyword that does nothing. Sometimes there is a situation where we need to define a syntactically empty block. We can define that block using the **`pass`** keyword. When the interpreter finds a **`pass`** statement in the program, it performs no operation.
###Code
# Example 1:
def addition(num1, num2):
    # Implementation of addition function in coming release
# Pass statement
pass
addition(10, 2)
###Output
_____no_output_____
###Markdown
Defining a function without any parameters: Functions can be declared without parameters.
###Code
# Example 1:
def greet():
print("Welcome to Python for Data Science")
# call function using its name
greet()
# Example 2:
def add_two_numbers ():
num_one = 3
num_two = 6
total = num_one + num_two
print(total)
add_two_numbers() # calling a function
# Example 3:
def generate_full_name ():
first_name = 'Milaan'
last_name = 'Parmar'
space = ' '
full_name = first_name + space + last_name
print(full_name)
generate_full_name () # calling a function
###Output
Milaan Parmar
###Markdown
Defining a function without parameters and a `return` value: Functions can also return values. If a function does not have a **`return`** statement, the value of the function is `None`. Let us rewrite the above functions using **`return`**. From now on, we get a value from a function when we call the function and print it.
###Code
# Example 1:
def add_two_numbers ():
num_one = 3
num_two = 6
total = num_one + num_two
return total
print(add_two_numbers())
# Example 2:
def generate_full_name ():
first_name = 'Milaan'
last_name = 'Parmar'
space = ' '
full_name = first_name + space + last_name
return full_name
print(generate_full_name())
###Output
Milaan Parmar
###Markdown
Defining a function with parameters: In a function we can pass different data types (number, string, boolean, list, tuple, dictionary or set) as parameters. Single parameter: if our function takes a parameter, we should call our function with an argument.
###Code
# Example 1: Greeting
def greet(name):
"""
This function greets to the person passed in as a parameter
"""
print("Hello, " + name + ". Good morning!") # No output!
# Example 2:
def sum_of_numbers(n):
total = 0
for i in range(n+1):
total+=i
print(total)
print(sum_of_numbers(10)) # 55
print(sum_of_numbers(100)) # 5050
###Output
55
None
5050
None
###Markdown
Two parameters: A function may or may not have parameters, and it may also have two or more. If our function takes parameters, we should call it with arguments.
###Code
# Example 1:
def course(name, course_name):
print("Hello", name, "Welcome to Python for Data Science")
print("Your course name is", course_name)
course('Arthur', 'Python') # call function
###Output
Hello Arthur Welcome to Python for Data Science
Your course name is Python
###Markdown
Defining a function with parameters and `return` value
###Code
# Example 1:
def greetings (name): # single parameter
message = name + ', welcome to Python for Data Science'
return message
print(greetings('Milaan'))
# Example 2:
def add_ten(num): # single parameter
ten = 10
return num + ten
print(add_ten(90))
# Example 3:
def square_number(x): # single parameter
return x * x
print(square_number(3))
# Example 4:
def area_of_circle (r): # single parameter
PI = 3.14
area = PI * r ** 2
return area
print(area_of_circle(10))
# Example 5:
def calculator(a, b): # two parameter
add = a + b
return add # return the addition
result = calculator(30, 6) # call function & take return value in variable
print("Addition :", result) # Output Addition : 36
# Example 6:
def generate_full_name (first_name, last_name): # two parameter
space = ' '
full_name = first_name + space + last_name
return full_name
print('Full Name: ', generate_full_name('Milaan','Parmar'))
# Example 7:
def sum_two_numbers (num_one, num_two): # two parameter
sum = num_one + num_two
return sum
print('Sum of two numbers: ', sum_two_numbers(1, 9))
# Example 8:
def calculate_age (current_year, birth_year): # two parameter
age = current_year - birth_year
return age;
print('Age: ', calculate_age(2021, 1819))
# Example 9:
def weight_of_object (mass, gravity): # two parameter
weight = str(mass * gravity)+ ' N' # the value has to be changed to a string first
return weight
print('Weight of an object in Newtons: ', weight_of_object(100, 9.81))
###Output
Weight of an object in Newtons: 981.0 N
###Markdown
Function `return` Statement: In Python, to return a value from a function, a **`return`** statement is used. It returns the value of the expression following the `return` keyword.

**Syntax:**
```python
def fun():
    statement-1
    statement-2
    statement-3
    .
    .
    return [expression]
```

The **`return`** value is simply the outcome of the function.
* The **`return`** statement ends the function execution.
* For a function, it is not mandatory to return a value.
* If a **`return`** statement is used without any expression, then **`None`** is returned.
* The **`return`** statement should be inside of the function block.

Return Single Value
###Code
print(greet("Cory"))
###Output
Hello, Cory. Good morning!
None
###Markdown
Here, **`None`** is the returned value since **`greet()`** directly prints the name and no **`return`** statement is used. Passing Arguments with Key and Value: If we pass the arguments with key and value, the order of the arguments does not matter.
###Code
# Example 1:
def print_fullname(firstname, lastname):
space = ' '
full_name = firstname + space + lastname
print(full_name)
print(print_fullname(firstname = 'Milaan', lastname = 'Parmar'))
# Example 2:
def add_two_numbers (num1, num2):
total = num1 + num2
print(total)
print(add_two_numbers(num2 = 3, num1 = 2)) # Order does not matter
###Output
5
None
###Markdown
If we do not **`return`** a value from a function, then our function returns **`None`** by default. To return a value from a function we use the keyword **`return`** followed by the variable we are returning. We can return any kind of data type from a function.
###Code
# Example 1: with return statement
def print_fullname(firstname, lastname):
space = ' '
full_name = firstname + space + lastname
return full_name
print(print_fullname(firstname = 'Milaan', lastname = 'Parmar'))
# Example 2: with return statement
def add_two_numbers (num1, num2):
total = num1 + num2
return total
print(add_two_numbers(num2 = 3, num1 = 2)) # Order does not matter
# Example 3:
def absolute_value(num):
"""This function returns the absolute
value of the entered number"""
if num >= 0:
return num
else:
return -num
print(absolute_value(2))
print(absolute_value(-4))
# Example 4:
def sum(a,b): # Function 1
print("Adding the two values")
print("Printing within Function")
print(a+b)
return a+b
def msg(): # Function 2
print("Hello")
return
total=sum(10,20)
print('total : ',total)
msg()
print("Rest of code")
# Example 5:
def is_even(list1):
even_num = []
for n in list1:
if n % 2 == 0:
even_num.append(n)
# return a list
return even_num
# Pass list to the function
even_num = is_even([2, 3, 46, 63, 72, 83, 90, 19])
print("Even numbers are:", even_num)
###Output
Even numbers are: [2, 46, 72, 90]
###Markdown
Return Multiple Values: You can also return multiple values from a function. Use the return statement, separating each expression with a comma.
###Code
# Example 1:
def arithmetic(num1, num2):
add = num1 + num2
sub = num1 - num2
multiply = num1 * num2
division = num1 / num2
# return four values
return add, sub, multiply, division
a, b, c, d = arithmetic(10, 2) # read four return values in four variables
print("Addition: ", a)
print("Subtraction: ", b)
print("Multiplication: ", c)
print("Division: ", d)
###Output
Addition: 12
Subtraction: 8
Multiplication: 20
Division: 5.0
###Markdown
Return Boolean Values
###Code
# Example 1:
def is_even (n):
if n % 2 == 0:
print('even')
return True # return stops further execution of the function, similar to break
return False
print(is_even(10)) # True
print(is_even(7)) # False
###Output
even
True
False
###Markdown
Return a List
###Code
# Example 1:
def find_even_numbers(n):
evens = []
for i in range(n + 1):
if i % 2 == 0:
evens.append(i)
return evens
print(find_even_numbers(10))
###Output
[0, 2, 4, 6, 8, 10]
###Markdown
How to call a function in Python? Once we have defined a function, we can call it from another function, a program, or even the Python prompt. To call a function we simply type the function name with appropriate parameters.
###Code
greet('Alan')
###Output
Hello, Alan. Good morning!
###Markdown
>**Note:** Try running the above code in the Python program with the function definition to see the output.
###Code
# Example 1:
def wish(name):
"""
This function wishes to the person passed in as a parameter
"""
print("Happy birthday, " + name + ". Hope you have a wonderful day!")
wish('Bill')
# Example 2:
def greetings (name = 'Clark'):
message = name + ', welcome to Python for Data Science'
return message
print(greetings())
print(greetings('Milaan'))
# Example 3:
def generate_full_name (first_name = 'Milaan', last_name = 'Parmar'):
space = ' '
full_name = first_name + space + last_name
return full_name
print(generate_full_name())
print(generate_full_name('Ethan','Hunt'))
# Example 4:
def calculate_age (birth_year,current_year = 2021):
age = current_year - birth_year
return age;
print('Age: ', calculate_age(1821))
# Example 5:
def swap(x, y):
"""
This function swaps the value of two variables
"""
temp = x; # value of x will go inside temp
x = y; # value of y will go inside x
y = temp; # value of temp will go inside y
print("value of x is:", x)
print("value of y is:", y)
return # "return" is optional
x = 6
y = 9
swap(x, y) #call function
# Example 6:
def even_odd(n):
    if n % 2 == 0: # check whether the number is even or odd
        print(n, 'is an even number')
    else:
        print(n, 'is an odd number')
even_odd(9) # calling function by its name
# Example 7:
def weight_of_object (mass, gravity = 9.81):
weight = str(mass * gravity)+ ' N' # the value has to be changed to string first
return weight
print('Weight of an object in Newtons: ', weight_of_object(100)) # 9.81 - average gravity on Earth's surface
print('Weight of an object in Newtons: ', weight_of_object(100, 1.62)) # gravity on the surface of the Moon
###Output
Weight of an object in Newtons: 981.0 N
Weight of an object in Newtons: 162.0 N |
tutorial/westeros/westeros_renewable_resource.ipynb | ###Markdown
Westeros Tutorial - Adding representation of renewables (part 3/3): Introducing `renewable_resource_constraints`. This tutorial, which demonstrates how to apply various model features to provide a more realistic representation of renewable energy integration in the energy system, is comprised of three parts. Previously, we introduced constraints on [`firm capacity`](https://docs.messageix.org/en/stable/model/MESSAGE/model_core.html?highlight=FIRM_CAPACITY_PROVISIONequation-firm-capacity-provision) and [`flexible generation`](https://docs.messageix.org/en/stable/model/MESSAGE/model_core.html?highlight=flexibilityequation-system-flexibility-constraint). In the third part we will show you how to introduce renewable resource potentials. Up until now, `wind_ppl` activity was unrestricted. In order to reflect the fact that there are limited wind potentials within a given region, and that these differ in quality, we will introduce [`renewable_potentials` and `renewable_capacity_factors`](https://docs.messageix.org/en/stable/model/MESSAGE/model_core.html?highlight=renewableconstraints-representing-renewable-integration) for wind. Further information can be found in https://doi.org/10.1016/j.esr.2013.01.001 (*Sullivan et al., 2013*). **Pre-requisites**: - You have the *MESSAGEix* framework installed and working - You have run the Westeros scenario which adds emission taxes (``westeros_emissions_taxes.ipynb``) and solved it successfully. Online documentation: The full framework documentation is available at [https://docs.messageix.org](https://docs.messageix.org)
###Code
import pandas as pd
import ixmp
import message_ix
from message_ix.utils import make_df
%matplotlib inline
mp = ixmp.Platform()
###Output
_____no_output_____
###Markdown
Load existing and clone to new scenarioWe load the existing scenario '*carbon_tax*' and clone it to a new scenario '*renewable_potential*' to which we will apply the `renewable_resource_constraints` constraint
###Code
model = 'Westeros Electrified'
base = message_ix.Scenario(mp, model=model, scenario='carbon_tax')
scen = base.clone(model, 'renewable_potential', 'illustration of renewable_resource_constraint formulation', keep_solution=False)
scen.check_out()
###Output
_____no_output_____
###Markdown
Retrieve parametersWe will retrieve those parameters necessary to perform subsequent additions of parameters
###Code
year_df = scen.vintage_and_active_years()
vintage_years, act_years = year_df['year_vtg'], year_df['year_act']
model_horizon = scen.set('year')
country = 'Westeros'
###Output
_____no_output_____
###Markdown
`renewable_resource_constraints` - Describing the renewable resource potentials: From the previous tutorials, we know based on the results that in 720 wind capacity reaches over 150 GWa. We will therefore define 4 wind potential categories which in total will provide 200 GWa, yet the quality of these potentials will vary substantially from the current assumptions, where the capacity factor for `wind_ppl` has been assumed to be 1, meaning that the installed `wind_ppl` capacity can operate 8760 hours per year, i.e., 100% of the year. Depending on the region, high-quality on-shore wind potentials result in capacity factors around 35%, yet the majority of the potentials will lie below this value. Therefore, 4 resource categories will be introduced:

| Resource Category | Potential \[GWa\] | Capacity Factor \[%\] |
| ----------------- | ----------------- | --------------------- |
| c1 | 100 | 15 |
| c2 | 50 | 20 |
| c3 | 25 | 25 |
| c4 | 25 | 30 |

The figure below illustrates the potential categories as listed in the above table. The capacity factor of the `wind_ppl` will remain unchanged and will be reflected in the parametrization of the `renewable_resources`. The following steps are required:
1. Add level and commodity: - Specify a new level and commodity which accounts for the wind potentials and which serves as input to the `wind_ppl` - Specify which level is a `level_renewable`
2. Modify the existing renewable technology: - Specify which technology is classified as a `type_tec` renewable (optional) - Modify the input of the `wind_ppl`
3. Add potentials and corresponding capacity factors: - Add grades - Add `renewable_potentials` - Add `renewable_capacity_factor`

1 Define new level and commodity: The level and commodity which we add will allow us to account for potentials for wind
###Code
scen.add_set('level', ['renewable'])
scen.add_set('commodity', ['wind_onshore'])
scen.add_set('level_renewable', ['renewable'])
###Output
_____no_output_____
###Markdown
2.1 Define a new technology category `renewable`: We will add `wind_ppl` to this newly defined `type_tec`. This can be used, for example, to simplify reporting code, where results can be retrieved for technologies within a given set as opposed to specifying individual technologies.
###Code
scen.add_set('type_tec', ['renewable'])
df = pd.DataFrame({'type_tec': ['renewable'],
'technology': ['wind_ppl']})
scen.add_set('cat_tec', df)
###Output
_____no_output_____
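###Markdown
The members of the newly defined category can be checked by retrieving the set again (a quick illustration added here):
###Code
# Show which technologies belong to each type_tec category, including 'renewable'.
scen.set('cat_tec')
###Output
_____no_output_____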
###Markdown
2.2 Add `input` parameter for `wind_ppl`: We will add the parameter `input` for `wind_ppl`, thereby establishing a connection to the newly defined `renewable_potential` categories.
###Code
df = pd.DataFrame({
'node_loc': country,
'technology': 'wind_ppl',
'year_vtg': vintage_years,
'year_act': act_years,
'mode': 'standard',
'node_origin': country,
'commodity': 'wind_onshore',
'level': 'renewable',
'time': 'year',
'time_origin': 'year',
'value': 1,
'unit': '%'})
scen.add_par('input', df)
###Output
_____no_output_____
###Markdown
3.1 Add new resource potential categories: Each renewable potential category is defined as a separate `grade`.
###Code
grades = ['c1', 'c2', 'c3', 'c4']
scen.add_set('grade', grades)
###Output
_____no_output_____
###Markdown
3.2 Add resource potentials: Note that unlike fossil resources, which are finite, renewable resources must be defined for each year.
###Code
# renewable_potential has the following index structure
scen.idx_names('renewable_potential')
idx = pd.MultiIndex.from_product([[country], ['wind_onshore'], grades, ['renewable'], model_horizon, ['GWa']],
names=['node', 'commodity', 'grade', 'level', 'year', 'unit'])
# Each grade's potential is repeated for every model year; sorting in descending
# order matches the grade order of the index: c1=100, c2=50, c3=25, c4=25.
df = pd.DataFrame({'value': sorted([100, 50, 25, 25] * len(model_horizon), reverse=True)}, idx).reset_index()
scen.add_par('renewable_potential', df)
###Output
_____no_output_____
###Markdown
3.3 Add `renewable_capacity_factor`
###Code
# renewable_capacity_factor has the following index structure
scen.idx_names('renewable_capacity_factor')
idx = pd.MultiIndex.from_product([[country], ['wind_onshore'], grades, ['renewable'], model_horizon, ['-']],
names=['node', 'commodity', 'grade', 'level', 'year', 'unit'])
# Ascending sort matches the grade order of the index: c1=0.15, c2=0.20,
# c3=0.25, c4=0.30, each repeated for every model year.
df = pd.DataFrame({'value': sorted([.15, .20, .25, .30] * len(model_horizon))}, idx).reset_index()
scen.add_par('renewable_capacity_factor', df)
###Output
_____no_output_____
###Markdown
Commit and solve
###Code
scen.commit(comment='define parameters for renewable implementation')
scen.set_as_default()
scen.solve()
scen.var('OBJ')['lvl']
###Output
_____no_output_____
###Markdown
Plotting Results
###Code
from message_ix.reporting import Reporter
from message_ix.util.tutorial import prepare_plots
rep_base = Reporter.from_scenario(base)
prepare_plots(rep_base)
rep_scen = Reporter.from_scenario(scen)
prepare_plots(rep_scen)
###Output
_____no_output_____
###Markdown
Activity: When comparing the results of the original scenario without the renewable potentials ('*carbon_tax*') with the results of our newly modified scenario ('*renewable_potential*'), for the same carbon price we can observe that the activity of the `wind_ppl` has substantially decreased. This is because, through adding potentials with corresponding plant factors, the `wind_ppl` has become increasingly economically unattractive and, despite the carbon tax, is not used. Note that the `coal_ppl` still has a plant factor of 1 and no resource constraints; thus, in order to further improve the model, the parameters of the `coal_ppl` would need to be adjusted. Scenario: '*carbon_tax*'
###Code
rep_base.set_filters(t=["coal_ppl", "wind_ppl"])
rep_base.get("plot activity")
###Output
_____no_output_____
###Markdown
Scenario: '*renewable_potential*'
###Code
rep_scen.set_filters(t=["coal_ppl", "wind_ppl"])
rep_scen.get("plot activity")
###Output
_____no_output_____
###Markdown
Capacity: The behavior observed for the activity of the two electricity generation technologies is reflected in the capacity. No further capacity is built for the `wind_ppl`, which is thus phased out by 720. Scenario: '*carbon_tax*'
###Code
rep_base.get("plot capacity")
###Output
_____no_output_____
###Markdown
Scenario: '*renewable_potential*'
###Code
rep_scen.get("plot capacity")
###Output
_____no_output_____
###Markdown
Prices: Especially in the earlier model time periods, the price of electricity, and therefore of light, increases dramatically. The increase in 720 is due to the emission taxes associated with the operation of the `coal_ppl`. Scenario: '*carbon_tax*'
###Code
rep_base.set_filters(t=None, c=["light"])
rep_base.get("plot prices")
###Output
_____no_output_____
###Markdown
Scenario: '*renewable_potential*'
###Code
rep_scen.set_filters(t=None, c=["light"])
rep_scen.get("plot prices")
mp.close_db()
###Output
_____no_output_____ |
character_rnn/character_rnn.ipynb | ###Markdown
Simple RNN: A simple RNN that predicts the next character, based on chapter 8 of Dive into Deep Learning. Preparing the dataset: For the dataset I used 3 books by Verne, contained in the dataset directory as text files. In order to use this dataset for training we need to do the following: load the files into memory, split the strings into tokens (in this case characters), and encode the characters into numbers. First let's load the files.
###Code
%matplotlib inline
import collections
import re
import glob
import random
import torch
import math
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
from torch import nn
from torch.nn import functional as torch_fn
device = "cuda" if torch.cuda.is_available() else "cpu"
DATASET_DIR = "dataset"
dataset_lines = []
for filename in glob.iglob(f'{DATASET_DIR}/*.txt'):
print(f"Loading {filename} ...")
with open(filename, 'r') as f:
lines = f.readlines()
file_lines = [re.sub('[^A-Za-z0-9]+', ' ', line).strip().lower() for line in lines]
dataset_lines.extend(file_lines)
len(dataset_lines)
dataset_lines[:10]
###Output
_____no_output_____
###Markdown
Now we will tokenize and flatten the entire dataset.
###Code
tokenized_dataset = [list(line) for line in dataset_lines]
tokenized_dataset = [token for line in tokenized_dataset for token in line]
len(tokenized_dataset)
print(tokenized_dataset[:100])
###Output
['t', 'h', 'e', ' ', 'p', 'r', 'o', 'j', 'e', 'c', 't', ' ', 'g', 'u', 't', 'e', 'n', 'b', 'e', 'r', 'g', ' ', 'e', 'b', 'o', 'o', 'k', ' ', 'o', 'f', ' ', 'a', 'r', 'o', 'u', 'n', 'd', ' ', 't', 'h', 'e', ' ', 'w', 'o', 'r', 'l', 'd', ' ', 'i', 'n', ' ', '8', '0', ' ', 'd', 'a', 'y', 's', ' ', 'b', 'y', ' ', 'j', 'u', 'l', 'e', 's', ' ', 'v', 'e', 'r', 'n', 'e', 't', 'h', 'i', 's', ' ', 'e', 'b', 'o', 'o', 'k', ' ', 'i', 's', ' ', 'f', 'o', 'r', ' ', 't', 'h', 'e', ' ', 'u', 's', 'e', ' ', 'o']
###Markdown
This is a simple solution for loading text files but it does have a small flaw: since the text files were stitched together, the sequence at the point of stitching will not make sense. However, this only represents a small fraction of the dataset, so it shouldn't have a significant impact on the result. Now let's construct a dictionary that will be used to encode characters into numbers.
###Code
class Vocabulary:
    def __init__(self, tokens):
        # Count token frequencies and assign each unique token an index.
        # (Fixed to use the `tokens` parameter instead of the global dataset.)
        counter = collections.Counter(tokens)
        self.vocab = {}
        for i, c in enumerate(counter):
            self.vocab[c] = i
        self.key_list = list(self.vocab.keys())
        self.val_list = list(self.vocab.values())
        self.size = len(self.key_list)

    def tokens_to_indexes(self, tokens):
        # Encode a sequence of characters as integer indexes.
        indexes = []
        for token in tokens:
            indexes.append(self.vocab[token])
        return indexes

    def indexes_to_tokens(self, indexes):
        # Decode integer indexes back into characters.
        tokens = []
        for indx in indexes:
            tokens.append(self.key_list[self.val_list.index(indx)])
        return tokens
vocab = Vocabulary(tokenized_dataset)
dataset = vocab.tokens_to_indexes(tokenized_dataset)
print(dataset[:100])
print(vocab.indexes_to_tokens(dataset[:100]))
###Output
['t', 'h', 'e', ' ', 'p', 'r', 'o', 'j', 'e', 'c', 't', ' ', 'g', 'u', 't', 'e', 'n', 'b', 'e', 'r', 'g', ' ', 'e', 'b', 'o', 'o', 'k', ' ', 'o', 'f', ' ', 'a', 'r', 'o', 'u', 'n', 'd', ' ', 't', 'h', 'e', ' ', 'w', 'o', 'r', 'l', 'd', ' ', 'i', 'n', ' ', '8', '0', ' ', 'd', 'a', 'y', 's', ' ', 'b', 'y', ' ', 'j', 'u', 'l', 'e', 's', ' ', 'v', 'e', 'r', 'n', 'e', 't', 'h', 'i', 's', ' ', 'e', 'b', 'o', 'o', 'k', ' ', 'i', 's', ' ', 'f', 'o', 'r', ' ', 't', 'h', 'e', ' ', 'u', 's', 'e', ' ', 'o']
###Markdown
Data loader: Now we will need to create a data loader. During the training process we will try to predict the next character in the sequence, so in order to train the network we need a batch of sequences and corresponding sequences of labels. Each sequence will be sampled from the dataset using sequential partitioning. This means that we sample the sequences randomly, with the constraint that subsequences from two adjacent minibatches during iteration are adjacent on the original sequence. Here is the implementation of the loader.
###Code
class SeqDataLoader:
def __init__(self, corpus, batch_size, seq_len, device):
self.corpus, self.b, self.n, self.d = corpus, batch_size, seq_len, device
def __iter__(self):
        # Randomly drop up to n - 1 leading tokens so epochs start at different offsets.
corpus = self.corpus[random.randint(0, self.n - 1):]
# No. of subsequences. Subtract 1 to account for labels.
m = (len(corpus)-1) // self.n
# The starting indices for input sequences.
initial_indices = list(range(0, m*self.n, self.n))
random.shuffle(initial_indices)
for i in range(0, m // self.b):
# The randomized starting indices for this minibatch.
batch_indicies = initial_indices[i*self.b : (i+1) * self.b]
X = [corpus[j : j+self.n] for j in batch_indicies]
Y = [corpus[j+1 : j+1+self.n] for j in batch_indicies]
yield torch.tensor(X, dtype=torch.int16, device=self.d), \
torch.tensor(Y, dtype=torch.int16, device=self.d)
data_loader = SeqDataLoader(dataset, 2, 40, device)
x,y = next(iter(data_loader))
x,y
###Output
_____no_output_____
###Markdown
Model: The model we will use here is a simple one-layer RNN with a hidden state. The following equations are used to compute the output and the new hidden state: $$H_t = \tanh(X_t W_{xh} + H_{t-1} W_{hh} + b_h)$$ $$O_t = H_t W_{hq} + b_q$$ The layer that implements the first equation is provided by torch.nn.RNN; the second equation is just a linear classifier. The activation function for the recursive layer is tanh, and each character will be encoded as a one-hot vector. The recursive layer, torch.nn.RNN, takes 2 parameters: an input tensor, and an initial state for each element in the batch. Each character in the sequence is a one-hot vector, so an entire sequence is represented as a matrix; since nn.RNN by default expects sequence-first input, the batch (after the transpose in the code below) is a 3D tensor with shape $(L, N, H_{in})$. The initial state for each sequence is a 1D vector, so the initial hidden state for all elements in the batch is a matrix with shape $(1, N, H_{out})$. Where: $N$ is the batch size, $L$ the length of the sequence, $H_{in}$ the size of the input (the size of the one-hot vector), and $H_{out}$ the size of the hidden state vector. Here is the implementation of the model:
###Code
class RNN(nn.Module):
def __init__(self, hidden_state_size, vocab_size,
device, **kwargs):
super(RNN, self).__init__(**kwargs)
self.hidden_state_size = hidden_state_size
self.vocab_size = vocab_size
self.device = device
self.recursive_layer = nn.RNN(vocab_size, hidden_state_size)
self.classifier_layer = nn.Linear(hidden_state_size, vocab_size)
def forward(self, inputs, initial_state):
X = torch_fn.one_hot(inputs.T.long(), self.vocab_size)
X = X.to(torch.float32)
Y, state = self.recursive_layer(X, initial_state)
# The fully connected layer will first change the shape of `Y` to
# (`num_steps` * `batch_size`, `num_hiddens`). Its output shape is
# (`num_steps` * `batch_size`, `vocab_size`).
output = self.classifier_layer(Y.reshape((-1, Y.shape[-1])))
return output, state
def gen_initial_state(self, batch_size):
return torch.zeros((1, batch_size, self.hidden_state_size),
device=self.device)
net = RNN(256, vocab.size, device)
net = net.to(device)
initial_state = net.gen_initial_state(256)
data_loader = SeqDataLoader(dataset, 256, 40, device)
x,y = next(iter(data_loader))
output, state = net(x, initial_state)
output.shape
output
y.shape
###Output
_____no_output_____
###Markdown
Output is predictions for all sequences in a batch consolidated into one: the fully connected layer stacks the predictions for every time step of every sequence into a single matrix of shape (`num_steps` * `batch_size`, `vocab_size`).
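This can be checked by reshaping the predictions back into per-step, per-sequence form (an illustrative check added here, reusing `output` from the cell above):
###Code
# (seq_len * batch_size, vocab_size) -> (seq_len, batch_size, vocab_size)
output.reshape(40, 256, vocab.size).shape
###Output
_____no_output_____
###Markdown
Making predictions: Now let's build a function that will allow us to extend a sentence with predictions from the network.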
###Code
def gen_predictions(net, device, vocabulary, input_str, preds_count):
torch.set_grad_enabled(False)
tokens = [token for token in input_str]
indexes = vocabulary.tokens_to_indexes(tokens)
net_input = torch.tensor(indexes, dtype=torch.int16, device=device)
net_input = net_input.expand(1, -1)
initial_state = net.gen_initial_state(1)
# Warm up with the provided string.
outputs, state = net(net_input, initial_state)
get_idx = lambda logits: logits.argmax().expand(1, 1)
to_token = lambda idx_tensor: int(idx_tensor[0][0].cpu())
# Get output
last_index = get_idx(outputs[-1:])
# Generate new result.
output_tokens = [to_token(last_index)]
for _ in range(preds_count):
outputs, state = net(last_index, state)
last_index = get_idx(outputs)
output_tokens.extend([to_token(last_index)])
output_chars = vocabulary.indexes_to_tokens(output_tokens)
for char in output_chars:
input_str += char
torch.set_grad_enabled(True)
return input_str
gen_predictions(net, device, vocab, "journey", 80)
###Output
_____no_output_____
###Markdown
As expected, the untrained network is not doing well. Now let's train it. Perplexity: But first we will create a metric that will tell us how well the model is doing, so we can monitor it during training. The standard quantity used for language models is called perplexity and it is defined by the following formula: $$\exp\left(-\frac{1}{n}\sum_{t=1}^{n}\log P(x_t \mid x_{t-1}, \ldots, x_1)\right)$$ Perplexity can be best understood as the harmonic mean of the number of real choices that we have when deciding which token to pick next. Let us look at a number of cases:
- In the best case scenario, the model always perfectly estimates the probability of the label token as 1. In this case the perplexity of the model is 1.
- In the worst case scenario, the model always predicts the probability of the label token as 0. In this situation, the perplexity is positive infinity.
- At the baseline, the model predicts a uniform distribution over all the available tokens of the vocabulary. In this case, the perplexity equals the number of unique tokens of the vocabulary. In fact, if we were to store the sequence without any compression, this would be the best we could do to encode it. Hence, this provides a nontrivial upper bound that any useful model must beat.

Training loop: When writing the training loop we have to consider two things. First, we need to handle the internal state in between batches. Because we are using sequential partitioning, we are going to initialize the internal state at the beginning of each epoch and then preserve it between minibatches. This means that we will need to detach the internal state from the computational graph, otherwise the graph will continue to grow as we compute more and more minibatches. The second consideration is that we are multiplying the state vector by the same matrix many times. This means that we will almost certainly see the exploding gradients problem and optimization will become unstable. To this we will apply gradient clipping, which clips the gradients so that their norm will not exceed a specified threshold.
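Before writing it, the perplexity baseline above can be verified numerically (a small sanity check added here; it assumes `vocab` from the earlier cells):
###Code
import math
import torch
from torch import nn

# A uniform prediction over the vocabulary should give a perplexity equal to
# the vocabulary size: cross entropy of flat logits is log(vocab_size).
uniform_logits = torch.zeros(1, vocab.size)
label = torch.tensor([0])  # any label gives the same loss for flat logits
ce = nn.CrossEntropyLoss()(uniform_logits, label)
print(math.exp(ce.item()))  # approximately vocab.size
###Output
_____no_output_____
###Markdown
Here is the function for performing gradient clipping.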
###Code
def clip_gradients(model, threshold):
params = [p for p in model.parameters() if p.requires_grad]
norm = torch.sqrt(sum(torch.sum((p.grad**2)) for p in params))
if norm > threshold:
for param in params:
param.grad[:] *= threshold / norm
###Output
_____no_output_____
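###Markdown
PyTorch also ships a built-in utility that performs the same norm-based clipping and could be used instead of the hand-written function (a one-line alternative, not used in the training loop below):
###Code
# Built-in equivalent: rescales gradients in place so their global norm <= max_norm.
torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1)
###Output
_____no_output_____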
###Markdown
This will, however, not solve the problem of vanishing gradients. And here is the training loop. For the loss function we will use cross-entropy loss, since we want to maximize the probability that the predicted next characters are correct and the model returns logits. For the optimizer we will use SGD.
###Code
def train_model(net, dataset, optimizer, batch_size, seq_len, epochs):
loss = nn.CrossEntropyLoss()
data_loader = SeqDataLoader(dataset, batch_size, seq_len, device)
loss_history = []
perplexity_history = []
for epoch in tqdm(range(epochs)):
state = None
total_loss = 0.
total_sample_number = 0
for X,Y in data_loader:
if state is None:
state = net.gen_initial_state(batch_size)
else:
state.detach_()
y_hat, state = net(X, state)
# Loss will be computed for each sequence we compute mean loss
# across all sentences.
y = Y.T.reshape(-1)
l = loss(y_hat, y.long()).mean()
optimizer.zero_grad()
l.backward()
clip_gradients(net, 1)
optimizer.step()
with torch.no_grad():
total_loss += l.cpu() * y.cpu().numel()
total_sample_number += y.cpu().numel()
loss_avg = total_loss / total_sample_number
perplexity_avg = math.exp(loss_avg)
loss_history.append(loss_avg)
perplexity_history.append(perplexity_avg)
return {"loss": loss_history, "perplexity": perplexity_history}
lr = 1
batch_size = 256
sequence_len = 40
epochs = 200
net = RNN(512, vocab.size, device)
net = net.to(device)
optimizer = torch.optim.SGD(net.parameters(), lr)
history = train_model(net, dataset, optimizer, batch_size, sequence_len, epochs)
plt.title("Loss history")
plt.plot(history["loss"])
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("Perplexity history")
plt.plot(history["perplexity"])
plt.xlabel("epoch")
plt.ylabel("perplexity")
###Output
_____no_output_____
###Markdown
Some examples and conclusions: Now let's have some fun and give the network different sentences to extend.
###Code
gen_predictions(net, device, vocab, "journey to the", 200)
gen_predictions(net, device, vocab, "before starting afresh i thought a wash would do me good", 200)
gen_predictions(net, device, vocab, "towards four oclock", 200)
gen_predictions(net, device, vocab, "captain nemo", 200)
###Output
_____no_output_____ |
notebooks/link_labeldata_to_windows.ipynb | ###Markdown
Link labeldata to window cut-outs
###Code
# %pip install rioxarray
# %pip install geopandas
import numpy as np
import rasterio
from rasterio.features import shapes, geometry_mask
import rioxarray
import json
dataPath = '/Users/maaikeizeboud/Documents/Data/test/'
imName = 'S2_comp_first.tif'
# labName = 'S2_20190131_-100p7_-75p0.geojson'
labName = 'output.geojson'
###Output
_____no_output_____
###Markdown
Load Image (Copied from Meiert's 'rasterize_labeled_data.ipynb')
###Code
bands = rioxarray.open_rasterio(dataPath + imName)
bands.rio.bounds()
bands.rio.crs
bands.spatial_ref.crs_wkt
###Output
_____no_output_____
###Markdown
Load labeldata. labeldata.geojson contains both Polygon and MultiLine geometries. NB: the labeldata is stored in the EPSG:4326 projection, as (lat,lon) values. This is converted to the EPSG:3031 projection (Antarctic polar stereographic) BEFORE LOADING HERE. This has been done in the terminal, not in this notebook, using GDAL's ogr2ogr: ``ogr2ogr -s_srs EPSG:4326 -t_srs EPSG:3031 output.geojson input.geojson``
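The same reprojection could alternatively be done in Python with geopandas; the following is a sketch, assuming the input/output filenames used in this notebook:
###Code
import geopandas as gpd

# Reproject label geometries from EPSG:4326 (lat/lon) to EPSG:3031
# (Antarctic polar stereographic) and write them to a new GeoJSON file.
labels = gpd.read_file(dataPath + 'S2_20190131_-100p7_-75p0.geojson')
labels.to_crs(epsg=3031).to_file(dataPath + 'output.geojson', driver='GeoJSON')
###Output
_____no_output_____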
###Code
with open(dataPath + labName) as f:
gj = json.load(f)
features = gj['features'][0]['geometry'] # select one feature polygon for testing
len(gj)
###Output
_____no_output_____
###Markdown
Check if the geometry is valid geojson geometry to input to rasterio.features. Even though the features are recognised as is_valid_geom()=True, using this as input for rasterio.features.geometry_mask(geometries,...) (see a later cell) yields: ``ValueError: No valid geometry objects found for rasterize``
###Code
print(rasterio.features.is_valid_geom(features))
###Output
True
###Markdown
Convert poly to georegistered polygon. We would rather skip this step if we could get geometry_mask to work with the 'features' geometry directly. ATTENTION: should test for MultiLine as well (currently only Polygon).
###Code
from geopandas import GeoSeries
from shapely import geometry
from shapely.geometry import shape, mapping, MultiPolygon
# poly1 = geometry.Polygon([[p.x,p.y] for p in plist1])
poly = np.squeeze(features['coordinates']) # ndarray
poly1 = geometry.Polygon(poly)
polys = GeoSeries([poly1],crs=bands.spatial_ref.crs_wkt)
type(polys)
###Output
_____no_output_____
###Markdown
Create mask with rasterio.features.geometry_mask (1) use georegistered polygon -- workscreate mask based on geometry. Invert mask to select pixels WITHIN bounds. ATTENTION possible to select on touch or center inclusionpolys : dtype geometry / geopandas.geoseries.GeoSeries
###Code
mmask = geometry_mask(polys,out_shape=(len(bands.y),len(bands.x)),transform=bands.rio.transform(),invert=True)
# Inspect data type of mask -> ndarray
mmask = np.expand_dims(mmask,axis=0)
mmask.shape
###Output
_____no_output_____
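###Markdown
The touch-vs-center inclusion mentioned above is controlled by the `all_touched` argument of `geometry_mask`; an illustrative variant, not used further below:
###Code
# all_touched=True includes every pixel touched by the geometry; the default
# (False) only includes pixels whose center falls within the geometry.
touch_mask = geometry_mask(polys, out_shape=(len(bands.y), len(bands.x)),
                           transform=bands.rio.transform(),
                           invert=True, all_touched=True)
###Output
_____no_output_____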
###Markdown
(2) Use geojson-like object -- doesn't work. https://rasterio.readthedocs.io/en/latest/api/rasterio.features.html GeoJSON-like objects should work, and would save some conversion from geojson > polygon > georegistered polygon.
###Code
mmask = geometry_mask(features,out_shape=(len(bands.y),len(bands.x)),transform=bands.rio.transform(),invert=True)
mmask = np.expand_dims(mmask,axis=0)
mmask.shape
m2mask = mmask.astype(np.dtype('uint16'))
###Output
_____no_output_____
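###Markdown
`geometry_mask` expects an iterable of geometries, and iterating a single GeoJSON dict yields its keys ('type', 'coordinates'), which is the likely cause of the ValueError above. Wrapping the dict in a list should therefore work without the shapely conversion (a hedged sketch, not verified in this run; `mmask_geojson` is a hypothetical name):
###Code
# Pass the GeoJSON-like dict inside a list so geometry_mask iterates over
# geometries rather than dictionary keys.
mmask_geojson = geometry_mask([features], out_shape=(len(bands.y), len(bands.x)),
                              transform=bands.rio.transform(), invert=True)
###Output
_____no_output_____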
###Markdown
inspect mask
###Code
import matplotlib.pyplot as plt
# imshow(amask[0])
fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(10,10))
ax2.imshow(m2mask[0])
ax1.imshow(bands[0,:,:])
ax2.set_title('labeldata');
ax1.set_title('image (1/3 bnd)');
###Output
_____no_output_____
###Markdown
Convert mask to integer and add as band to image. The mask is boolean; convert it to an integer representation (True==1, False==0), convert the mask to a DataArray, and import the coordinates from bands.
###Code
import xarray
# wrap the integer mask in a DataArray that shares the image's band/y/x coordinates
amask = xarray.DataArray(data=m2mask, dims=['band', 'y', 'x'],
                         coords={'band': [0], 'y': bands[0].coords['y'], 'x': bands[0].coords['x']})
from rioxarray.rioxarray import _add_attrs_proj
_add_attrs_proj(amask, bands[0])  # copy the projection attributes from the image
out = xarray.concat([bands, amask], 'band')  # append the mask as an extra band
out
###Output
_____no_output_____
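###Markdown
Write result to file Optionally, the stacked result could be written back to disk; a minimal sketch, assuming the .rio accessor (registered by rioxarray, imported above) is available on the concatenated array. The output file name is hypothetical:
###Code
# hypothetical: write the image + mask stack to a GeoTIFF next to the input
out.rio.to_raster(dataPath + 'S2_comp_first_with_mask.tif')
###Output
_____no_output_____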
Jonathan_Hardin_StarGan.ipynb
###Markdown
StarGan Jonathan Hardin **Installation** Final Project (George Bush Doesn't Care about Data Science) In my project I set up StarGAN with the CelebA celebrity-faces dataset. To the provided set I added George Bush faces, to be tested and trained together with the supplied data; at the end, all of the data tested correctly. To download the original CelebA dataset I first cloned the repository with this command.
###Code
!git clone https://github.com/jonvthvn90/GitHbProject.git
%cd GitHbProject/
!ls
import numpy as np
import pandas as pd
import cv2 as cv
from google.colab.patches import cv2_imshow # for image display
from skimage import io
from PIL import Image
import matplotlib.pyplot as plt
###Output
Cloning into 'GitHbProject'...
remote: Enumerating objects: 180, done.
remote: Counting objects: 100% (180/180), done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 180 (delta 97), reused 161 (delta 86), pack-reused 0
Receiving objects: 100% (180/180), 13.80 MiB | 20.30 MiB/s, done.
Resolving deltas: 100% (97/97), done.
/content/GitHbProject/GitHbProject
data_loader.py Jonathan_Hardin_StarGan.ipynb LICENSE main.py README.md
download.sh jpg logger.py model.py solver.py
###Markdown
Download the CelebA dataset
###Code
!bash download.sh celeba
###Output
Streaming output truncated to the last 5000 lines.
inflating: ./data/celeba/images/012465.jpg
... (unzip log truncated: thousands of similar "inflating" lines as download.sh extracts the CelebA images into ./data/celeba/images/) ...
inflating: ./data/celeba/images/159966.jpg
inflating: ./data/celeba/images/048568.jpg
inflating: ./data/celeba/images/069532.jpg
inflating: ./data/celeba/images/060462.jpg
inflating: ./data/celeba/images/153774.jpg
inflating: ./data/celeba/images/194922.jpg
inflating: ./data/celeba/images/174407.jpg
inflating: ./data/celeba/images/118839.jpg
inflating: ./data/celeba/images/006231.jpg
inflating: ./data/celeba/images/053914.jpg
inflating: ./data/celeba/images/077329.jpg
inflating: ./data/celeba/images/078365.jpg
inflating: ./data/celeba/images/156107.jpg
inflating: ./data/celeba/images/119343.jpg
inflating: ./data/celeba/images/035406.jpg
inflating: ./data/celeba/images/096940.jpg
inflating: ./data/celeba/images/179093.jpg
inflating: ./data/celeba/images/163155.jpg
inflating: ./data/celeba/images/127550.jpg
inflating: ./data/celeba/images/133962.jpg
inflating: ./data/celeba/images/012019.jpg
inflating: ./data/celeba/images/040725.jpg
inflating: ./data/celeba/images/090158.jpg
inflating: ./data/celeba/images/087993.jpg
inflating: ./data/celeba/images/020840.jpg
inflating: ./data/celeba/images/001194.jpg
inflating: ./data/celeba/images/034426.jpg
inflating: ./data/celeba/images/131066.jpg
inflating: ./data/celeba/images/048514.jpg
inflating: ./data/celeba/images/159801.jpg
inflating: ./data/celeba/images/162392.jpg
inflating: ./data/celeba/images/187020.jpg
inflating: ./data/celeba/images/160788.jpg
inflating: ./data/celeba/images/095729.jpg
inflating: ./data/celeba/images/165653.jpg
inflating: ./data/celeba/images/087590.jpg
inflating: ./data/celeba/images/132520.jpg
inflating: ./data/celeba/images/057482.jpg
inflating: ./data/celeba/images/159578.jpg
inflating: ./data/celeba/images/170187.jpg
inflating: ./data/celeba/images/010914.jpg
inflating: ./data/celeba/images/167690.jpg
inflating: ./data/celeba/images/166151.jpg
inflating: ./data/celeba/images/063234.jpg
inflating: ./data/celeba/images/163257.jpg
inflating: ./data/celeba/images/006637.jpg
inflating: ./data/celeba/images/076947.jpg
inflating: ./data/celeba/images/015322.jpg
inflating: ./data/celeba/images/029234.jpg
inflating: ./data/celeba/images/118095.jpg
inflating: ./data/celeba/images/194906.jpg
inflating: ./data/celeba/images/081945.jpg
inflating: ./data/celeba/images/185900.jpg
inflating: ./data/celeba/images/123732.jpg
inflating: ./data/celeba/images/124980.jpg
inflating: ./data/celeba/images/010349.jpg
inflating: ./data/celeba/images/121272.jpg
inflating: ./data/celeba/images/020698.jpg
inflating: ./data/celeba/images/021396.jpg
inflating: ./data/celeba/images/009521.jpg
inflating: ./data/celeba/images/121810.jpg
inflating: ./data/celeba/images/137596.jpg
inflating: ./data/celeba/images/086816.jpg
inflating: ./data/celeba/images/126638.jpg
inflating: ./data/celeba/images/111582.jpg
inflating: ./data/celeba/images/174817.jpg
inflating: ./data/celeba/images/164410.jpg
inflating: ./data/celeba/images/169584.jpg
inflating: ./data/celeba/images/162183.jpg
inflating: ./data/celeba/images/070644.jpg
inflating: ./data/celeba/images/055918.jpg
inflating: ./data/celeba/images/192824.jpg
inflating: ./data/celeba/images/153612.jpg
inflating: ./data/celeba/images/055214.jpg
inflating: ./data/celeba/images/040276.jpg
inflating: ./data/celeba/images/156515.jpg
inflating: ./data/celeba/images/001641.jpg
inflating: ./data/celeba/images/151546.jpg
inflating: ./data/celeba/images/064918.jpg
inflating: ./data/celeba/images/156632.jpg
inflating: ./data/celeba/images/140660.jpg
inflating: ./data/celeba/images/032282.jpg
inflating: ./data/celeba/images/058013.jpg
inflating: ./data/celeba/images/021181.jpg
inflating: ./data/celeba/images/098592.jpg
inflating: ./data/celeba/images/056024.jpg
inflating: ./data/celeba/images/079801.jpg
inflating: ./data/celeba/images/083596.jpg
inflating: ./data/celeba/images/121938.jpg
inflating: ./data/celeba/images/178056.jpg
inflating: ./data/celeba/images/160202.jpg
inflating: ./data/celeba/images/058208.jpg
inflating: ./data/celeba/images/009038.jpg
inflating: ./data/celeba/images/029345.jpg
inflating: ./data/celeba/images/162025.jpg
inflating: ./data/celeba/images/029164.jpg
inflating: ./data/celeba/images/023316.jpg
inflating: ./data/celeba/images/202306.jpg
inflating: ./data/celeba/images/136464.jpg
inflating: ./data/celeba/images/072723.jpg
inflating: ./data/celeba/images/099810.jpg
inflating: ./data/celeba/images/018331.jpg
inflating: ./data/celeba/images/072896.jpg
inflating: ./data/celeba/images/195365.jpg
inflating: ./data/celeba/images/024784.jpg
inflating: ./data/celeba/images/119525.jpg
inflating: ./data/celeba/images/058918.jpg
inflating: ./data/celeba/images/049984.jpg
inflating: ./data/celeba/images/108522.jpg
inflating: ./data/celeba/images/071358.jpg
inflating: ./data/celeba/images/073928.jpg
inflating: ./data/celeba/images/161559.jpg
inflating: ./data/celeba/images/092709.jpg
inflating: ./data/celeba/images/141479.jpg
inflating: ./data/celeba/images/132077.jpg
inflating: ./data/celeba/images/075932.jpg
inflating: ./data/celeba/images/004926.jpg
inflating: ./data/celeba/images/141092.jpg
inflating: ./data/celeba/images/114351.jpg
inflating: ./data/celeba/images/194818.jpg
inflating: ./data/celeba/images/082311.jpg
inflating: ./data/celeba/images/005850.jpg
inflating: ./data/celeba/images/099462.jpg
inflating: ./data/celeba/images/021295.jpg
inflating: ./data/celeba/images/100669.jpg
inflating: ./data/celeba/images/092326.jpg
inflating: ./data/celeba/images/105296.jpg
inflating: ./data/celeba/images/084560.jpg
inflating: ./data/celeba/images/062539.jpg
inflating: ./data/celeba/images/151229.jpg
inflating: ./data/celeba/images/139840.jpg
inflating: ./data/celeba/images/012271.jpg
inflating: ./data/celeba/images/026864.jpg
inflating: ./data/celeba/images/009077.jpg
inflating: ./data/celeba/images/195189.jpg
inflating: ./data/celeba/images/137041.jpg
inflating: ./data/celeba/images/093507.jpg
inflating: ./data/celeba/images/079668.jpg
inflating: ./data/celeba/images/093174.jpg
inflating: ./data/celeba/images/121224.jpg
inflating: ./data/celeba/images/171999.jpg
inflating: ./data/celeba/images/164180.jpg
inflating: ./data/celeba/images/168306.jpg
inflating: ./data/celeba/images/030602.jpg
inflating: ./data/celeba/images/079519.jpg
inflating: ./data/celeba/images/069467.jpg
inflating: ./data/celeba/images/128318.jpg
inflating: ./data/celeba/images/084083.jpg
inflating: ./data/celeba/images/062593.jpg
inflating: ./data/celeba/images/061864.jpg
inflating: ./data/celeba/images/058780.jpg
inflating: ./data/celeba/images/093857.jpg
inflating: ./data/celeba/images/037423.jpg
inflating: ./data/celeba/images/056312.jpg
inflating: ./data/celeba/images/055705.jpg
inflating: ./data/celeba/images/023617.jpg
inflating: ./data/celeba/images/115156.jpg
inflating: ./data/celeba/images/065282.jpg
inflating: ./data/celeba/images/082024.jpg
inflating: ./data/celeba/images/142777.jpg
inflating: ./data/celeba/images/151778.jpg
inflating: ./data/celeba/images/153059.jpg
inflating: ./data/celeba/images/004954.jpg
inflating: ./data/celeba/images/092979.jpg
inflating: ./data/celeba/images/107965.jpg
inflating: ./data/celeba/images/093105.jpg
inflating: ./data/celeba/images/187659.jpg
inflating: ./data/celeba/images/058193.jpg
inflating: ./data/celeba/images/158994.jpg
inflating: ./data/celeba/images/157369.jpg
inflating: ./data/celeba/images/077325.jpg
inflating: ./data/celeba/images/017894.jpg
inflating: ./data/celeba/images/159439.jpg
inflating: ./data/celeba/images/143876.jpg
inflating: ./data/celeba/images/023140.jpg
inflating: ./data/celeba/images/142073.jpg
inflating: ./data/celeba/images/051941.jpg
inflating: ./data/celeba/images/004797.jpg
inflating: ./data/celeba/images/142650.jpg
inflating: ./data/celeba/images/181456.jpg
inflating: ./data/celeba/images/123892.jpg
inflating: ./data/celeba/images/137542.jpg
inflating: ./data/celeba/images/082491.jpg
inflating: ./data/celeba/images/094534.jpg
inflating: ./data/celeba/images/010535.jpg
inflating: ./data/celeba/images/177685.jpg
inflating: ./data/celeba/images/125671.jpg
inflating: ./data/celeba/images/198641.jpg
inflating: ./data/celeba/images/172561.jpg
inflating: ./data/celeba/images/028649.jpg
inflating: ./data/celeba/images/025780.jpg
inflating: ./data/celeba/images/121658.jpg
inflating: ./data/celeba/images/027590.jpg
inflating: ./data/celeba/images/139438.jpg
inflating: ./data/celeba/images/185573.jpg
inflating: ./data/celeba/images/157709.jpg
inflating: ./data/celeba/images/047090.jpg
inflating: ./data/celeba/images/048807.jpg
inflating: ./data/celeba/images/049405.jpg
inflating: ./data/celeba/images/010039.jpg
inflating: ./data/celeba/images/107462.jpg
inflating: ./data/celeba/images/105706.jpg
inflating: ./data/celeba/images/105238.jpg
inflating: ./data/celeba/images/045530.jpg
inflating: ./data/celeba/images/126603.jpg
inflating: ./data/celeba/images/188992.jpg
inflating: ./data/celeba/images/134327.jpg
inflating: ./data/celeba/images/039426.jpg
inflating: ./data/celeba/images/112301.jpg
inflating: ./data/celeba/images/033853.jpg
inflating: ./data/celeba/images/040542.jpg
inflating: ./data/celeba/images/062677.jpg
inflating: ./data/celeba/images/024489.jpg
inflating: ./data/celeba/images/064319.jpg
inflating: ./data/celeba/images/159273.jpg
inflating: ./data/celeba/images/059486.jpg
inflating: ./data/celeba/images/025226.jpg
inflating: ./data/celeba/images/180344.jpg
inflating: ./data/celeba/images/082308.jpg
inflating: ./data/celeba/images/139450.jpg
inflating: ./data/celeba/images/105602.jpg
inflating: ./data/celeba/images/139009.jpg
inflating: ./data/celeba/images/112459.jpg
inflating: ./data/celeba/images/015726.jpg
inflating: ./data/celeba/images/145872.jpg
inflating: ./data/celeba/images/151414.jpg
inflating: ./data/celeba/images/125669.jpg
inflating: ./data/celeba/images/000454.jpg
inflating: ./data/celeba/images/153754.jpg
inflating: ./data/celeba/images/195045.jpg
inflating: ./data/celeba/images/168138.jpg
inflating: ./data/celeba/images/182074.jpg
inflating: ./data/celeba/images/188847.jpg
inflating: ./data/celeba/images/060826.jpg
inflating: ./data/celeba/images/048057.jpg
inflating: ./data/celeba/images/176191.jpg
inflating: ./data/celeba/images/115928.jpg
inflating: ./data/celeba/images/081484.jpg
inflating: ./data/celeba/images/098515.jpg
inflating: ./data/celeba/images/182599.jpg
inflating: ./data/celeba/images/083336.jpg
inflating: ./data/celeba/images/179375.jpg
inflating: ./data/celeba/images/186394.jpg
inflating: ./data/celeba/images/011565.jpg
inflating: ./data/celeba/images/158903.jpg
inflating: ./data/celeba/images/060123.jpg
inflating: ./data/celeba/images/113307.jpg
inflating: ./data/celeba/images/160547.jpg
inflating: ./data/celeba/images/030561.jpg
inflating: ./data/celeba/images/065787.jpg
inflating: ./data/celeba/images/072729.jpg
inflating: ./data/celeba/images/059509.jpg
inflating: ./data/celeba/images/052273.jpg
inflating: ./data/celeba/images/046863.jpg
inflating: ./data/celeba/images/184464.jpg
inflating: ./data/celeba/images/080199.jpg
inflating: ./data/celeba/images/025944.jpg
inflating: ./data/celeba/images/056996.jpg
inflating: ./data/celeba/images/096212.jpg
inflating: ./data/celeba/images/120382.jpg
inflating: ./data/celeba/images/092142.jpg
inflating: ./data/celeba/images/025549.jpg
inflating: ./data/celeba/images/142059.jpg
inflating: ./data/celeba/images/011600.jpg
inflating: ./data/celeba/images/026242.jpg
inflating: ./data/celeba/images/171899.jpg
inflating: ./data/celeba/images/051916.jpg
inflating: ./data/celeba/images/106085.jpg
inflating: ./data/celeba/images/157088.jpg
inflating: ./data/celeba/images/046757.jpg
inflating: ./data/celeba/images/057727.jpg
inflating: ./data/celeba/images/012159.jpg
inflating: ./data/celeba/images/039470.jpg
inflating: ./data/celeba/images/151621.jpg
inflating: ./data/celeba/images/198039.jpg
inflating: ./data/celeba/images/164134.jpg
inflating: ./data/celeba/images/082449.jpg
inflating: ./data/celeba/images/199006.jpg
inflating: ./data/celeba/images/004124.jpg
inflating: ./data/celeba/images/094658.jpg
inflating: ./data/celeba/images/026150.jpg
inflating: ./data/celeba/images/012167.jpg
inflating: ./data/celeba/images/150616.jpg
inflating: ./data/celeba/images/182996.jpg
inflating: ./data/celeba/images/132157.jpg
inflating: ./data/celeba/images/105625.jpg
inflating: ./data/celeba/images/158141.jpg
inflating: ./data/celeba/images/161308.jpg
inflating: ./data/celeba/images/121794.jpg
inflating: ./data/celeba/images/100020.jpg
inflating: ./data/celeba/images/026457.jpg
inflating: ./data/celeba/images/018677.jpg
inflating: ./data/celeba/images/053994.jpg
inflating: ./data/celeba/images/016981.jpg
inflating: ./data/celeba/images/026447.jpg
inflating: ./data/celeba/images/160220.jpg
inflating: ./data/celeba/images/189458.jpg
inflating: ./data/celeba/images/163472.jpg
inflating: ./data/celeba/images/158049.jpg
inflating: ./data/celeba/images/005780.jpg
inflating: ./data/celeba/images/095732.jpg
inflating: ./data/celeba/images/108422.jpg
inflating: ./data/celeba/images/069472.jpg
inflating: ./data/celeba/images/136928.jpg
inflating: ./data/celeba/images/075674.jpg
inflating: ./data/celeba/images/053900.jpg
inflating: ./data/celeba/images/000507.jpg
inflating: ./data/celeba/images/034271.jpg
inflating: ./data/celeba/images/040310.jpg
inflating: ./data/celeba/images/041469.jpg
inflating: ./data/celeba/images/184578.jpg
inflating: ./data/celeba/images/080732.jpg
inflating: ./data/celeba/images/048301.jpg
inflating: ./data/celeba/images/000194.jpg
inflating: ./data/celeba/images/022589.jpg
inflating: ./data/celeba/images/069505.jpg
inflating: ./data/celeba/images/145797.jpg
inflating: ./data/celeba/images/029518.jpg
inflating: ./data/celeba/images/150700.jpg
inflating: ./data/celeba/images/194646.jpg
inflating: ./data/celeba/images/184505.jpg
inflating: ./data/celeba/images/109948.jpg
inflating: ./data/celeba/images/163810.jpg
inflating: ./data/celeba/images/114775.jpg
inflating: ./data/celeba/images/116115.jpg
inflating: ./data/celeba/images/059083.jpg
inflating: ./data/celeba/images/162105.jpg
inflating: ./data/celeba/images/120976.jpg
inflating: ./data/celeba/images/100925.jpg
inflating: ./data/celeba/images/153249.jpg
inflating: ./data/celeba/images/065117.jpg
inflating: ./data/celeba/images/079617.jpg
inflating: ./data/celeba/images/179571.jpg
inflating: ./data/celeba/images/115822.jpg
inflating: ./data/celeba/images/094135.jpg
inflating: ./data/celeba/images/121495.jpg
inflating: ./data/celeba/images/047013.jpg
inflating: ./data/celeba/images/161732.jpg
inflating: ./data/celeba/images/110383.jpg
inflating: ./data/celeba/images/135483.jpg
inflating: ./data/celeba/images/114119.jpg
inflating: ./data/celeba/images/160811.jpg
inflating: ./data/celeba/images/173283.jpg
inflating: ./data/celeba/images/145554.jpg
inflating: ./data/celeba/images/004929.jpg
inflating: ./data/celeba/images/142194.jpg
inflating: ./data/celeba/images/161850.jpg
inflating: ./data/celeba/images/007688.jpg
inflating: ./data/celeba/images/035211.jpg
inflating: ./data/celeba/images/116069.jpg
inflating: ./data/celeba/images/080125.jpg
inflating: ./data/celeba/images/027829.jpg
inflating: ./data/celeba/images/200075.jpg
inflating: ./data/celeba/images/040794.jpg
inflating: ./data/celeba/images/060147.jpg
inflating: ./data/celeba/images/045735.jpg
inflating: ./data/celeba/images/183689.jpg
inflating: ./data/celeba/images/108060.jpg
inflating: ./data/celeba/images/172410.jpg
inflating: ./data/celeba/images/172775.jpg
inflating: ./data/celeba/images/046903.jpg
inflating: ./data/celeba/images/016845.jpg
inflating: ./data/celeba/images/051569.jpg
inflating: ./data/celeba/images/186707.jpg
inflating: ./data/celeba/images/178012.jpg
inflating: ./data/celeba/images/159106.jpg
inflating: ./data/celeba/images/148309.jpg
inflating: ./data/celeba/images/001414.jpg
inflating: ./data/celeba/images/127636.jpg
inflating: ./data/celeba/images/123103.jpg
inflating: ./data/celeba/images/062620.jpg
inflating: ./data/celeba/images/073283.jpg
inflating: ./data/celeba/images/147455.jpg
inflating: ./data/celeba/images/131006.jpg
inflating: ./data/celeba/images/158972.jpg
inflating: ./data/celeba/images/104666.jpg
inflating: ./data/celeba/images/149303.jpg
inflating: ./data/celeba/images/080654.jpg
inflating: ./data/celeba/images/165808.jpg
inflating: ./data/celeba/images/120247.jpg
inflating: ./data/celeba/images/132527.jpg
inflating: ./data/celeba/images/044415.jpg
inflating: ./data/celeba/images/006183.jpg
inflating: ./data/celeba/images/000996.jpg
inflating: ./data/celeba/images/043458.jpg
inflating: ./data/celeba/images/159702.jpg
inflating: ./data/celeba/images/000211.jpg
inflating: ./data/celeba/images/153693.jpg
inflating: ./data/celeba/images/123099.jpg
inflating: ./data/celeba/images/136921.jpg
inflating: ./data/celeba/images/107429.jpg
inflating: ./data/celeba/images/145699.jpg
inflating: ./data/celeba/images/001280.jpg
inflating: ./data/celeba/images/168553.jpg
inflating: ./data/celeba/images/064935.jpg
inflating: ./data/celeba/images/054945.jpg
inflating: ./data/celeba/images/196003.jpg
inflating: ./data/celeba/images/035481.jpg
inflating: ./data/celeba/images/137842.jpg
inflating: ./data/celeba/images/011445.jpg
inflating: ./data/celeba/images/051498.jpg
inflating: ./data/celeba/images/157066.jpg
inflating: ./data/celeba/images/187988.jpg
inflating: ./data/celeba/images/151281.jpg
inflating: ./data/celeba/images/184557.jpg
inflating: ./data/celeba/images/050147.jpg
inflating: ./data/celeba/images/125722.jpg
inflating: ./data/celeba/images/109423.jpg
inflating: ./data/celeba/images/112288.jpg
inflating: ./data/celeba/images/002902.jpg
inflating: ./data/celeba/images/118251.jpg
inflating: ./data/celeba/images/001248.jpg
inflating: ./data/celeba/images/069788.jpg
inflating: ./data/celeba/images/020571.jpg
inflating: ./data/celeba/images/088459.jpg
inflating: ./data/celeba/images/002859.jpg
inflating: ./data/celeba/images/190615.jpg
inflating: ./data/celeba/images/097351.jpg
inflating: ./data/celeba/images/053513.jpg
inflating: ./data/celeba/images/200610.jpg
inflating: ./data/celeba/images/143735.jpg
inflating: ./data/celeba/images/187921.jpg
inflating: ./data/celeba/images/177088.jpg
inflating: ./data/celeba/images/046735.jpg
inflating: ./data/celeba/images/146698.jpg
inflating: ./data/celeba/images/185730.jpg
inflating: ./data/celeba/images/183033.jpg
inflating: ./data/celeba/images/044896.jpg
inflating: ./data/celeba/images/074752.jpg
inflating: ./data/celeba/images/098277.jpg
inflating: ./data/celeba/images/002596.jpg
inflating: ./data/celeba/images/187215.jpg
inflating: ./data/celeba/images/149367.jpg
inflating: ./data/celeba/images/060069.jpg
inflating: ./data/celeba/images/053389.jpg
inflating: ./data/celeba/images/069229.jpg
inflating: ./data/celeba/images/105095.jpg
inflating: ./data/celeba/images/107797.jpg
inflating: ./data/celeba/images/129152.jpg
inflating: ./data/celeba/images/006097.jpg
inflating: ./data/celeba/images/073436.jpg
inflating: ./data/celeba/images/184606.jpg
inflating: ./data/celeba/images/047498.jpg
inflating: ./data/celeba/images/135954.jpg
inflating: ./data/celeba/images/031473.jpg
inflating: ./data/celeba/images/092557.jpg
inflating: ./data/celeba/images/133033.jpg
inflating: ./data/celeba/images/129508.jpg
inflating: ./data/celeba/images/125966.jpg
inflating: ./data/celeba/images/162839.jpg
inflating: ./data/celeba/images/069266.jpg
inflating: ./data/celeba/images/135624.jpg
inflating: ./data/celeba/images/028452.jpg
inflating: ./data/celeba/images/014026.jpg
inflating: ./data/celeba/images/129915.jpg
inflating: ./data/celeba/images/200512.jpg
inflating: ./data/celeba/images/178955.jpg
inflating: ./data/celeba/images/153764.jpg
inflating: ./data/celeba/images/175379.jpg
inflating: ./data/celeba/images/084032.jpg
inflating: ./data/celeba/images/020541.jpg
inflating: ./data/celeba/images/128711.jpg
inflating: ./data/celeba/images/054483.jpg
inflating: ./data/celeba/images/187874.jpg
inflating: ./data/celeba/images/197264.jpg
inflating: ./data/celeba/images/182449.jpg
inflating: ./data/celeba/images/042977.jpg
inflating: ./data/celeba/images/145678.jpg
inflating: ./data/celeba/images/187445.jpg
inflating: ./data/celeba/images/071108.jpg
inflating: ./data/celeba/images/070397.jpg
inflating: ./data/celeba/images/054060.jpg
inflating: ./data/celeba/images/044851.jpg
inflating: ./data/celeba/images/121919.jpg
inflating: ./data/celeba/images/162860.jpg
inflating: ./data/celeba/images/062811.jpg
inflating: ./data/celeba/images/068544.jpg
inflating: ./data/celeba/images/136415.jpg
inflating: ./data/celeba/images/000367.jpg
inflating: ./data/celeba/images/053018.jpg
inflating: ./data/celeba/images/194401.jpg
inflating: ./data/celeba/images/091615.jpg
inflating: ./data/celeba/images/012050.jpg
inflating: ./data/celeba/images/057732.jpg
inflating: ./data/celeba/images/034814.jpg
inflating: ./data/celeba/images/012109.jpg
inflating: ./data/celeba/images/057545.jpg
inflating: ./data/celeba/images/165063.jpg
inflating: ./data/celeba/images/060281.jpg
inflating: ./data/celeba/images/162080.jpg
inflating: ./data/celeba/images/024686.jpg
inflating: ./data/celeba/images/100432.jpg
inflating: ./data/celeba/images/031365.jpg
inflating: ./data/celeba/images/004559.jpg
inflating: ./data/celeba/images/137714.jpg
inflating: ./data/celeba/images/153002.jpg
inflating: ./data/celeba/images/166669.jpg
inflating: ./data/celeba/images/055108.jpg
inflating: ./data/celeba/images/116523.jpg
inflating: ./data/celeba/images/030633.jpg
inflating: ./data/celeba/images/080923.jpg
inflating: ./data/celeba/images/080789.jpg
inflating: ./data/celeba/images/037071.jpg
inflating: ./data/celeba/images/165139.jpg
inflating: ./data/celeba/images/005293.jpg
inflating: ./data/celeba/images/103999.jpg
inflating: ./data/celeba/images/024359.jpg
inflating: ./data/celeba/images/162627.jpg
inflating: ./data/celeba/images/030576.jpg
inflating: ./data/celeba/images/094504.jpg
inflating: ./data/celeba/images/101556.jpg
inflating: ./data/celeba/images/035018.jpg
inflating: ./data/celeba/images/054853.jpg
inflating: ./data/celeba/images/074058.jpg
inflating: ./data/celeba/images/102382.jpg
inflating: ./data/celeba/images/122441.jpg
inflating: ./data/celeba/images/154028.jpg
inflating: ./data/celeba/images/081928.jpg
inflating: ./data/celeba/images/013404.jpg
inflating: ./data/celeba/images/151052.jpg
inflating: ./data/celeba/images/085463.jpg
inflating: ./data/celeba/images/184619.jpg
inflating: ./data/celeba/images/036091.jpg
inflating: ./data/celeba/images/041906.jpg
inflating: ./data/celeba/images/005860.jpg
inflating: ./data/celeba/images/153602.jpg
inflating: ./data/celeba/images/025777.jpg
inflating: ./data/celeba/images/091529.jpg
inflating: ./data/celeba/images/151889.jpg
inflating: ./data/celeba/images/004792.jpg
inflating: ./data/celeba/images/080261.jpg
inflating: ./data/celeba/images/063110.jpg
inflating: ./data/celeba/images/020236.jpg
inflating: ./data/celeba/images/186481.jpg
inflating: ./data/celeba/images/000713.jpg
inflating: ./data/celeba/images/177012.jpg
inflating: ./data/celeba/images/024155.jpg
inflating: ./data/celeba/images/186671.jpg
inflating: ./data/celeba/images/049132.jpg
inflating: ./data/celeba/images/085159.jpg
inflating: ./data/celeba/images/068539.jpg
inflating: ./data/celeba/images/044771.jpg
inflating: ./data/celeba/images/167548.jpg
inflating: ./data/celeba/images/063201.jpg
inflating: ./data/celeba/images/086754.jpg
inflating: ./data/celeba/images/022028.jpg
inflating: ./data/celeba/images/000189.jpg
inflating: ./data/celeba/images/061217.jpg
inflating: ./data/celeba/images/123617.jpg
inflating: ./data/celeba/images/127111.jpg
inflating: ./data/celeba/images/086325.jpg
inflating: ./data/celeba/images/177507.jpg
inflating: ./data/celeba/images/037495.jpg
inflating: ./data/celeba/images/184938.jpg
inflating: ./data/celeba/images/197063.jpg
inflating: ./data/celeba/images/114169.jpg
inflating: ./data/celeba/images/107820.jpg
inflating: ./data/celeba/images/035776.jpg
inflating: ./data/celeba/images/167467.jpg
inflating: ./data/celeba/images/064487.jpg
inflating: ./data/celeba/images/180310.jpg
inflating: ./data/celeba/images/062558.jpg
inflating: ./data/celeba/images/001573.jpg
inflating: ./data/celeba/images/077684.jpg
inflating: ./data/celeba/images/161897.jpg
inflating: ./data/celeba/images/187952.jpg
inflating: ./data/celeba/images/120688.jpg
inflating: ./data/celeba/images/019889.jpg
inflating: ./data/celeba/images/114712.jpg
inflating: ./data/celeba/images/190995.jpg
inflating: ./data/celeba/images/119474.jpg
inflating: ./data/celeba/images/144641.jpg
inflating: ./data/celeba/images/114435.jpg
inflating: ./data/celeba/images/145615.jpg
inflating: ./data/celeba/images/023384.jpg
inflating: ./data/celeba/images/191475.jpg
inflating: ./data/celeba/images/143466.jpg
inflating: ./data/celeba/images/162269.jpg
inflating: ./data/celeba/images/036027.jpg
inflating: ./data/celeba/images/162361.jpg
inflating: ./data/celeba/images/113780.jpg
inflating: ./data/celeba/images/013566.jpg
inflating: ./data/celeba/images/000353.jpg
inflating: ./data/celeba/images/116200.jpg
inflating: ./data/celeba/images/201771.jpg
inflating: ./data/celeba/images/148916.jpg
inflating: ./data/celeba/images/091198.jpg
inflating: ./data/celeba/images/142424.jpg
inflating: ./data/celeba/images/173804.jpg
inflating: ./data/celeba/images/169290.jpg
inflating: ./data/celeba/images/129348.jpg
inflating: ./data/celeba/images/167858.jpg
inflating: ./data/celeba/images/048240.jpg
inflating: ./data/celeba/images/095654.jpg
inflating: ./data/celeba/images/036900.jpg
inflating: ./data/celeba/images/096777.jpg
inflating: ./data/celeba/images/002529.jpg
inflating: ./data/celeba/images/024667.jpg
inflating: ./data/celeba/images/025091.jpg
inflating: ./data/celeba/images/022093.jpg
inflating: ./data/celeba/images/169699.jpg
inflating: ./data/celeba/images/030717.jpg
inflating: ./data/celeba/images/036734.jpg
inflating: ./data/celeba/images/044019.jpg
inflating: ./data/celeba/images/080722.jpg
inflating: ./data/celeba/images/023534.jpg
inflating: ./data/celeba/images/196681.jpg
inflating: ./data/celeba/images/111450.jpg
inflating: ./data/celeba/images/109737.jpg
inflating: ./data/celeba/images/160817.jpg
inflating: ./data/celeba/images/114831.jpg
inflating: ./data/celeba/images/140068.jpg
inflating: ./data/celeba/images/124055.jpg
inflating: ./data/celeba/images/084603.jpg
inflating: ./data/celeba/images/087781.jpg
inflating: ./data/celeba/images/004091.jpg
inflating: ./data/celeba/images/142680.jpg
inflating: ./data/celeba/images/156268.jpg
inflating: ./data/celeba/images/061562.jpg
inflating: ./data/celeba/images/042934.jpg
inflating: ./data/celeba/images/058343.jpg
inflating: ./data/celeba/images/111168.jpg
inflating: ./data/celeba/images/041689.jpg
inflating: ./data/celeba/images/092110.jpg
inflating: ./data/celeba/images/133463.jpg
inflating: ./data/celeba/images/024979.jpg
inflating: ./data/celeba/images/155201.jpg
inflating: ./data/celeba/images/138396.jpg
inflating: ./data/celeba/images/135794.jpg
inflating: ./data/celeba/images/131349.jpg
inflating: ./data/celeba/images/120301.jpg
inflating: ./data/celeba/images/058599.jpg
inflating: ./data/celeba/images/010792.jpg
inflating: ./data/celeba/images/034211.jpg
inflating: ./data/celeba/images/001219.jpg
inflating: ./data/celeba/images/117418.jpg
inflating: ./data/celeba/images/074817.jpg
inflating: ./data/celeba/images/031764.jpg
inflating: ./data/celeba/images/042188.jpg
inflating: ./data/celeba/images/129753.jpg
inflating: ./data/celeba/images/081154.jpg
inflating: ./data/celeba/images/038383.jpg
inflating: ./data/celeba/images/082805.jpg
inflating: ./data/celeba/images/120423.jpg
inflating: ./data/celeba/images/037153.jpg
inflating: ./data/celeba/images/160869.jpg
inflating: ./data/celeba/images/142291.jpg
inflating: ./data/celeba/images/080024.jpg
inflating: ./data/celeba/images/127226.jpg
inflating: ./data/celeba/images/166867.jpg
inflating: ./data/celeba/images/026110.jpg
inflating: ./data/celeba/images/197370.jpg
inflating: ./data/celeba/images/162859.jpg
inflating: ./data/celeba/images/067765.jpg
inflating: ./data/celeba/images/134028.jpg
inflating: ./data/celeba/images/185761.jpg
inflating: ./data/celeba/images/117640.jpg
inflating: ./data/celeba/images/144803.jpg
inflating: ./data/celeba/images/178071.jpg
inflating: ./data/celeba/images/043488.jpg
inflating: ./data/celeba/images/188120.jpg
inflating: ./data/celeba/images/132791.jpg
inflating: ./data/celeba/images/025529.jpg
inflating: ./data/celeba/images/075102.jpg
inflating: ./data/celeba/images/009798.jpg
inflating: ./data/celeba/images/195928.jpg
inflating: ./data/celeba/images/083773.jpg
inflating: ./data/celeba/images/111657.jpg
inflating: ./data/celeba/images/058429.jpg
inflating: ./data/celeba/images/128321.jpg
inflating: ./data/celeba/images/192104.jpg
inflating: ./data/celeba/images/157292.jpg
inflating: ./data/celeba/images/173848.jpg
inflating: ./data/celeba/images/028893.jpg
inflating: ./data/celeba/images/013044.jpg
inflating: ./data/celeba/images/029036.jpg
inflating: ./data/celeba/images/053612.jpg
inflating: ./data/celeba/images/111253.jpg
inflating: ./data/celeba/images/017324.jpg
inflating: ./data/celeba/images/186932.jpg
inflating: ./data/celeba/images/026788.jpg
inflating: ./data/celeba/images/183892.jpg
inflating: ./data/celeba/images/028123.jpg
inflating: ./data/celeba/images/153730.jpg
inflating: ./data/celeba/images/090814.jpg
inflating: ./data/celeba/images/056093.jpg
inflating: ./data/celeba/images/003423.jpg
inflating: ./data/celeba/images/153199.jpg
inflating: ./data/celeba/images/123720.jpg
inflating: ./data/celeba/images/147787.jpg
inflating: ./data/celeba/images/192952.jpg
inflating: ./data/celeba/images/049271.jpg
inflating: ./data/celeba/images/131478.jpg
inflating: ./data/celeba/images/003414.jpg
inflating: ./data/celeba/images/149487.jpg
inflating: ./data/celeba/images/145864.jpg
inflating: ./data/celeba/images/038392.jpg
inflating: ./data/celeba/images/088811.jpg
inflating: ./data/celeba/images/106819.jpg
inflating: ./data/celeba/images/142728.jpg
inflating: ./data/celeba/images/178542.jpg
inflating: ./data/celeba/images/135042.jpg
inflating: ./data/celeba/images/108639.jpg
inflating: ./data/celeba/images/092327.jpg
inflating: ./data/celeba/images/019682.jpg
inflating: ./data/celeba/images/032257.jpg
inflating: ./data/celeba/images/130303.jpg
inflating: ./data/celeba/images/102717.jpg
inflating: ./data/celeba/images/028843.jpg
inflating: ./data/celeba/images/162617.jpg
inflating: ./data/celeba/images/140854.jpg
inflating: ./data/celeba/images/083412.jpg
inflating: ./data/celeba/images/034669.jpg
inflating: ./data/celeba/images/044257.jpg
inflating: ./data/celeba/images/035870.jpg
inflating: ./data/celeba/images/120222.jpg
inflating: ./data/celeba/images/067777.jpg
inflating: ./data/celeba/images/008095.jpg
inflating: ./data/celeba/images/006492.jpg
inflating: ./data/celeba/images/124445.jpg
inflating: ./data/celeba/images/055404.jpg
inflating: ./data/celeba/images/188934.jpg
inflating: ./data/celeba/images/189965.jpg
inflating: ./data/celeba/images/040068.jpg
inflating: ./data/celeba/images/099958.jpg
inflating: ./data/celeba/images/154538.jpg
inflating: ./data/celeba/images/016721.jpg
inflating: ./data/celeba/images/045083.jpg
inflating: ./data/celeba/images/143621.jpg
inflating: ./data/celeba/images/044540.jpg
inflating: ./data/celeba/images/133782.jpg
inflating: ./data/celeba/images/139819.jpg
inflating: ./data/celeba/images/147941.jpg
inflating: ./data/celeba/images/144016.jpg
inflating: ./data/celeba/images/069252.jpg
inflating: ./data/celeba/images/131614.jpg
inflating: ./data/celeba/images/016013.jpg
inflating: ./data/celeba/images/016703.jpg
inflating: ./data/celeba/images/085212.jpg
inflating: ./data/celeba/images/007421.jpg
inflating: ./data/celeba/images/195672.jpg
inflating: ./data/celeba/images/183544.jpg
inflating: ./data/celeba/images/050131.jpg
inflating: ./data/celeba/images/189038.jpg
inflating: ./data/celeba/images/140608.jpg
inflating: ./data/celeba/images/070595.jpg
inflating: ./data/celeba/images/152755.jpg
inflating: ./data/celeba/images/074618.jpg
inflating: ./data/celeba/images/015038.jpg
inflating: ./data/celeba/images/023862.jpg
inflating: ./data/celeba/images/167779.jpg
inflating: ./data/celeba/images/021112.jpg
inflating: ./data/celeba/images/072384.jpg
inflating: ./data/celeba/images/077817.jpg
inflating: ./data/celeba/images/137711.jpg
inflating: ./data/celeba/images/127608.jpg
inflating: ./data/celeba/images/176202.jpg
inflating: ./data/celeba/images/158581.jpg
inflating: ./data/celeba/images/121694.jpg
inflating: ./data/celeba/images/184546.jpg
inflating: ./data/celeba/images/161321.jpg
inflating: ./data/celeba/images/176853.jpg
inflating: ./data/celeba/images/005944.jpg
inflating: ./data/celeba/images/145884.jpg
inflating: ./data/celeba/images/193274.jpg
inflating: ./data/celeba/images/154731.jpg
inflating: ./data/celeba/images/027706.jpg
inflating: ./data/celeba/images/154273.jpg
inflating: ./data/celeba/images/074134.jpg
inflating: ./data/celeba/images/132785.jpg
inflating: ./data/celeba/images/034617.jpg
inflating: ./data/celeba/images/193451.jpg
inflating: ./data/celeba/images/134651.jpg
inflating: ./data/celeba/images/183826.jpg
inflating: ./data/celeba/images/105403.jpg
inflating: ./data/celeba/images/200733.jpg
inflating: ./data/celeba/images/121813.jpg
inflating: ./data/celeba/images/040691.jpg
inflating: ./data/celeba/images/023363.jpg
inflating: ./data/celeba/images/105684.jpg
inflating: ./data/celeba/images/040355.jpg
inflating: ./data/celeba/images/012585.jpg
inflating: ./data/celeba/images/163801.jpg
inflating: ./data/celeba/images/111108.jpg
inflating: ./data/celeba/images/008406.jpg
inflating: ./data/celeba/images/123159.jpg
inflating: ./data/celeba/images/132797.jpg
inflating: ./data/celeba/images/029445.jpg
inflating: ./data/celeba/images/019208.jpg
inflating: ./data/celeba/images/047375.jpg
inflating: ./data/celeba/images/083005.jpg
inflating: ./data/celeba/images/014314.jpg
inflating: ./data/celeba/images/098780.jpg
inflating: ./data/celeba/images/189153.jpg
inflating: ./data/celeba/images/185312.jpg
inflating: ./data/celeba/images/086524.jpg
inflating: ./data/celeba/images/185230.jpg
inflating: ./data/celeba/images/064109.jpg
inflating: ./data/celeba/images/190293.jpg
inflating: ./data/celeba/images/090312.jpg
inflating: ./data/celeba/images/183459.jpg
inflating: ./data/celeba/images/092248.jpg
inflating: ./data/celeba/images/011129.jpg
inflating: ./data/celeba/images/154950.jpg
inflating: ./data/celeba/images/045791.jpg
inflating: ./data/celeba/images/101595.jpg
inflating: ./data/celeba/images/171900.jpg
inflating: ./data/celeba/images/085983.jpg
inflating: ./data/celeba/images/141342.jpg
inflating: ./data/celeba/images/009780.jpg
inflating: ./data/celeba/images/170358.jpg
inflating: ./data/celeba/images/044899.jpg
inflating: ./data/celeba/images/110080.jpg
inflating: ./data/celeba/images/182336.jpg
inflating: ./data/celeba/images/067940.jpg
inflating: ./data/celeba/images/177627.jpg
inflating: ./data/celeba/images/096894.jpg
inflating: ./data/celeba/images/122481.jpg
inflating: ./data/celeba/images/173107.jpg
inflating: ./data/celeba/images/099332.jpg
inflating: ./data/celeba/images/189569.jpg
inflating: ./data/celeba/images/083279.jpg
inflating: ./data/celeba/images/190296.jpg
inflating: ./data/celeba/images/033245.jpg
inflating: ./data/celeba/images/136463.jpg
inflating: ./data/celeba/images/000698.jpg
inflating: ./data/celeba/images/041329.jpg
inflating: ./data/celeba/images/010148.jpg
inflating: ./data/celeba/images/192173.jpg
inflating: ./data/celeba/images/095690.jpg
inflating: ./data/celeba/images/155297.jpg
inflating: ./data/celeba/images/152754.jpg
inflating: ./data/celeba/images/056035.jpg
inflating: ./data/celeba/images/170806.jpg
inflating: ./data/celeba/images/061091.jpg
inflating: ./data/celeba/images/083644.jpg
inflating: ./data/celeba/images/091083.jpg
inflating: ./data/celeba/images/120697.jpg
inflating: ./data/celeba/images/091270.jpg
inflating: ./data/celeba/images/041215.jpg
inflating: ./data/celeba/images/019837.jpg
inflating: ./data/celeba/images/104036.jpg
inflating: ./data/celeba/images/059244.jpg
inflating: ./data/celeba/images/120938.jpg
inflating: ./data/celeba/images/194777.jpg
inflating: ./data/celeba/images/008308.jpg
inflating: ./data/celeba/images/062450.jpg
inflating: ./data/celeba/images/016641.jpg
inflating: ./data/celeba/images/198812.jpg
inflating: ./data/celeba/images/107614.jpg
inflating: ./data/celeba/images/080122.jpg
inflating: ./data/celeba/images/188279.jpg
inflating: ./data/celeba/images/155808.jpg
inflating: ./data/celeba/images/084208.jpg
inflating: ./data/celeba/images/031430.jpg
inflating: ./data/celeba/images/157820.jpg
inflating: ./data/celeba/images/080775.jpg
inflating: ./data/celeba/images/034659.jpg
inflating: ./data/celeba/images/174992.jpg
inflating: ./data/celeba/images/112394.jpg
inflating: ./data/celeba/images/030476.jpg
inflating: ./data/celeba/images/005013.jpg
inflating: ./data/celeba/images/164302.jpg
inflating: ./data/celeba/images/127955.jpg
inflating: ./data/celeba/images/094161.jpg
inflating: ./data/celeba/images/122703.jpg
inflating: ./data/celeba/images/003297.jpg
inflating: ./data/celeba/images/055982.jpg
inflating: ./data/celeba/images/093261.jpg
inflating: ./data/celeba/images/140315.jpg
inflating: ./data/celeba/images/078616.jpg
inflating: ./data/celeba/images/194071.jpg
inflating: ./data/celeba/images/030097.jpg
inflating: ./data/celeba/images/021807.jpg
inflating: ./data/celeba/images/160131.jpg
inflating: ./data/celeba/images/150221.jpg
inflating: ./data/celeba/images/045013.jpg
inflating: ./data/celeba/images/178481.jpg
inflating: ./data/celeba/images/078004.jpg
inflating: ./data/celeba/images/045303.jpg
inflating: ./data/celeba/images/163858.jpg
inflating: ./data/celeba/images/014426.jpg
inflating: ./data/celeba/images/045657.jpg
inflating: ./data/celeba/images/097910.jpg
inflating: ./data/celeba/images/073972.jpg
inflating: ./data/celeba/images/142411.jpg
inflating: ./data/celeba/images/156619.jpg
inflating: ./data/celeba/images/192162.jpg
inflating: ./data/celeba/images/172717.jpg
inflating: ./data/celeba/images/162673.jpg
inflating: ./data/celeba/images/144961.jpg
inflating: ./data/celeba/images/177863.jpg
inflating: ./data/celeba/images/021705.jpg
inflating: ./data/celeba/images/024107.jpg
inflating: ./data/celeba/images/035978.jpg
inflating: ./data/celeba/images/111244.jpg
inflating: ./data/celeba/images/164170.jpg
inflating: ./data/celeba/images/187297.jpg
inflating: ./data/celeba/images/155673.jpg
inflating: ./data/celeba/images/033442.jpg
inflating: ./data/celeba/images/181315.jpg
inflating: ./data/celeba/images/071560.jpg
inflating: ./data/celeba/images/095609.jpg
inflating: ./data/celeba/images/068534.jpg
inflating: ./data/celeba/images/087674.jpg
inflating: ./data/celeba/images/133690.jpg
inflating: ./data/celeba/images/150950.jpg
inflating: ./data/celeba/images/180867.jpg
inflating: ./data/celeba/images/131014.jpg
inflating: ./data/celeba/images/166911.jpg
inflating: ./data/celeba/images/082851.jpg
inflating: ./data/celeba/images/131154.jpg
inflating: ./data/celeba/images/060877.jpg
inflating: ./data/celeba/images/048570.jpg
inflating: ./data/celeba/images/078346.jpg
inflating: ./data/celeba/images/161810.jpg
inflating: ./data/celeba/images/075988.jpg
inflating: ./data/celeba/images/065655.jpg
inflating: ./data/celeba/images/179618.jpg
inflating: ./data/celeba/images/150146.jpg
inflating: ./data/celeba/images/031385.jpg
inflating: ./data/celeba/images/091708.jpg
inflating: ./data/celeba/images/118506.jpg
inflating: ./data/celeba/images/024079.jpg
inflating: ./data/celeba/images/065275.jpg
inflating: ./data/celeba/images/078534.jpg
inflating: ./data/celeba/images/161124.jpg
inflating: ./data/celeba/images/075832.jpg
inflating: ./data/celeba/images/008687.jpg
inflating: ./data/celeba/images/035518.jpg
inflating: ./data/celeba/images/016074.jpg
inflating: ./data/celeba/images/150868.jpg
inflating: ./data/celeba/images/146563.jpg
inflating: ./data/celeba/images/085036.jpg
inflating: ./data/celeba/images/083469.jpg
inflating: ./data/celeba/images/152494.jpg
inflating: ./data/celeba/images/118666.jpg
inflating: ./data/celeba/images/099241.jpg
inflating: ./data/celeba/images/119741.jpg
inflating: ./data/celeba/images/003673.jpg
inflating: ./data/celeba/images/067041.jpg
inflating: ./data/celeba/images/127645.jpg
inflating: ./data/celeba/images/145754.jpg
inflating: ./data/celeba/images/065761.jpg
inflating: ./data/celeba/images/198870.jpg
inflating: ./data/celeba/images/059343.jpg
inflating: ./data/celeba/images/010013.jpg
inflating: ./data/celeba/images/059217.jpg
inflating: ./data/celeba/images/111545.jpg
inflating: ./data/celeba/images/024947.jpg
inflating: ./data/celeba/images/098678.jpg
inflating: ./data/celeba/images/191977.jpg
inflating: ./data/celeba/images/108048.jpg
inflating: ./data/celeba/images/083378.jpg
inflating: ./data/celeba/images/164016.jpg
inflating: ./data/celeba/images/086179.jpg
inflating: ./data/celeba/images/052324.jpg
inflating: ./data/celeba/images/142816.jpg
inflating: ./data/celeba/images/038180.jpg
inflating: ./data/celeba/images/021818.jpg
inflating: ./data/celeba/images/039190.jpg
inflating: ./data/celeba/images/043247.jpg
inflating: ./data/celeba/images/036579.jpg
inflating: ./data/celeba/images/198571.jpg
inflating: ./data/celeba/images/034992.jpg
inflating: ./data/celeba/images/079494.jpg
inflating: ./data/celeba/images/200398.jpg
inflating: ./data/celeba/images/132940.jpg
inflating: ./data/celeba/images/009176.jpg
inflating: ./data/celeba/images/066813.jpg
inflating: ./data/celeba/images/057582.jpg
inflating: ./data/celeba/images/047894.jpg
inflating: ./data/celeba/images/020233.jpg
inflating: ./data/celeba/images/127196.jpg
inflating: ./data/celeba/images/064459.jpg
inflating: ./data/celeba/images/154260.jpg
inflating: ./data/celeba/images/101495.jpg
inflating: ./data/celeba/images/075739.jpg
inflating: ./data/celeba/images/109776.jpg
inflating: ./data/celeba/images/066815.jpg
inflating: ./data/celeba/images/055627.jpg
inflating: ./data/celeba/images/190475.jpg
inflating: ./data/celeba/images/123064.jpg
inflating: ./data/celeba/images/109056.jpg
inflating: ./data/celeba/images/023228.jpg
inflating: ./data/celeba/images/183292.jpg
inflating: ./data/celeba/images/050210.jpg
inflating: ./data/celeba/images/182687.jpg
inflating: ./data/celeba/images/120127.jpg
inflating: ./data/celeba/images/172202.jpg
inflating: ./data/celeba/images/151463.jpg
inflating: ./data/celeba/images/139247.jpg
inflating: ./data/celeba/images/088682.jpg
inflating: ./data/celeba/images/095122.jpg
inflating: ./data/celeba/images/125740.jpg
inflating: ./data/celeba/images/075711.jpg
inflating: ./data/celeba/images/190675.jpg
inflating: ./data/celeba/images/097213.jpg
inflating: ./data/celeba/images/163017.jpg
inflating: ./data/celeba/images/035330.jpg
inflating: ./data/celeba/images/027068.jpg
inflating: ./data/celeba/images/075420.jpg
inflating: ./data/celeba/images/186857.jpg
inflating: ./data/celeba/images/049801.jpg
inflating: ./data/celeba/images/106317.jpg
inflating: ./data/celeba/images/014851.jpg
inflating: ./data/celeba/images/045264.jpg
inflating: ./data/celeba/images/035983.jpg
inflating: ./data/celeba/images/028668.jpg
inflating: ./data/celeba/images/142635.jpg
inflating: ./data/celeba/images/193747.jpg
inflating: ./data/celeba/images/098660.jpg
inflating: ./data/celeba/images/043261.jpg
inflating: ./data/celeba/images/091485.jpg
inflating: ./data/celeba/images/149354.jpg
inflating: ./data/celeba/images/075150.jpg
inflating: ./data/celeba/images/035693.jpg
inflating: ./data/celeba/images/019789.jpg
inflating: ./data/celeba/images/062651.jpg
inflating: ./data/celeba/images/105568.jpg
inflating: ./data/celeba/images/113788.jpg
inflating: ./data/celeba/images/040877.jpg
inflating: ./data/celeba/images/074822.jpg
inflating: ./data/celeba/images/185289.jpg
inflating: ./data/celeba/images/107555.jpg
inflating: ./data/celeba/images/096222.jpg
inflating: ./data/celeba/images/124431.jpg
inflating: ./data/celeba/images/094582.jpg
inflating: ./data/celeba/images/187538.jpg
inflating: ./data/celeba/images/054689.jpg
inflating: ./data/celeba/images/141979.jpg
inflating: ./data/celeba/images/071811.jpg
inflating: ./data/celeba/images/046404.jpg
inflating: ./data/celeba/images/058755.jpg
inflating: ./data/celeba/images/020657.jpg
inflating: ./data/celeba/images/186889.jpg
inflating: ./data/celeba/images/035199.jpg
inflating: ./data/celeba/images/043622.jpg
inflating: ./data/celeba/images/115014.jpg
inflating: ./data/celeba/images/037380.jpg
inflating: ./data/celeba/images/037594.jpg
inflating: ./data/celeba/images/072817.jpg
inflating: ./data/celeba/images/150959.jpg
inflating: ./data/celeba/images/048285.jpg
inflating: ./data/celeba/images/100791.jpg
inflating: ./data/celeba/images/060405.jpg
inflating: ./data/celeba/images/046432.jpg
inflating: ./data/celeba/images/154317.jpg
inflating: ./data/celeba/images/034880.jpg
inflating: ./data/celeba/images/198080.jpg
inflating: ./data/celeba/images/066353.jpg
inflating: ./data/celeba/images/156018.jpg
inflating: ./data/celeba/images/032961.jpg
inflating: ./data/celeba/images/015957.jpg
inflating: ./data/celeba/images/196691.jpg
inflating: ./data/celeba/images/082427.jpg
inflating: ./data/celeba/images/058762.jpg
inflating: ./data/celeba/images/143909.jpg
inflating: ./data/celeba/images/134515.jpg
inflating: ./data/celeba/images/115392.jpg
inflating: ./data/celeba/images/070684.jpg
inflating: ./data/celeba/images/138296.jpg
inflating: ./data/celeba/images/116567.jpg
inflating: ./data/celeba/images/189255.jpg
inflating: ./data/celeba/images/005016.jpg
inflating: ./data/celeba/images/078307.jpg
inflating: ./data/celeba/images/129359.jpg
inflating: ./data/celeba/images/134673.jpg
inflating: ./data/celeba/images/020892.jpg
inflating: ./data/celeba/images/106062.jpg
inflating: ./data/celeba/images/021740.jpg
inflating: ./data/celeba/images/072660.jpg
inflating: ./data/celeba/images/128453.jpg
inflating: ./data/celeba/images/120333.jpg
inflating: ./data/celeba/images/147059.jpg
inflating: ./data/celeba/images/148071.jpg
inflating: ./data/celeba/images/091596.jpg
inflating: ./data/celeba/images/117651.jpg
inflating: ./data/celeba/images/171968.jpg
inflating: ./data/celeba/images/164628.jpg
inflating: ./data/celeba/images/043878.jpg
inflating: ./data/celeba/images/152295.jpg
inflating: ./data/celeba/images/004820.jpg
inflating: ./data/celeba/images/067325.jpg
inflating: ./data/celeba/images/171612.jpg
inflating: ./data/celeba/images/189085.jpg
inflating: ./data/celeba/images/188564.jpg
inflating: ./data/celeba/images/173176.jpg
inflating: ./data/celeba/images/022897.jpg
inflating: ./data/celeba/images/174829.jpg
inflating: ./data/celeba/images/200392.jpg
inflating: ./data/celeba/images/047410.jpg
inflating: ./data/celeba/images/013776.jpg
inflating: ./data/celeba/images/120448.jpg
inflating: ./data/celeba/images/155448.jpg
inflating: ./data/celeba/images/043165.jpg
inflating: ./data/celeba/images/099377.jpg
inflating: ./data/celeba/images/183208.jpg
inflating: ./data/celeba/images/098284.jpg
inflating: ./data/celeba/images/006148.jpg
inflating: ./data/celeba/images/192325.jpg
inflating: ./data/celeba/images/160658.jpg
inflating: ./data/celeba/images/080140.jpg
inflating: ./data/celeba/images/022084.jpg
inflating: ./data/celeba/images/037325.jpg
inflating: ./data/celeba/images/088713.jpg
inflating: ./data/celeba/images/170209.jpg
inflating: ./data/celeba/images/069001.jpg
inflating: ./data/celeba/images/033184.jpg
inflating: ./data/celeba/images/038895.jpg
inflating: ./data/celeba/images/172316.jpg
inflating: ./data/celeba/images/198623.jpg
inflating: ./data/celeba/images/138705.jpg
inflating: ./data/celeba/images/135431.jpg
inflating: ./data/celeba/images/189182.jpg
inflating: ./data/celeba/images/035197.jpg
inflating: ./data/celeba/images/053195.jpg
inflating: ./data/celeba/images/105167.jpg
inflating: ./data/celeba/images/112990.jpg
inflating: ./data/celeba/images/040792.jpg
inflating: ./data/celeba/images/108901.jpg
inflating: ./data/celeba/images/149314.jpg
inflating: ./data/celeba/images/106118.jpg
inflating: ./data/celeba/images/123884.jpg
inflating: ./data/celeba/images/079945.jpg
inflating: ./data/celeba/images/014387.jpg
inflating: ./data/celeba/images/011213.jpg
inflating: ./data/celeba/images/165742.jpg
inflating: ./data/celeba/images/082626.jpg
inflating: ./data/celeba/images/053308.jpg
inflating: ./data/celeba/images/077125.jpg
inflating: ./data/celeba/images/087727.jpg
inflating: ./data/celeba/images/176761.jpg
inflating: ./data/celeba/images/028177.jpg
inflating: ./data/celeba/images/045055.jpg
inflating: ./data/celeba/images/097038.jpg
inflating: ./data/celeba/images/101594.jpg
inflating: ./data/celeba/images/020287.jpg
inflating: ./data/celeba/images/084449.jpg
inflating: ./data/celeba/images/133874.jpg
inflating: ./data/celeba/images/023963.jpg
inflating: ./data/celeba/images/189030.jpg
inflating: ./data/celeba/images/023926.jpg
inflating: ./data/celeba/images/175509.jpg
inflating: ./data/celeba/images/052390.jpg
inflating: ./data/celeba/images/021341.jpg
inflating: ./data/celeba/images/169771.jpg
inflating: ./data/celeba/images/162291.jpg
inflating: ./data/celeba/images/108957.jpg
inflating: ./data/celeba/images/151269.jpg
inflating: ./data/celeba/images/182135.jpg
inflating: ./data/celeba/images/075886.jpg
inflating: ./data/celeba/images/051114.jpg
inflating: ./data/celeba/images/190479.jpg
inflating: ./data/celeba/images/101139.jpg
inflating: ./data/celeba/images/090936.jpg
inflating: ./data/celeba/images/168040.jpg
inflating: ./data/celeba/images/101583.jpg
inflating: ./data/celeba/images/099209.jpg
inflating: ./data/celeba/images/045111.jpg
inflating: ./data/celeba/images/116829.jpg
inflating: ./data/celeba/images/077926.jpg
inflating: ./data/celeba/images/067078.jpg
inflating: ./data/celeba/images/175883.jpg
inflating: ./data/celeba/images/136641.jpg
inflating: ./data/celeba/images/121069.jpg
inflating: ./data/celeba/images/180425.jpg
inflating: ./data/celeba/images/046243.jpg
inflating: ./data/celeba/images/075282.jpg
inflating: ./data/celeba/images/129447.jpg
inflating: ./data/celeba/images/040799.jpg
inflating: ./data/celeba/images/011625.jpg
inflating: ./data/celeba/images/120145.jpg
inflating: ./data/celeba/images/079829.jpg
inflating: ./data/celeba/images/023311.jpg
inflating: ./data/celeba/images/154389.jpg
inflating: ./data/celeba/images/109314.jpg
inflating: ./data/celeba/images/012201.jpg
inflating: ./data/celeba/images/118233.jpg
inflating: ./data/celeba/images/077831.jpg
inflating: ./data/celeba/images/130971.jpg
inflating: ./data/celeba/images/004931.jpg
inflating: ./data/celeba/images/052561.jpg
inflating: ./data/celeba/images/182664.jpg
inflating: ./data/celeba/images/147057.jpg
inflating: ./data/celeba/images/011090.jpg
inflating: ./data/celeba/images/186161.jpg
inflating: ./data/celeba/images/065740.jpg
inflating: ./data/celeba/images/059756.jpg
inflating: ./data/celeba/images/182422.jpg
inflating: ./data/celeba/images/045724.jpg
inflating: ./data/celeba/images/133387.jpg
inflating: ./data/celeba/images/029332.jpg
inflating: ./data/celeba/images/150498.jpg
inflating: ./data/celeba/images/192811.jpg
inflating: ./data/celeba/images/188104.jpg
inflating: ./data/celeba/images/181122.jpg
inflating: ./data/celeba/images/067739.jpg
inflating: ./data/celeba/images/157245.jpg
inflating: ./data/celeba/images/122483.jpg
inflating: ./data/celeba/images/044226.jpg
inflating: ./data/celeba/images/096232.jpg
inflating: ./data/celeba/images/104460.jpg
inflating: ./data/celeba/images/196386.jpg
inflating: ./data/celeba/images/113150.jpg
inflating: ./data/celeba/images/023198.jpg
inflating: ./data/celeba/images/039930.jpg
inflating: ./data/celeba/images/147217.jpg
inflating: ./data/celeba/images/021931.jpg
inflating: ./data/celeba/images/199931.jpg
inflating: ./data/celeba/images/134200.jpg
inflating: ./data/celeba/images/009209.jpg
inflating: ./data/celeba/images/013096.jpg
inflating: ./data/celeba/images/042326.jpg
inflating: ./data/celeba/images/002232.jpg
inflating: ./data/celeba/images/034450.jpg
inflating: ./data/celeba/images/189675.jpg
inflating: ./data/celeba/images/030098.jpg
inflating: ./data/celeba/images/012170.jpg
inflating: ./data/celeba/images/156160.jpg
inflating: ./data/celeba/images/092899.jpg
inflating: ./data/celeba/images/158099.jpg
inflating: ./data/celeba/images/101302.jpg
inflating: ./data/celeba/images/000728.jpg
inflating: ./data/celeba/images/131715.jpg
inflating: ./data/celeba/images/106448.jpg
inflating: ./data/celeba/images/155034.jpg
inflating: ./data/celeba/images/014306.jpg
inflating: ./data/celeba/images/017504.jpg
inflating: ./data/celeba/images/000882.jpg
inflating: ./data/celeba/images/093009.jpg
inflating: ./data/celeba/images/049679.jpg
inflating: ./data/celeba/images/028230.jpg
inflating: ./data/celeba/images/083592.jpg
inflating: ./data/celeba/images/108783.jpg
inflating: ./data/celeba/images/171811.jpg
inflating: ./data/celeba/images/188249.jpg
inflating: ./data/celeba/images/062060.jpg
inflating: ./data/celeba/images/200435.jpg
inflating: ./data/celeba/images/014554.jpg
inflating: ./data/celeba/images/095876.jpg
inflating: ./data/celeba/images/047577.jpg
inflating: ./data/celeba/images/166893.jpg
inflating: ./data/celeba/images/004995.jpg
inflating: ./data/celeba/images/142474.jpg
inflating: ./data/celeba/images/098483.jpg
inflating: ./data/celeba/images/136748.jpg
inflating: ./data/celeba/images/122971.jpg
inflating: ./data/celeba/images/030676.jpg
inflating: ./data/celeba/images/200881.jpg
inflating: ./data/celeba/images/055237.jpg
inflating: ./data/celeba/images/176368.jpg
inflating: ./data/celeba/images/019156.jpg
inflating: ./data/celeba/images/007320.jpg
inflating: ./data/celeba/images/086453.jpg
inflating: ./data/celeba/images/169482.jpg
inflating: ./data/celeba/images/104331.jpg
inflating: ./data/celeba/images/142562.jpg
inflating: ./data/celeba/images/036010.jpg
inflating: ./data/celeba/images/099859.jpg
inflating: ./data/celeba/images/129902.jpg
inflating: ./data/celeba/images/002744.jpg
inflating: ./data/celeba/images/044268.jpg
inflating: ./data/celeba/images/113635.jpg
inflating: ./data/celeba/images/126482.jpg
inflating: ./data/celeba/images/118344.jpg
inflating: ./data/celeba/images/007620.jpg
inflating: ./data/celeba/images/081450.jpg
inflating: ./data/celeba/images/015806.jpg
inflating: ./data/celeba/images/092088.jpg
inflating: ./data/celeba/images/033784.jpg
inflating: ./data/celeba/images/082188.jpg
inflating: ./data/celeba/images/069593.jpg
inflating: ./data/celeba/images/043544.jpg
inflating: ./data/celeba/images/036510.jpg
inflating: ./data/celeba/images/054637.jpg
inflating: ./data/celeba/images/160648.jpg
inflating: ./data/celeba/images/049130.jpg
inflating: ./data/celeba/images/126009.jpg
inflating: ./data/celeba/images/010461.jpg
inflating: ./data/celeba/images/132819.jpg
inflating: ./data/celeba/images/180521.jpg
inflating: ./data/celeba/images/195001.jpg
inflating: ./data/celeba/images/182275.jpg
inflating: ./data/celeba/images/195179.jpg
inflating: ./data/celeba/images/180722.jpg
inflating: ./data/celeba/images/156488.jpg
inflating: ./data/celeba/images/154692.jpg
inflating: ./data/celeba/images/135814.jpg
inflating: ./data/celeba/images/002299.jpg
inflating: ./data/celeba/images/023988.jpg
inflating: ./data/celeba/images/036100.jpg
inflating: ./data/celeba/images/142975.jpg
inflating: ./data/celeba/images/157674.jpg
inflating: ./data/celeba/images/121523.jpg
inflating: ./data/celeba/images/170978.jpg
inflating: ./data/celeba/images/133391.jpg
inflating: ./data/celeba/images/075125.jpg
inflating: ./data/celeba/images/191664.jpg
inflating: ./data/celeba/images/130322.jpg
inflating: ./data/celeba/images/018786.jpg
inflating: ./data/celeba/images/129761.jpg
inflating: ./data/celeba/images/072289.jpg
inflating: ./data/celeba/images/012947.jpg
inflating: ./data/celeba/images/196151.jpg
inflating: ./data/celeba/images/069852.jpg
inflating: ./data/celeba/images/129161.jpg
inflating: ./data/celeba/images/069837.jpg
inflating: ./data/celeba/images/103980.jpg
inflating: ./data/celeba/images/150637.jpg
inflating: ./data/celeba/images/033552.jpg
inflating: ./data/celeba/images/105153.jpg
inflating: ./data/celeba/images/121267.jpg
inflating: ./data/celeba/images/023665.jpg
inflating: ./data/celeba/images/002196.jpg
inflating: ./data/celeba/images/059076.jpg
inflating: ./data/celeba/images/129478.jpg
inflating: ./data/celeba/images/198736.jpg
inflating: ./data/celeba/images/149566.jpg
inflating: ./data/celeba/images/009707.jpg
inflating: ./data/celeba/images/107473.jpg
inflating: ./data/celeba/images/114927.jpg
inflating: ./data/celeba/images/051625.jpg
inflating: ./data/celeba/images/085457.jpg
inflating: ./data/celeba/images/128860.jpg
inflating: ./data/celeba/images/090243.jpg
inflating: ./data/celeba/images/125415.jpg
inflating: ./data/celeba/images/017801.jpg
inflating: ./data/celeba/images/092256.jpg
inflating: ./data/celeba/images/053183.jpg
inflating: ./data/celeba/images/087765.jpg
inflating: ./data/celeba/images/118660.jpg
inflating: ./data/celeba/images/137010.jpg
inflating: ./data/celeba/images/163085.jpg
inflating: ./data/celeba/images/150909.jpg
inflating: ./data/celeba/images/041566.jpg
inflating: ./data/celeba/images/113247.jpg
inflating: ./data/celeba/images/050884.jpg
inflating: ./data/celeba/images/194894.jpg
inflating: ./data/celeba/images/168291.jpg
inflating: ./data/celeba/images/180923.jpg
inflating: ./data/celeba/images/062240.jpg
inflating: ./data/celeba/images/105860.jpg
inflating: ./data/celeba/images/163261.jpg
inflating: ./data/celeba/images/028803.jpg
inflating: ./data/celeba/images/115627.jpg
inflating: ./data/celeba/images/098252.jpg
inflating: ./data/celeba/images/139658.jpg
inflating: ./data/celeba/images/137400.jpg
inflating: ./data/celeba/images/059661.jpg
inflating: ./data/celeba/images/133242.jpg
inflating: ./data/celeba/images/172912.jpg
inflating: ./data/celeba/images/168668.jpg
inflating: ./data/celeba/images/046663.jpg
inflating: ./data/celeba/images/104734.jpg
inflating: ./data/celeba/images/079643.jpg
inflating: ./data/celeba/images/132444.jpg
inflating: ./data/celeba/images/040192.jpg
inflating: ./data/celeba/images/130039.jpg
inflating: ./data/celeba/images/142605.jpg
inflating: ./data/celeba/images/068335.jpg
inflating: ./data/celeba/images/156416.jpg
inflating: ./data/celeba/images/008970.jpg
inflating: ./data/celeba/images/114685.jpg
inflating: ./data/celeba/images/159143.jpg
inflating: ./data/celeba/images/026322.jpg
inflating: ./data/celeba/images/090083.jpg
inflating: ./data/celeba/images/097845.jpg
inflating: ./data/celeba/images/031901.jpg
inflating: ./data/celeba/images/201962.jpg
inflating: ./data/celeba/images/055797.jpg
inflating: ./data/celeba/images/006242.jpg
inflating: ./data/celeba/images/082010.jpg
inflating: ./data/celeba/images/150226.jpg
inflating: ./data/celeba/images/134105.jpg
inflating: ./data/celeba/images/003074.jpg
inflating: ./data/celeba/images/182516.jpg
inflating: ./data/celeba/images/144545.jpg
inflating: ./data/celeba/images/117475.jpg
inflating: ./data/celeba/images/110014.jpg
inflating: ./data/celeba/images/106770.jpg
inflating: ./data/celeba/images/031947.jpg
inflating: ./data/celeba/images/021758.jpg
inflating: ./data/celeba/images/019801.jpg
inflating: ./data/celeba/images/033827.jpg
inflating: ./data/celeba/images/095179.jpg
inflating: ./data/celeba/images/069182.jpg
inflating: ./data/celeba/images/200675.jpg
inflating: ./data/celeba/images/179142.jpg
inflating: ./data/celeba/images/185429.jpg
inflating: ./data/celeba/images/095197.jpg
inflating: ./data/celeba/images/173819.jpg
inflating: ./data/celeba/images/099009.jpg
inflating: ./data/celeba/images/063285.jpg
inflating: ./data/celeba/images/167195.jpg
inflating: ./data/celeba/images/091880.jpg
inflating: ./data/celeba/images/056468.jpg
inflating: ./data/celeba/images/012540.jpg
inflating: ./data/celeba/images/166760.jpg
inflating: ./data/celeba/images/141558.jpg
inflating: ./data/celeba/images/096028.jpg
inflating: ./data/celeba/images/188696.jpg
inflating: ./data/celeba/images/138754.jpg
inflating: ./data/celeba/images/089027.jpg
inflating: ./data/celeba/images/014449.jpg
inflating: ./data/celeba/images/007211.jpg
inflating: ./data/celeba/images/180137.jpg
inflating: ./data/celeba/images/050882.jpg
inflating: ./data/celeba/images/060675.jpg
inflating: ./data/celeba/images/069823.jpg
inflating: ./data/celeba/images/082844.jpg
inflating: ./data/celeba/images/025662.jpg
inflating: ./data/celeba/images/052126.jpg
inflating: ./data/celeba/images/044738.jpg
inflating: ./data/celeba/images/118097.jpg
inflating: ./data/celeba/images/200932.jpg
inflating: ./data/celeba/images/174745.jpg
inflating: ./data/celeba/images/183424.jpg
inflating: ./data/celeba/images/055624.jpg
inflating: ./data/celeba/images/037085.jpg
inflating: ./data/celeba/images/001659.jpg
inflating: ./data/celeba/images/044919.jpg
inflating: ./data/celeba/images/137192.jpg
inflating: ./data/celeba/images/083987.jpg
inflating: ./data/celeba/images/126104.jpg
inflating: ./data/celeba/images/104898.jpg
inflating: ./data/celeba/images/192543.jpg
inflating: ./data/celeba/images/189893.jpg
inflating: ./data/celeba/images/079777.jpg
inflating: ./data/celeba/images/098062.jpg
inflating: ./data/celeba/images/167729.jpg
inflating: ./data/celeba/images/164764.jpg
inflating: ./data/celeba/images/192545.jpg
inflating: ./data/celeba/images/188149.jpg
inflating: ./data/celeba/images/160033.jpg
inflating: ./data/celeba/images/133491.jpg
inflating: ./data/celeba/images/190709.jpg
inflating: ./data/celeba/images/164942.jpg
inflating: ./data/celeba/images/037376.jpg
inflating: ./data/celeba/images/199240.jpg
inflating: ./data/celeba/images/071321.jpg
inflating: ./data/celeba/images/066422.jpg
inflating: ./data/celeba/images/099407.jpg
inflating: ./data/celeba/images/178004.jpg
inflating: ./data/celeba/images/166076.jpg
inflating: ./data/celeba/images/197331.jpg
inflating: ./data/celeba/images/114085.jpg
inflating: ./data/celeba/images/064835.jpg
inflating: ./data/celeba/images/131277.jpg
inflating: ./data/celeba/images/041084.jpg
inflating: ./data/celeba/images/044916.jpg
inflating: ./data/celeba/images/038607.jpg
inflating: ./data/celeba/images/150779.jpg
inflating: ./data/celeba/images/131029.jpg
inflating: ./data/celeba/images/132945.jpg
inflating: ./data/celeba/images/174191.jpg
inflating: ./data/celeba/images/086389.jpg
inflating: ./data/celeba/images/136409.jpg
inflating: ./data/celeba/images/095680.jpg
inflating: ./data/celeba/images/033838.jpg
inflating: ./data/celeba/images/115832.jpg
inflating: ./data/celeba/images/083116.jpg
inflating: ./data/celeba/images/107536.jpg
inflating: ./data/celeba/images/044066.jpg
inflating: ./data/celeba/images/180984.jpg
inflating: ./data/celeba/images/051737.jpg
inflating: ./data/celeba/images/186668.jpg
inflating: ./data/celeba/images/063677.jpg
inflating: ./data/celeba/images/172385.jpg
inflating: ./data/celeba/images/001515.jpg
inflating: ./data/celeba/images/104364.jpg
inflating: ./data/celeba/images/086457.jpg
inflating: ./data/celeba/images/068612.jpg
inflating: ./data/celeba/images/059908.jpg
inflating: ./data/celeba/images/180776.jpg
inflating: ./data/celeba/images/173255.jpg
inflating: ./data/celeba/images/124084.jpg
inflating: ./data/celeba/images/087842.jpg
inflating: ./data/celeba/images/173212.jpg
inflating: ./data/celeba/images/151130.jpg
inflating: ./data/celeba/images/079592.jpg
inflating: ./data/celeba/images/197222.jpg
inflating: ./data/celeba/images/022724.jpg
inflating: ./data/celeba/images/088401.jpg
inflating: ./data/celeba/images/014591.jpg
inflating: ./data/celeba/images/064762.jpg
inflating: ./data/celeba/images/127654.jpg
inflating: ./data/celeba/images/053486.jpg
inflating: ./data/celeba/images/188777.jpg
inflating: ./data/celeba/images/165720.jpg
inflating: ./data/celeba/images/155560.jpg
inflating: ./data/celeba/images/168496.jpg
inflating: ./data/celeba/images/184886.jpg
inflating: ./data/celeba/images/113685.jpg
inflating: ./data/celeba/images/154983.jpg
inflating: ./data/celeba/images/159275.jpg
inflating: ./data/celeba/images/010532.jpg
inflating: ./data/celeba/images/076353.jpg
inflating: ./data/celeba/images/102379.jpg
inflating: ./data/celeba/images/145525.jpg
inflating: ./data/celeba/images/121198.jpg
inflating: ./data/celeba/images/175549.jpg
inflating: ./data/celeba/images/002516.jpg
inflating: ./data/celeba/images/040728.jpg
inflating: ./data/celeba/images/197242.jpg
inflating: ./data/celeba/images/150450.jpg
inflating: ./data/celeba/images/025727.jpg
inflating: ./data/celeba/images/095064.jpg
inflating: ./data/celeba/images/052441.jpg
inflating: ./data/celeba/images/086061.jpg
inflating: ./data/celeba/images/053830.jpg
inflating: ./data/celeba/images/159359.jpg
inflating: ./data/celeba/images/152301.jpg
inflating: ./data/celeba/images/033928.jpg
inflating: ./data/celeba/images/082392.jpg
inflating: ./data/celeba/images/047100.jpg
inflating: ./data/celeba/images/142745.jpg
inflating: ./data/celeba/images/124777.jpg
inflating: ./data/celeba/images/064535.jpg
inflating: ./data/celeba/images/121790.jpg
inflating: ./data/celeba/images/133342.jpg
inflating: ./data/celeba/images/040262.jpg
inflating: ./data/celeba/images/123724.jpg
inflating: ./data/celeba/images/152365.jpg
inflating: ./data/celeba/images/118785.jpg
inflating: ./data/celeba/images/122413.jpg
inflating: ./data/celeba/images/075952.jpg
inflating: ./data/celeba/images/044725.jpg
inflating: ./data/celeba/images/157448.jpg
inflating: ./data/celeba/images/039778.jpg
inflating: ./data/celeba/images/039491.jpg
inflating: ./data/celeba/images/152187.jpg
inflating: ./data/celeba/images/093769.jpg
inflating: ./data/celeba/images/059391.jpg
inflating: ./data/celeba/images/146278.jpg
inflating: ./data/celeba/images/046152.jpg
inflating: ./data/celeba/images/098900.jpg
inflating: ./data/celeba/images/007834.jpg
inflating: ./data/celeba/images/170740.jpg
inflating: ./data/celeba/images/053318.jpg
inflating: ./data/celeba/images/202513.jpg
inflating: ./data/celeba/images/102331.jpg
inflating: ./data/celeba/images/048048.jpg
inflating: ./data/celeba/images/177729.jpg
inflating: ./data/celeba/images/069338.jpg
inflating: ./data/celeba/images/164111.jpg
inflating: ./data/celeba/images/069132.jpg
inflating: ./data/celeba/images/198150.jpg
inflating: ./data/celeba/images/085661.jpg
inflating: ./data/celeba/images/026918.jpg
inflating: ./data/celeba/images/143600.jpg
inflating: ./data/celeba/images/179126.jpg
inflating: ./data/celeba/images/184060.jpg
inflating: ./data/celeba/images/163578.jpg
inflating: ./data/celeba/images/034016.jpg
inflating: ./data/celeba/images/087359.jpg
inflating: ./data/celeba/images/018221.jpg
inflating: ./data/celeba/images/169091.jpg
inflating: ./data/celeba/images/066483.jpg
inflating: ./data/celeba/images/108641.jpg
inflating: ./data/celeba/images/109353.jpg
inflating: ./data/celeba/images/082630.jpg
inflating: ./data/celeba/images/117146.jpg
inflating: ./data/celeba/images/052000.jpg
inflating: ./data/celeba/images/086876.jpg
inflating: ./data/celeba/images/161239.jpg
inflating: ./data/celeba/images/181594.jpg
inflating: ./data/celeba/images/191702.jpg
inflating: ./data/celeba/images/074305.jpg
inflating: ./data/celeba/images/180688.jpg
inflating: ./data/celeba/images/010805.jpg
inflating: ./data/celeba/images/113455.jpg
inflating: ./data/celeba/images/187672.jpg
inflating: ./data/celeba/images/110604.jpg
inflating: ./data/celeba/images/072526.jpg
inflating: ./data/celeba/images/036137.jpg
inflating: ./data/celeba/images/186717.jpg
inflating: ./data/celeba/images/067081.jpg
inflating: ./data/celeba/images/169705.jpg
inflating: ./data/celeba/images/178094.jpg
inflating: ./data/celeba/images/157898.jpg
inflating: ./data/celeba/images/077532.jpg
inflating: ./data/celeba/images/103117.jpg
inflating: ./data/celeba/images/050230.jpg
inflating: ./data/celeba/images/012837.jpg
inflating: ./data/celeba/images/097728.jpg
inflating: ./data/celeba/images/143390.jpg
inflating: ./data/celeba/images/179422.jpg
inflating: ./data/celeba/images/094556.jpg
inflating: ./data/celeba/images/198992.jpg
inflating: ./data/celeba/images/135461.jpg
inflating: ./data/celeba/images/021892.jpg
inflating: ./data/celeba/images/143672.jpg
inflating: ./data/celeba/images/201820.jpg
inflating: ./data/celeba/images/019097.jpg
inflating: ./data/celeba/images/161293.jpg
inflating: ./data/celeba/images/144024.jpg
inflating: ./data/celeba/images/159812.jpg
inflating: ./data/celeba/images/134583.jpg
inflating: ./data/celeba/images/010955.jpg
inflating: ./data/celeba/images/042942.jpg
inflating: ./data/celeba/images/050269.jpg
inflating: ./data/celeba/images/174991.jpg
inflating: ./data/celeba/images/030135.jpg
inflating: ./data/celeba/images/021871.jpg
inflating: ./data/celeba/images/137114.jpg
inflating: ./data/celeba/images/081023.jpg
inflating: ./data/celeba/images/152044.jpg
inflating: ./data/celeba/images/187217.jpg
inflating: ./data/celeba/images/110393.jpg
inflating: ./data/celeba/images/162770.jpg
inflating: ./data/celeba/images/053532.jpg
inflating: ./data/celeba/images/022204.jpg
inflating: ./data/celeba/images/162309.jpg
inflating: ./data/celeba/images/079185.jpg
inflating: ./data/celeba/images/114032.jpg
inflating: ./data/celeba/images/165140.jpg
inflating: ./data/celeba/images/063698.jpg
inflating: ./data/celeba/images/055255.jpg
inflating: ./data/celeba/images/065391.jpg
inflating: ./data/celeba/images/168284.jpg
inflating: ./data/celeba/images/154881.jpg
inflating: ./data/celeba/images/033759.jpg
inflating: ./data/celeba/images/114670.jpg
inflating: ./data/celeba/images/099112.jpg
inflating: ./data/celeba/images/119536.jpg
inflating: ./data/celeba/images/087999.jpg
inflating: ./data/celeba/images/153393.jpg
inflating: ./data/celeba/images/063162.jpg
inflating: ./data/celeba/images/117806.jpg
inflating: ./data/celeba/images/130606.jpg
inflating: ./data/celeba/images/191672.jpg
inflating: ./data/celeba/images/030579.jpg
inflating: ./data/celeba/images/096701.jpg
inflating: ./data/celeba/images/116538.jpg
inflating: ./data/celeba/images/163775.jpg
inflating: ./data/celeba/images/189059.jpg
inflating: ./data/celeba/images/118831.jpg
inflating: ./data/celeba/images/064775.jpg
inflating: ./data/celeba/images/079149.jpg
inflating: ./data/celeba/images/074470.jpg
inflating: ./data/celeba/images/178279.jpg
inflating: ./data/celeba/images/042697.jpg
inflating: ./data/celeba/images/145576.jpg
inflating: ./data/celeba/images/201785.jpg
inflating: ./data/celeba/images/132210.jpg
inflating: ./data/celeba/images/194555.jpg
inflating: ./data/celeba/images/124499.jpg
inflating: ./data/celeba/images/202194.jpg
inflating: ./data/celeba/images/135099.jpg
inflating: ./data/celeba/images/000702.jpg
inflating: ./data/celeba/images/106358.jpg
inflating: ./data/celeba/images/090073.jpg
inflating: ./data/celeba/images/161910.jpg
inflating: ./data/celeba/images/161136.jpg
inflating: ./data/celeba/images/157741.jpg
inflating: ./data/celeba/images/172055.jpg
inflating: ./data/celeba/images/108082.jpg
inflating: ./data/celeba/images/047296.jpg
inflating: ./data/celeba/images/054188.jpg
inflating: ./data/celeba/images/167073.jpg
inflating: ./data/celeba/images/019378.jpg
inflating: ./data/celeba/images/036560.jpg
inflating: ./data/celeba/images/055685.jpg
inflating: ./data/celeba/images/102022.jpg
inflating: ./data/celeba/images/036052.jpg
inflating: ./data/celeba/images/060779.jpg
inflating: ./data/celeba/images/122117.jpg
inflating: ./data/celeba/images/171042.jpg
inflating: ./data/celeba/images/012824.jpg
inflating: ./data/celeba/images/118105.jpg
inflating: ./data/celeba/images/112497.jpg
inflating: ./data/celeba/images/051621.jpg
inflating: ./data/celeba/images/049612.jpg
inflating: ./data/celeba/images/187187.jpg
inflating: ./data/celeba/images/144944.jpg
inflating: ./data/celeba/images/017676.jpg
inflating: ./data/celeba/images/028858.jpg
inflating: ./data/celeba/images/144320.jpg
inflating: ./data/celeba/images/019952.jpg
inflating: ./data/celeba/images/101348.jpg
inflating: ./data/celeba/images/185638.jpg
inflating: ./data/celeba/images/174151.jpg
inflating: ./data/celeba/images/188657.jpg
inflating: ./data/celeba/images/149145.jpg
inflating: ./data/celeba/images/193032.jpg
inflating: ./data/celeba/images/105633.jpg
inflating: ./data/celeba/images/041811.jpg
inflating: ./data/celeba/images/082933.jpg
inflating: ./data/celeba/images/117113.jpg
inflating: ./data/celeba/images/044504.jpg
inflating: ./data/celeba/images/149653.jpg
inflating: ./data/celeba/images/153370.jpg
inflating: ./data/celeba/images/038764.jpg
inflating: ./data/celeba/images/024071.jpg
inflating: ./data/celeba/images/194404.jpg
inflating: ./data/celeba/images/043829.jpg
inflating: ./data/celeba/images/015326.jpg
inflating: ./data/celeba/images/017150.jpg
inflating: ./data/celeba/images/132416.jpg
inflating: ./data/celeba/images/175344.jpg
inflating: ./data/celeba/images/052286.jpg
inflating: ./data/celeba/images/147908.jpg
inflating: ./data/celeba/images/013564.jpg
inflating: ./data/celeba/images/173904.jpg
inflating: ./data/celeba/images/096397.jpg
inflating: ./data/celeba/images/080867.jpg
inflating: ./data/celeba/images/135958.jpg
inflating: ./data/celeba/images/197938.jpg
inflating: ./data/celeba/images/046346.jpg
inflating: ./data/celeba/images/147781.jpg
inflating: ./data/celeba/images/013253.jpg
inflating: ./data/celeba/images/004912.jpg
inflating: ./data/celeba/images/184932.jpg
inflating: ./data/celeba/images/015896.jpg
inflating: ./data/celeba/images/000343.jpg
inflating: ./data/celeba/images/184823.jpg
inflating: ./data/celeba/images/186576.jpg
inflating: ./data/celeba/images/108300.jpg
inflating: ./data/celeba/images/178117.jpg
inflating: ./data/celeba/images/164950.jpg
inflating: ./data/celeba/images/017867.jpg
inflating: ./data/celeba/images/064006.jpg
inflating: ./data/celeba/images/073527.jpg
inflating: ./data/celeba/images/141289.jpg
inflating: ./data/celeba/images/140234.jpg
inflating: ./data/celeba/images/080706.jpg
inflating: ./data/celeba/images/051823.jpg
inflating: ./data/celeba/images/087429.jpg
inflating: ./data/celeba/images/183297.jpg
inflating: ./data/celeba/images/086608.jpg
inflating: ./data/celeba/images/086708.jpg
inflating: ./data/celeba/images/040661.jpg
inflating: ./data/celeba/images/034678.jpg
inflating: ./data/celeba/images/163785.jpg
inflating: ./data/celeba/images/181729.jpg
inflating: ./data/celeba/images/014515.jpg
inflating: ./data/celeba/images/072248.jpg
inflating: ./data/celeba/images/062953.jpg
inflating: ./data/celeba/images/176951.jpg
inflating: ./data/celeba/images/006990.jpg
inflating: ./data/celeba/images/159410.jpg
inflating: ./data/celeba/images/032964.jpg
inflating: ./data/celeba/images/107700.jpg
inflating: ./data/celeba/images/031981.jpg
inflating: ./data/celeba/images/057922.jpg
inflating: ./data/celeba/images/115199.jpg
inflating: ./data/celeba/images/164222.jpg
inflating: ./data/celeba/images/026708.jpg
inflating: ./data/celeba/images/132248.jpg
inflating: ./data/celeba/images/196889.jpg
inflating: ./data/celeba/images/082917.jpg
inflating: ./data/celeba/images/085653.jpg
inflating: ./data/celeba/images/032326.jpg
inflating: ./data/celeba/images/058818.jpg
inflating: ./data/celeba/images/118319.jpg
inflating: ./data/celeba/images/194065.jpg
inflating: ./data/celeba/images/198562.jpg
inflating: ./data/celeba/images/131908.jpg
inflating: ./data/celeba/images/134667.jpg
inflating: ./data/celeba/images/094991.jpg
inflating: ./data/celeba/images/174598.jpg
inflating: ./data/celeba/images/202383.jpg
inflating: ./data/celeba/images/058641.jpg
inflating: ./data/celeba/images/169788.jpg
inflating: ./data/celeba/images/101888.jpg
inflating: ./data/celeba/images/054944.jpg
inflating: ./data/celeba/images/039874.jpg
inflating: ./data/celeba/images/127526.jpg
inflating: ./data/celeba/images/004540.jpg
inflating: ./data/celeba/images/171918.jpg
inflating: ./data/celeba/images/114877.jpg
inflating: ./data/celeba/images/063512.jpg
inflating: ./data/celeba/images/175144.jpg
inflating: ./data/celeba/images/040968.jpg
inflating: ./data/celeba/images/011521.jpg
inflating: ./data/celeba/images/133211.jpg
inflating: ./data/celeba/images/102795.jpg
inflating: ./data/celeba/images/076855.jpg
inflating: ./data/celeba/images/130984.jpg
inflating: ./data/celeba/images/135266.jpg
inflating: ./data/celeba/images/186135.jpg
inflating: ./data/celeba/images/165100.jpg
inflating: ./data/celeba/images/165404.jpg
inflating: ./data/celeba/images/086634.jpg
inflating: ./data/celeba/images/059238.jpg
inflating: ./data/celeba/images/111747.jpg
inflating: ./data/celeba/images/014666.jpg
inflating: ./data/celeba/images/140009.jpg
inflating: ./data/celeba/images/188432.jpg
inflating: ./data/celeba/images/124422.jpg
inflating: ./data/celeba/images/035436.jpg
inflating: ./data/celeba/images/163003.jpg
inflating: ./data/celeba/images/137081.jpg
inflating: ./data/celeba/images/079565.jpg
inflating: ./data/celeba/images/033503.jpg
inflating: ./data/celeba/images/154112.jpg
inflating: ./data/celeba/images/085360.jpg
inflating: ./data/celeba/images/001491.jpg
inflating: ./data/celeba/images/106511.jpg
inflating: ./data/celeba/images/076435.jpg
inflating: ./data/celeba/images/155987.jpg
inflating: ./data/celeba/images/165312.jpg
inflating: ./data/celeba/images/115144.jpg
inflating: ./data/celeba/images/115300.jpg
inflating: ./data/celeba/images/172787.jpg
inflating: ./data/celeba/images/062819.jpg
inflating: ./data/celeba/images/155701.jpg
inflating: ./data/celeba/images/183885.jpg
inflating: ./data/celeba/images/072370.jpg
inflating: ./data/celeba/images/076181.jpg
inflating: ./data/celeba/images/116995.jpg
inflating: ./data/celeba/images/169377.jpg
inflating: ./data/celeba/images/145055.jpg
inflating: ./data/celeba/images/179979.jpg
inflating: ./data/celeba/images/199969.jpg
inflating: ./data/celeba/images/031664.jpg
inflating: ./data/celeba/images/018415.jpg
inflating: ./data/celeba/images/199256.jpg
inflating: ./data/celeba/images/002367.jpg
inflating: ./data/celeba/images/199378.jpg
inflating: ./data/celeba/images/171658.jpg
inflating: ./data/celeba/images/145903.jpg
inflating: ./data/celeba/images/018272.jpg
inflating: ./data/celeba/images/095923.jpg
inflating: ./data/celeba/images/127687.jpg
inflating: ./data/celeba/images/127991.jpg
inflating: ./data/celeba/images/047368.jpg
inflating: ./data/celeba/images/001104.jpg
inflating: ./data/celeba/images/097963.jpg
inflating: ./data/celeba/images/014215.jpg
inflating: ./data/celeba/images/087673.jpg
inflating: ./data/celeba/images/133530.jpg
inflating: ./data/celeba/images/159138.jpg
inflating: ./data/celeba/images/040560.jpg
inflating: ./data/celeba/images/096582.jpg
inflating: ./data/celeba/images/164227.jpg
inflating: ./data/celeba/images/094364.jpg
inflating: ./data/celeba/images/057949.jpg
inflating: ./data/celeba/images/015913.jpg
inflating: ./data/celeba/images/038471.jpg
inflating: ./data/celeba/images/130568.jpg
inflating: ./data/celeba/images/045870.jpg
inflating: ./data/celeba/images/022286.jpg
inflating: ./data/celeba/images/151712.jpg
inflating: ./data/celeba/images/120071.jpg
inflating: ./data/celeba/images/009346.jpg
inflating: ./data/celeba/images/032285.jpg
inflating: ./data/celeba/images/021095.jpg
inflating: ./data/celeba/images/143567.jpg
inflating: ./data/celeba/images/023486.jpg
inflating: ./data/celeba/images/171771.jpg
inflating: ./data/celeba/images/094374.jpg
inflating: ./data/celeba/images/051467.jpg
inflating: ./data/celeba/images/020669.jpg
inflating: ./data/celeba/images/015319.jpg
inflating: ./data/celeba/images/200945.jpg
inflating: ./data/celeba/images/133086.jpg
inflating: ./data/celeba/images/150128.jpg
inflating: ./data/celeba/images/073693.jpg
inflating: ./data/celeba/images/008925.jpg
inflating: ./data/celeba/images/004153.jpg
inflating: ./data/celeba/images/189818.jpg
inflating: ./data/celeba/images/046289.jpg
inflating: ./data/celeba/images/028432.jpg
inflating: ./data/celeba/images/078372.jpg
inflating: ./data/celeba/images/154398.jpg
inflating: ./data/celeba/images/129855.jpg
inflating: ./data/celeba/images/083353.jpg
inflating: ./data/celeba/images/032532.jpg
inflating: ./data/celeba/images/169159.jpg
inflating: ./data/celeba/images/196111.jpg
inflating: ./data/celeba/images/033858.jpg
inflating: ./data/celeba/images/008879.jpg
inflating: ./data/celeba/images/103278.jpg
inflating: ./data/celeba/images/119065.jpg
inflating: ./data/celeba/images/010676.jpg
inflating: ./data/celeba/images/046564.jpg
inflating: ./data/celeba/images/060209.jpg
inflating: ./data/celeba/images/032985.jpg
inflating: ./data/celeba/images/164041.jpg
inflating: ./data/celeba/images/146194.jpg
inflating: ./data/celeba/images/074619.jpg
inflating: ./data/celeba/images/127764.jpg
inflating: ./data/celeba/images/083723.jpg
inflating: ./data/celeba/images/077593.jpg
inflating: ./data/celeba/images/188522.jpg
inflating: ./data/celeba/images/127731.jpg
inflating: ./data/celeba/images/149699.jpg
inflating: ./data/celeba/images/161541.jpg
inflating: ./data/celeba/images/002372.jpg
inflating: ./data/celeba/images/157837.jpg
inflating: ./data/celeba/images/026348.jpg
inflating: ./data/celeba/images/078468.jpg
inflating: ./data/celeba/images/048199.jpg
inflating: ./data/celeba/images/035377.jpg
inflating: ./data/celeba/images/121144.jpg
inflating: ./data/celeba/images/064444.jpg
inflating: ./data/celeba/images/124300.jpg
inflating: ./data/celeba/images/004525.jpg
inflating: ./data/celeba/images/115231.jpg
inflating: ./data/celeba/images/145685.jpg
inflating: ./data/celeba/images/124386.jpg
inflating: ./data/celeba/images/139288.jpg
inflating: ./data/celeba/images/201010.jpg
inflating: ./data/celeba/images/142359.jpg
inflating: ./data/celeba/images/126730.jpg
inflating: ./data/celeba/images/093985.jpg
inflating: ./data/celeba/images/158311.jpg
inflating: ./data/celeba/images/115986.jpg
inflating: ./data/celeba/images/099842.jpg
inflating: ./data/celeba/images/192797.jpg
inflating: ./data/celeba/images/094895.jpg
inflating: ./data/celeba/images/110681.jpg
inflating: ./data/celeba/images/079740.jpg
inflating: ./data/celeba/images/110839.jpg
inflating: ./data/celeba/images/038997.jpg
inflating: ./data/celeba/images/182057.jpg
inflating: ./data/celeba/images/161697.jpg
inflating: ./data/celeba/images/081477.jpg
inflating: ./data/celeba/images/091184.jpg
inflating: ./data/celeba/images/128251.jpg
inflating: ./data/celeba/images/078093.jpg
inflating: ./data/celeba/images/059999.jpg
inflating: ./data/celeba/images/116998.jpg
inflating: ./data/celeba/images/011685.jpg
inflating: ./data/celeba/images/093414.jpg
inflating: ./data/celeba/images/077572.jpg
inflating: ./data/celeba/images/096583.jpg
inflating: ./data/celeba/images/033365.jpg
inflating: ./data/celeba/images/070473.jpg
inflating: ./data/celeba/images/058959.jpg
inflating: ./data/celeba/images/113424.jpg
inflating: ./data/celeba/images/188582.jpg
inflating: ./data/celeba/images/200213.jpg
inflating: ./data/celeba/images/039428.jpg
inflating: ./data/celeba/images/169016.jpg
inflating: ./data/celeba/images/018747.jpg
inflating: ./data/celeba/images/068404.jpg
inflating: ./data/celeba/images/076626.jpg
inflating: ./data/celeba/images/012644.jpg
inflating: ./data/celeba/images/034739.jpg
inflating: ./data/celeba/images/013467.jpg
inflating: ./data/celeba/images/100794.jpg
inflating: ./data/celeba/images/065635.jpg
inflating: ./data/celeba/images/118894.jpg
inflating: ./data/celeba/images/171359.jpg
inflating: ./data/celeba/images/062171.jpg
inflating: ./data/celeba/images/143282.jpg
inflating: ./data/celeba/images/089532.jpg
inflating: ./data/celeba/images/177743.jpg
inflating: ./data/celeba/images/107214.jpg
inflating: ./data/celeba/images/159425.jpg
inflating: ./data/celeba/images/060268.jpg
inflating: ./data/celeba/images/109615.jpg
inflating: ./data/celeba/images/073035.jpg
inflating: ./data/celeba/images/169134.jpg
inflating: ./data/celeba/images/121141.jpg
inflating: ./data/celeba/images/107734.jpg
inflating: ./data/celeba/images/156518.jpg
inflating: ./data/celeba/images/185362.jpg
inflating: ./data/celeba/images/130428.jpg
inflating: ./data/celeba/images/146822.jpg
inflating: ./data/celeba/images/107973.jpg
inflating: ./data/celeba/images/092836.jpg
inflating: ./data/celeba/images/064116.jpg
inflating: ./data/celeba/images/023570.jpg
inflating: ./data/celeba/images/037860.jpg
inflating: ./data/celeba/images/019182.jpg
inflating: ./data/celeba/images/154318.jpg
inflating: ./data/celeba/images/161429.jpg
inflating: ./data/celeba/images/010466.jpg
inflating: ./data/celeba/images/146728.jpg
inflating: ./data/celeba/images/198014.jpg
inflating: ./data/celeba/images/091182.jpg
inflating: ./data/celeba/images/153622.jpg
inflating: ./data/celeba/images/015416.jpg
inflating: ./data/celeba/images/074899.jpg
inflating: ./data/celeba/images/042287.jpg
inflating: ./data/celeba/images/072758.jpg
inflating: ./data/celeba/images/117469.jpg
inflating: ./data/celeba/images/117833.jpg
inflating: ./data/celeba/images/195211.jpg
inflating: ./data/celeba/images/195806.jpg
inflating: ./data/celeba/images/116403.jpg
inflating: ./data/celeba/images/063410.jpg
inflating: ./data/celeba/images/171594.jpg
inflating: ./data/celeba/images/031579.jpg
inflating: ./data/celeba/images/018074.jpg
inflating: ./data/celeba/images/080519.jpg
inflating: ./data/celeba/images/089225.jpg
inflating: ./data/celeba/images/048415.jpg
inflating: ./data/celeba/images/153648.jpg
inflating: ./data/celeba/images/186188.jpg
inflating: ./data/celeba/images/034419.jpg
inflating: ./data/celeba/images/104908.jpg
inflating: ./data/celeba/images/033905.jpg
inflating: ./data/celeba/images/095308.jpg
inflating: ./data/celeba/images/088741.jpg
inflating: ./data/celeba/images/020505.jpg
inflating: ./data/celeba/images/009870.jpg
inflating: ./data/celeba/images/144694.jpg
inflating: ./data/celeba/images/197564.jpg
inflating: ./data/celeba/images/050377.jpg
inflating: ./data/celeba/images/116676.jpg
inflating: ./data/celeba/images/046026.jpg
inflating: ./data/celeba/images/007202.jpg
inflating: ./data/celeba/images/064969.jpg
inflating: ./data/celeba/images/093247.jpg
inflating: ./data/celeba/images/048464.jpg
inflating: ./data/celeba/images/101792.jpg
inflating: ./data/celeba/images/050945.jpg
inflating: ./data/celeba/images/108823.jpg
inflating: ./data/celeba/images/007885.jpg
inflating: ./data/celeba/images/091635.jpg
inflating: ./data/celeba/images/202024.jpg
inflating: ./data/celeba/images/171842.jpg
inflating: ./data/celeba/images/146193.jpg
inflating: ./data/celeba/images/095265.jpg
inflating: ./data/celeba/images/063946.jpg
inflating: ./data/celeba/images/115251.jpg
inflating: ./data/celeba/images/003960.jpg
inflating: ./data/celeba/images/062790.jpg
inflating: ./data/celeba/images/118320.jpg
inflating: ./data/celeba/images/028941.jpg
inflating: ./data/celeba/images/145916.jpg
inflating: ./data/celeba/images/031418.jpg
inflating: ./data/celeba/images/005572.jpg
inflating: ./data/celeba/images/017480.jpg
inflating: ./data/celeba/images/190402.jpg
inflating: ./data/celeba/images/040796.jpg
inflating: ./data/celeba/images/078090.jpg
inflating: ./data/celeba/images/053945.jpg
inflating: ./data/celeba/images/043437.jpg
inflating: ./data/celeba/images/087137.jpg
inflating: ./data/celeba/images/065982.jpg
inflating: ./data/celeba/images/077138.jpg
inflating: ./data/celeba/images/149973.jpg
inflating: ./data/celeba/images/181207.jpg
inflating: ./data/celeba/images/068664.jpg
inflating: ./data/celeba/images/097874.jpg
inflating: ./data/celeba/images/146612.jpg
inflating: ./data/celeba/images/035390.jpg
inflating: ./data/celeba/images/060873.jpg
inflating: ./data/celeba/images/051968.jpg
inflating: ./data/celeba/images/066249.jpg
inflating: ./data/celeba/images/074871.jpg
inflating: ./data/celeba/images/051447.jpg
inflating: ./data/celeba/images/164269.jpg
inflating: ./data/celeba/images/044781.jpg
inflating: ./data/celeba/images/016099.jpg
inflating: ./data/celeba/images/110141.jpg
inflating: ./data/celeba/images/069819.jpg
inflating: ./data/celeba/images/023278.jpg
inflating: ./data/celeba/images/030262.jpg
inflating: ./data/celeba/images/113236.jpg
inflating: ./data/celeba/images/157710.jpg
inflating: ./data/celeba/images/075211.jpg
inflating: ./data/celeba/images/012407.jpg
inflating: ./data/celeba/images/101590.jpg
inflating: ./data/celeba/images/171379.jpg
inflating: ./data/celeba/images/097969.jpg
inflating: ./data/celeba/images/087401.jpg
inflating: ./data/celeba/images/195582.jpg
inflating: ./data/celeba/images/155403.jpg
inflating: ./data/celeba/images/034927.jpg
inflating: ./data/celeba/images/093361.jpg
inflating: ./data/celeba/images/162875.jpg
inflating: ./data/celeba/images/029297.jpg
inflating: ./data/celeba/images/100135.jpg
inflating: ./data/celeba/images/134851.jpg
inflating: ./data/celeba/images/018995.jpg
inflating: ./data/celeba/images/067242.jpg
inflating: ./data/celeba/images/054446.jpg
inflating: ./data/celeba/images/007091.jpg
inflating: ./data/celeba/images/013990.jpg
inflating: ./data/celeba/images/124945.jpg
inflating: ./data/celeba/images/049583.jpg
inflating: ./data/celeba/images/153406.jpg
inflating: ./data/celeba/images/103788.jpg
inflating: ./data/celeba/images/196987.jpg
inflating: ./data/celeba/images/146977.jpg
inflating: ./data/celeba/images/160096.jpg
inflating: ./data/celeba/images/163957.jpg
inflating: ./data/celeba/images/027555.jpg
inflating: ./data/celeba/images/158059.jpg
inflating: ./data/celeba/images/169058.jpg
inflating: ./data/celeba/images/108597.jpg
inflating: ./data/celeba/images/018450.jpg
inflating: ./data/celeba/images/020140.jpg
inflating: ./data/celeba/images/111746.jpg
inflating: ./data/celeba/images/065238.jpg
inflating: ./data/celeba/images/014347.jpg
inflating: ./data/celeba/images/069417.jpg
inflating: ./data/celeba/images/007053.jpg
inflating: ./data/celeba/images/151809.jpg
inflating: ./data/celeba/images/134122.jpg
inflating: ./data/celeba/images/022905.jpg
inflating: ./data/celeba/images/137602.jpg
inflating: ./data/celeba/images/169791.jpg
inflating: ./data/celeba/images/123903.jpg
inflating: ./data/celeba/images/036977.jpg
inflating: ./data/celeba/images/147232.jpg
inflating: ./data/celeba/images/038945.jpg
inflating: ./data/celeba/images/150379.jpg
inflating: ./data/celeba/images/011479.jpg
inflating: ./data/celeba/images/036769.jpg
inflating: ./data/celeba/images/149396.jpg
inflating: ./data/celeba/images/122370.jpg
inflating: ./data/celeba/images/137922.jpg
inflating: ./data/celeba/images/127631.jpg
inflating: ./data/celeba/images/008972.jpg
inflating: ./data/celeba/images/048154.jpg
inflating: ./data/celeba/images/055436.jpg
inflating: ./data/celeba/images/117764.jpg
inflating: ./data/celeba/images/176867.jpg
inflating: ./data/celeba/images/087574.jpg
inflating: ./data/celeba/images/049106.jpg
inflating: ./data/celeba/images/093727.jpg
inflating: ./data/celeba/images/006235.jpg
inflating: ./data/celeba/images/189249.jpg
inflating: ./data/celeba/images/155013.jpg
inflating: ./data/celeba/images/041775.jpg
inflating: ./data/celeba/images/124734.jpg
inflating: ./data/celeba/images/012252.jpg
inflating: ./data/celeba/images/000776.jpg
inflating: ./data/celeba/images/066992.jpg
inflating: ./data/celeba/images/143407.jpg
inflating: ./data/celeba/images/193595.jpg
inflating: ./data/celeba/images/179675.jpg
inflating: ./data/celeba/images/113401.jpg
inflating: ./data/celeba/images/184325.jpg
inflating: ./data/celeba/images/140770.jpg
inflating: ./data/celeba/images/040177.jpg
inflating: ./data/celeba/images/056521.jpg
inflating: ./data/celeba/images/075116.jpg
inflating: ./data/celeba/images/106145.jpg
inflating: ./data/celeba/images/104172.jpg
inflating: ./data/celeba/images/144307.jpg
inflating: ./data/celeba/images/004622.jpg
inflating: ./data/celeba/images/072137.jpg
inflating: ./data/celeba/images/027742.jpg
inflating: ./data/celeba/images/188764.jpg
inflating: ./data/celeba/list_attr_celeba.txt
###Markdown
**Pretrained Model** Next I had to bring in the training networks. Rather than training StarGAN on CelebA from scratch, a pre-trained network is downloaded through a bash script. Images can then be translated through an evaluation script that edits hair color, age, and gender; the pre-trained networks supply the model checkpoints needed to run that script.
###Code
!bash download.sh pretrained-celeba-256x256
###Output
WARNING: timestamping does nothing in combination with -O. See the manual
for details.
--2021-11-30 03:52:45-- https://www.dropbox.com/s/zdq6roqf63m0v5f/celeba-256x256-5attrs.zip?dl=0
Resolving www.dropbox.com (www.dropbox.com)... 162.125.68.18, 2620:100:601a:18::a27d:712
Connecting to www.dropbox.com (www.dropbox.com)|162.125.68.18|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: /s/raw/zdq6roqf63m0v5f/celeba-256x256-5attrs.zip [following]
--2021-11-30 03:52:45-- https://www.dropbox.com/s/raw/zdq6roqf63m0v5f/celeba-256x256-5attrs.zip
Reusing existing connection to www.dropbox.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com/cd/0/inline/Ba4LYuMq774zU1D49UHZVSA2jvgLXy864lhs20w3vXLCnWn10kE1ieBSceaTNJ2elQKV_CSg7NB7FWaSY1SairQNcpnMbczSEUYG7nmNnveKGhmBOlz3Bk4WrJgEdNjlmtqdvX8JXFMmU6cRRRCBgFqS/file# [following]
--2021-11-30 03:52:46-- https://uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com/cd/0/inline/Ba4LYuMq774zU1D49UHZVSA2jvgLXy864lhs20w3vXLCnWn10kE1ieBSceaTNJ2elQKV_CSg7NB7FWaSY1SairQNcpnMbczSEUYG7nmNnveKGhmBOlz3Bk4WrJgEdNjlmtqdvX8JXFMmU6cRRRCBgFqS/file
Resolving uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com (uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com)... 162.125.68.15, 2620:100:6023:15::a27d:430f
Connecting to uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com (uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com)|162.125.68.15|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: /cd/0/inline2/Ba5Fu4oz9ELeUW2Uxijg9Sz9QJjDCdUj0MTr8PlkuHNaoKWsPj8vjaAdepEqT-oIKKoXaLUcpKkx05EPzHJxGW5R6Be5Ux9ve0T2xjIdPbKoAos2ZI5eriHetTs6deI6yfpOE6wrbcy2GrSKA-yHq4M8O72YFUtzeJmw9XTvHK1J3aPAmgKJG6yIDavxMIEH-L6FZNIwdXP6LywkuF9yYdU7lPjBQ-JscJxwuRaxBx7chaDbK0GpvzA-bHsnTbmf125WPKT-OJICJ4Cl8R6yjsAYypMN5hzLmmKD1ILAslIvxfvqi7_kgzJsNvo1XFgCEaSISrFq-2WdOZhxWq35w9J7u1VFrrRWM21RXtwrxbvchaI0ckpfW3c2hj037-t1YSI/file [following]
--2021-11-30 03:52:46-- https://uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com/cd/0/inline2/Ba5Fu4oz9ELeUW2Uxijg9Sz9QJjDCdUj0MTr8PlkuHNaoKWsPj8vjaAdepEqT-oIKKoXaLUcpKkx05EPzHJxGW5R6Be5Ux9ve0T2xjIdPbKoAos2ZI5eriHetTs6deI6yfpOE6wrbcy2GrSKA-yHq4M8O72YFUtzeJmw9XTvHK1J3aPAmgKJG6yIDavxMIEH-L6FZNIwdXP6LywkuF9yYdU7lPjBQ-JscJxwuRaxBx7chaDbK0GpvzA-bHsnTbmf125WPKT-OJICJ4Cl8R6yjsAYypMN5hzLmmKD1ILAslIvxfvqi7_kgzJsNvo1XFgCEaSISrFq-2WdOZhxWq35w9J7u1VFrrRWM21RXtwrxbvchaI0ckpfW3c2hj037-t1YSI/file
Reusing existing connection to uca376f067801af5d54011d6c5a0.dl.dropboxusercontent.com:443.
HTTP request sent, awaiting response... 200 OK
Length: 198042124 (189M) [application/zip]
Saving to: ‘./stargan_celeba_256/models/celeba-256x256-5attrs.zip’
./stargan_celeba_25 100%[===================>] 188.87M 13.0MB/s in 13s
2021-11-30 03:53:01 (14.1 MB/s) - ‘./stargan_celeba_256/models/celeba-256x256-5attrs.zip’ saved [198042124/198042124]
Archive: ./stargan_celeba_256/models/celeba-256x256-5attrs.zip
inflating: ./stargan_celeba_256/models/200000-G.ckpt
inflating: ./stargan_celeba_256/models/200000-D.ckpt
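###Markdown
As a quick sanity check, the downloaded generator checkpoint can be loaded before running any of the scripts. This is a minimal sketch, assuming the StarGAN repository's `model.py` is importable and that its `Generator` keeps the default hyperparameters used for the 5-attribute CelebA models.
###Code
import torch
from model import Generator  # model.py from the cloned StarGAN repository
# Instantiate the generator with the defaults used for the CelebA
# checkpoints: conv_dim=64, c_dim=5 selected attributes, repeat_num=6.
G = Generator(conv_dim=64, c_dim=5, repeat_num=6)
# The .ckpt files store plain state_dicts, so they load directly.
state = torch.load('./stargan_celeba_256/models/200000-G.ckpt',
                   map_location=torch.device('cpu'))
G.load_state_dict(state)
G.eval();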
###Markdown
**Testing** The network was then run on the celebrity faces and taken through testing. Once the testing step was complete, TensorFlow 1.13.1 was installed.
###Code
!python main.py --mode train --dataset CelebA --image_size 128 --c_dim 5 \
--sample_dir stargan_celeba/samples --log_dir stargan_celeba/logs \
--model_save_dir stargan_celeba/models --result_dir stargan_celeba/results \
--selected_attrs Black_Hair Blond_Hair Brown_Hair Male Young
!pip install TensorFlow==1.13.1
###Output
Collecting TensorFlow==1.13.1
Downloading tensorflow-1.13.1-cp37-cp37m-manylinux1_x86_64.whl (92.6 MB)
|████████████████████████████████| 92.6 MB 1.4 MB/s
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (1.42.0)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (0.4.0)
Collecting tensorboard<1.14.0,>=1.13.0
Downloading tensorboard-1.13.1-py3-none-any.whl (3.2 MB)
|████████████████████████████████| 3.2 MB 39.3 MB/s
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (1.1.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (0.37.0)
Collecting tensorflow-estimator<1.14.0rc0,>=1.13.0
Downloading tensorflow_estimator-1.13.0-py2.py3-none-any.whl (367 kB)
|████████████████████████████████| 367 kB 40.7 MB/s
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (0.12.0)
Collecting keras-applications>=1.0.6
Downloading Keras_Applications-1.0.8-py3-none-any.whl (50 kB)
|████████████████████████████████| 50 kB 7.0 MB/s
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (1.19.5)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (1.15.0)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (0.8.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (1.1.2)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.7/dist-packages (from TensorFlow==1.13.1) (3.17.3)
Requirement already satisfied: h5py in /usr/local/lib/python3.7/dist-packages (from keras-applications>=1.0.6->TensorFlow==1.13.1) (3.1.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<1.14.0,>=1.13.0->TensorFlow==1.13.1) (3.3.6)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<1.14.0,>=1.13.0->TensorFlow==1.13.1) (1.0.1)
Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<1.14.0,>=1.13.0->TensorFlow==1.13.1) (4.8.2)
Requirement already satisfied: typing-extensions>=3.6.4 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<1.14.0,>=1.13.0->TensorFlow==1.13.1) (3.10.0.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<1.14.0,>=1.13.0->TensorFlow==1.13.1) (3.6.0)
Collecting mock>=2.0.0
Downloading mock-4.0.3-py3-none-any.whl (28 kB)
Requirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py->keras-applications>=1.0.6->TensorFlow==1.13.1) (1.5.2)
Installing collected packages: mock, tensorflow-estimator, tensorboard, keras-applications, TensorFlow
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 2.7.0
Uninstalling tensorflow-estimator-2.7.0:
Successfully uninstalled tensorflow-estimator-2.7.0
Attempting uninstall: tensorboard
Found existing installation: tensorboard 2.7.0
Uninstalling tensorboard-2.7.0:
Successfully uninstalled tensorboard-2.7.0
Attempting uninstall: TensorFlow
Found existing installation: tensorflow 2.7.0
Uninstalling tensorflow-2.7.0:
Successfully uninstalled tensorflow-2.7.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
kapre 0.3.6 requires tensorflow>=2.0.0, but you have tensorflow 1.13.1 which is incompatible.
Successfully installed TensorFlow-1.13.1 keras-applications-1.0.8 mock-4.0.3 tensorboard-1.13.1 tensorflow-estimator-1.13.0
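###Markdown
For reference, StarGAN conditions its generator on a target attribute vector over the five selected attributes (Black_Hair, Blond_Hair, Brown_Hair, Male, Young). The sketch below is an illustration rather than part of the repository's scripts: it builds such a vector and applies the generator `G` loaded earlier, assuming `img` is a normalized `1x3x256x256` image tensor.
###Code
import torch

ATTRS = ['Black_Hair', 'Blond_Hair', 'Brown_Hair', 'Male', 'Young']

def target_vector(active):
    # One row, one column per attribute; 1.0 marks the desired attributes.
    c = torch.zeros(1, len(ATTRS))
    for name in active:
        c[0, ATTRS.index(name)] = 1.0
    return c

# Example: request a blond, young version of the input face.
c_trg = target_vector(['Blond_Hair', 'Young'])
with torch.no_grad():
    fake = G(img, c_trg)  # img: 1x3x256x256 tensor scaled to [-1, 1]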
###Markdown
**New Data Set: George Bush Faces** After the TensorFlow install, the George W. Bush faces were added from UMass's Labeled Faces in the Wild (LFW) dataset.
###Code
!wget 'http://vis-www.cs.umass.edu/lfw/lfw-bush.tgz'
!tar xzvf lfw-bush.tgz
###Output
--2021-11-30 03:54:18-- http://vis-www.cs.umass.edu/lfw/lfw-bush.tgz
Resolving vis-www.cs.umass.edu (vis-www.cs.umass.edu)... 128.119.244.95
Connecting to vis-www.cs.umass.edu (vis-www.cs.umass.edu)|128.119.244.95|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7143480 (6.8M) [application/x-gzip]
Saving to: ‘lfw-bush.tgz’
lfw-bush.tgz 100%[===================>] 6.81M 8.45MB/s in 0.8s
2021-11-30 03:54:19 (8.45 MB/s) - ‘lfw-bush.tgz’ saved [7143480/7143480]
lfw/George_W_Bush/
lfw/George_W_Bush/George_W_Bush_0001.jpg
lfw/George_W_Bush/George_W_Bush_0002.jpg
lfw/George_W_Bush/George_W_Bush_0137.jpg
lfw/George_W_Bush/George_W_Bush_0138.jpg
lfw/George_W_Bush/George_W_Bush_0139.jpg
lfw/George_W_Bush/George_W_Bush_0140.jpg
lfw/George_W_Bush/George_W_Bush_0141.jpg
lfw/George_W_Bush/George_W_Bush_0142.jpg
lfw/George_W_Bush/George_W_Bush_0143.jpg
lfw/George_W_Bush/George_W_Bush_0144.jpg
lfw/George_W_Bush/George_W_Bush_0145.jpg
lfw/George_W_Bush/George_W_Bush_0146.jpg
lfw/George_W_Bush/George_W_Bush_0147.jpg
lfw/George_W_Bush/George_W_Bush_0148.jpg
lfw/George_W_Bush/George_W_Bush_0149.jpg
lfw/George_W_Bush/George_W_Bush_0150.jpg
lfw/George_W_Bush/George_W_Bush_0151.jpg
lfw/George_W_Bush/George_W_Bush_0152.jpg
lfw/George_W_Bush/George_W_Bush_0153.jpg
lfw/George_W_Bush/George_W_Bush_0154.jpg
lfw/George_W_Bush/George_W_Bush_0155.jpg
lfw/George_W_Bush/George_W_Bush_0156.jpg
lfw/George_W_Bush/George_W_Bush_0157.jpg
lfw/George_W_Bush/George_W_Bush_0158.jpg
lfw/George_W_Bush/George_W_Bush_0159.jpg
lfw/George_W_Bush/George_W_Bush_0160.jpg
lfw/George_W_Bush/George_W_Bush_0161.jpg
lfw/George_W_Bush/George_W_Bush_0162.jpg
lfw/George_W_Bush/George_W_Bush_0163.jpg
lfw/George_W_Bush/George_W_Bush_0164.jpg
lfw/George_W_Bush/George_W_Bush_0165.jpg
lfw/George_W_Bush/George_W_Bush_0166.jpg
lfw/George_W_Bush/George_W_Bush_0167.jpg
lfw/George_W_Bush/George_W_Bush_0168.jpg
lfw/George_W_Bush/George_W_Bush_0169.jpg
lfw/George_W_Bush/George_W_Bush_0170.jpg
lfw/George_W_Bush/George_W_Bush_0171.jpg
lfw/George_W_Bush/George_W_Bush_0172.jpg
lfw/George_W_Bush/George_W_Bush_0173.jpg
lfw/George_W_Bush/George_W_Bush_0174.jpg
lfw/George_W_Bush/George_W_Bush_0175.jpg
lfw/George_W_Bush/George_W_Bush_0176.jpg
lfw/George_W_Bush/George_W_Bush_0177.jpg
lfw/George_W_Bush/George_W_Bush_0178.jpg
lfw/George_W_Bush/George_W_Bush_0179.jpg
lfw/George_W_Bush/George_W_Bush_0180.jpg
lfw/George_W_Bush/George_W_Bush_0181.jpg
lfw/George_W_Bush/George_W_Bush_0182.jpg
lfw/George_W_Bush/George_W_Bush_0183.jpg
lfw/George_W_Bush/George_W_Bush_0184.jpg
lfw/George_W_Bush/George_W_Bush_0185.jpg
lfw/George_W_Bush/George_W_Bush_0186.jpg
lfw/George_W_Bush/George_W_Bush_0187.jpg
lfw/George_W_Bush/George_W_Bush_0188.jpg
lfw/George_W_Bush/George_W_Bush_0189.jpg
lfw/George_W_Bush/George_W_Bush_0190.jpg
lfw/George_W_Bush/George_W_Bush_0191.jpg
lfw/George_W_Bush/George_W_Bush_0192.jpg
lfw/George_W_Bush/George_W_Bush_0193.jpg
lfw/George_W_Bush/George_W_Bush_0194.jpg
lfw/George_W_Bush/George_W_Bush_0195.jpg
lfw/George_W_Bush/George_W_Bush_0196.jpg
lfw/George_W_Bush/George_W_Bush_0197.jpg
lfw/George_W_Bush/George_W_Bush_0198.jpg
lfw/George_W_Bush/George_W_Bush_0199.jpg
lfw/George_W_Bush/George_W_Bush_0200.jpg
lfw/George_W_Bush/George_W_Bush_0201.jpg
lfw/George_W_Bush/George_W_Bush_0202.jpg
lfw/George_W_Bush/George_W_Bush_0203.jpg
lfw/George_W_Bush/George_W_Bush_0204.jpg
lfw/George_W_Bush/George_W_Bush_0205.jpg
lfw/George_W_Bush/George_W_Bush_0206.jpg
lfw/George_W_Bush/George_W_Bush_0207.jpg
lfw/George_W_Bush/George_W_Bush_0208.jpg
lfw/George_W_Bush/George_W_Bush_0209.jpg
lfw/George_W_Bush/George_W_Bush_0210.jpg
lfw/George_W_Bush/George_W_Bush_0211.jpg
lfw/George_W_Bush/George_W_Bush_0212.jpg
lfw/George_W_Bush/George_W_Bush_0213.jpg
lfw/George_W_Bush/George_W_Bush_0214.jpg
lfw/George_W_Bush/George_W_Bush_0215.jpg
lfw/George_W_Bush/George_W_Bush_0216.jpg
lfw/George_W_Bush/George_W_Bush_0217.jpg
lfw/George_W_Bush/George_W_Bush_0218.jpg
lfw/George_W_Bush/George_W_Bush_0219.jpg
lfw/George_W_Bush/George_W_Bush_0220.jpg
lfw/George_W_Bush/George_W_Bush_0221.jpg
lfw/George_W_Bush/George_W_Bush_0222.jpg
lfw/George_W_Bush/George_W_Bush_0223.jpg
lfw/George_W_Bush/George_W_Bush_0224.jpg
lfw/George_W_Bush/George_W_Bush_0225.jpg
lfw/George_W_Bush/George_W_Bush_0226.jpg
lfw/George_W_Bush/George_W_Bush_0227.jpg
lfw/George_W_Bush/George_W_Bush_0228.jpg
lfw/George_W_Bush/George_W_Bush_0229.jpg
lfw/George_W_Bush/George_W_Bush_0230.jpg
lfw/George_W_Bush/George_W_Bush_0231.jpg
lfw/George_W_Bush/George_W_Bush_0232.jpg
lfw/George_W_Bush/George_W_Bush_0233.jpg
lfw/George_W_Bush/George_W_Bush_0234.jpg
lfw/George_W_Bush/George_W_Bush_0235.jpg
lfw/George_W_Bush/George_W_Bush_0236.jpg
lfw/George_W_Bush/George_W_Bush_0237.jpg
lfw/George_W_Bush/George_W_Bush_0238.jpg
lfw/George_W_Bush/George_W_Bush_0239.jpg
lfw/George_W_Bush/George_W_Bush_0240.jpg
lfw/George_W_Bush/George_W_Bush_0241.jpg
lfw/George_W_Bush/George_W_Bush_0242.jpg
lfw/George_W_Bush/George_W_Bush_0243.jpg
lfw/George_W_Bush/George_W_Bush_0244.jpg
lfw/George_W_Bush/George_W_Bush_0245.jpg
lfw/George_W_Bush/George_W_Bush_0246.jpg
lfw/George_W_Bush/George_W_Bush_0247.jpg
lfw/George_W_Bush/George_W_Bush_0248.jpg
lfw/George_W_Bush/George_W_Bush_0249.jpg
lfw/George_W_Bush/George_W_Bush_0250.jpg
lfw/George_W_Bush/George_W_Bush_0251.jpg
lfw/George_W_Bush/George_W_Bush_0252.jpg
lfw/George_W_Bush/George_W_Bush_0253.jpg
lfw/George_W_Bush/George_W_Bush_0254.jpg
lfw/George_W_Bush/George_W_Bush_0255.jpg
lfw/George_W_Bush/George_W_Bush_0256.jpg
lfw/George_W_Bush/George_W_Bush_0257.jpg
lfw/George_W_Bush/George_W_Bush_0258.jpg
lfw/George_W_Bush/George_W_Bush_0259.jpg
lfw/George_W_Bush/George_W_Bush_0260.jpg
lfw/George_W_Bush/George_W_Bush_0261.jpg
lfw/George_W_Bush/George_W_Bush_0262.jpg
lfw/George_W_Bush/George_W_Bush_0263.jpg
lfw/George_W_Bush/George_W_Bush_0264.jpg
lfw/George_W_Bush/George_W_Bush_0265.jpg
lfw/George_W_Bush/George_W_Bush_0266.jpg
lfw/George_W_Bush/George_W_Bush_0267.jpg
lfw/George_W_Bush/George_W_Bush_0268.jpg
lfw/George_W_Bush/George_W_Bush_0269.jpg
lfw/George_W_Bush/George_W_Bush_0270.jpg
lfw/George_W_Bush/George_W_Bush_0271.jpg
lfw/George_W_Bush/George_W_Bush_0272.jpg
lfw/George_W_Bush/George_W_Bush_0273.jpg
lfw/George_W_Bush/George_W_Bush_0274.jpg
lfw/George_W_Bush/George_W_Bush_0275.jpg
lfw/George_W_Bush/George_W_Bush_0276.jpg
lfw/George_W_Bush/George_W_Bush_0277.jpg
lfw/George_W_Bush/George_W_Bush_0278.jpg
lfw/George_W_Bush/George_W_Bush_0279.jpg
lfw/George_W_Bush/George_W_Bush_0280.jpg
lfw/George_W_Bush/George_W_Bush_0281.jpg
lfw/George_W_Bush/George_W_Bush_0282.jpg
lfw/George_W_Bush/George_W_Bush_0283.jpg
lfw/George_W_Bush/George_W_Bush_0284.jpg
lfw/George_W_Bush/George_W_Bush_0285.jpg
lfw/George_W_Bush/George_W_Bush_0286.jpg
lfw/George_W_Bush/George_W_Bush_0287.jpg
lfw/George_W_Bush/George_W_Bush_0288.jpg
lfw/George_W_Bush/George_W_Bush_0289.jpg
lfw/George_W_Bush/George_W_Bush_0290.jpg
lfw/George_W_Bush/George_W_Bush_0291.jpg
lfw/George_W_Bush/George_W_Bush_0292.jpg
lfw/George_W_Bush/George_W_Bush_0293.jpg
lfw/George_W_Bush/George_W_Bush_0294.jpg
lfw/George_W_Bush/George_W_Bush_0295.jpg
lfw/George_W_Bush/George_W_Bush_0296.jpg
lfw/George_W_Bush/George_W_Bush_0297.jpg
lfw/George_W_Bush/George_W_Bush_0298.jpg
lfw/George_W_Bush/George_W_Bush_0299.jpg
lfw/George_W_Bush/George_W_Bush_0300.jpg
lfw/George_W_Bush/George_W_Bush_0301.jpg
lfw/George_W_Bush/George_W_Bush_0302.jpg
lfw/George_W_Bush/George_W_Bush_0303.jpg
lfw/George_W_Bush/George_W_Bush_0304.jpg
lfw/George_W_Bush/George_W_Bush_0305.jpg
lfw/George_W_Bush/George_W_Bush_0306.jpg
lfw/George_W_Bush/George_W_Bush_0307.jpg
lfw/George_W_Bush/George_W_Bush_0308.jpg
lfw/George_W_Bush/George_W_Bush_0309.jpg
lfw/George_W_Bush/George_W_Bush_0310.jpg
lfw/George_W_Bush/George_W_Bush_0311.jpg
lfw/George_W_Bush/George_W_Bush_0312.jpg
lfw/George_W_Bush/George_W_Bush_0313.jpg
lfw/George_W_Bush/George_W_Bush_0314.jpg
lfw/George_W_Bush/George_W_Bush_0315.jpg
lfw/George_W_Bush/George_W_Bush_0316.jpg
lfw/George_W_Bush/George_W_Bush_0317.jpg
lfw/George_W_Bush/George_W_Bush_0318.jpg
lfw/George_W_Bush/George_W_Bush_0319.jpg
lfw/George_W_Bush/George_W_Bush_0320.jpg
lfw/George_W_Bush/George_W_Bush_0321.jpg
lfw/George_W_Bush/George_W_Bush_0322.jpg
lfw/George_W_Bush/George_W_Bush_0323.jpg
lfw/George_W_Bush/George_W_Bush_0324.jpg
lfw/George_W_Bush/George_W_Bush_0325.jpg
lfw/George_W_Bush/George_W_Bush_0326.jpg
lfw/George_W_Bush/George_W_Bush_0327.jpg
lfw/George_W_Bush/George_W_Bush_0328.jpg
lfw/George_W_Bush/George_W_Bush_0329.jpg
lfw/George_W_Bush/George_W_Bush_0330.jpg
lfw/George_W_Bush/George_W_Bush_0331.jpg
lfw/George_W_Bush/George_W_Bush_0332.jpg
lfw/George_W_Bush/George_W_Bush_0333.jpg
lfw/George_W_Bush/George_W_Bush_0334.jpg
lfw/George_W_Bush/George_W_Bush_0335.jpg
lfw/George_W_Bush/George_W_Bush_0336.jpg
lfw/George_W_Bush/George_W_Bush_0337.jpg
lfw/George_W_Bush/George_W_Bush_0338.jpg
lfw/George_W_Bush/George_W_Bush_0339.jpg
lfw/George_W_Bush/George_W_Bush_0340.jpg
lfw/George_W_Bush/George_W_Bush_0341.jpg
lfw/George_W_Bush/George_W_Bush_0342.jpg
lfw/George_W_Bush/George_W_Bush_0343.jpg
lfw/George_W_Bush/George_W_Bush_0344.jpg
lfw/George_W_Bush/George_W_Bush_0345.jpg
lfw/George_W_Bush/George_W_Bush_0346.jpg
lfw/George_W_Bush/George_W_Bush_0347.jpg
lfw/George_W_Bush/George_W_Bush_0348.jpg
lfw/George_W_Bush/George_W_Bush_0349.jpg
lfw/George_W_Bush/George_W_Bush_0350.jpg
lfw/George_W_Bush/George_W_Bush_0351.jpg
lfw/George_W_Bush/George_W_Bush_0352.jpg
lfw/George_W_Bush/George_W_Bush_0353.jpg
lfw/George_W_Bush/George_W_Bush_0354.jpg
lfw/George_W_Bush/George_W_Bush_0355.jpg
lfw/George_W_Bush/George_W_Bush_0356.jpg
lfw/George_W_Bush/George_W_Bush_0357.jpg
lfw/George_W_Bush/George_W_Bush_0358.jpg
lfw/George_W_Bush/George_W_Bush_0359.jpg
lfw/George_W_Bush/George_W_Bush_0360.jpg
lfw/George_W_Bush/George_W_Bush_0361.jpg
lfw/George_W_Bush/George_W_Bush_0362.jpg
lfw/George_W_Bush/George_W_Bush_0363.jpg
lfw/George_W_Bush/George_W_Bush_0364.jpg
lfw/George_W_Bush/George_W_Bush_0365.jpg
lfw/George_W_Bush/George_W_Bush_0366.jpg
lfw/George_W_Bush/George_W_Bush_0367.jpg
lfw/George_W_Bush/George_W_Bush_0368.jpg
lfw/George_W_Bush/George_W_Bush_0369.jpg
lfw/George_W_Bush/George_W_Bush_0370.jpg
lfw/George_W_Bush/George_W_Bush_0371.jpg
lfw/George_W_Bush/George_W_Bush_0372.jpg
lfw/George_W_Bush/George_W_Bush_0373.jpg
lfw/George_W_Bush/George_W_Bush_0374.jpg
lfw/George_W_Bush/George_W_Bush_0375.jpg
lfw/George_W_Bush/George_W_Bush_0376.jpg
lfw/George_W_Bush/George_W_Bush_0377.jpg
lfw/George_W_Bush/George_W_Bush_0378.jpg
lfw/George_W_Bush/George_W_Bush_0379.jpg
lfw/George_W_Bush/George_W_Bush_0380.jpg
lfw/George_W_Bush/George_W_Bush_0381.jpg
lfw/George_W_Bush/George_W_Bush_0382.jpg
lfw/George_W_Bush/George_W_Bush_0383.jpg
lfw/George_W_Bush/George_W_Bush_0384.jpg
lfw/George_W_Bush/George_W_Bush_0385.jpg
lfw/George_W_Bush/George_W_Bush_0386.jpg
lfw/George_W_Bush/George_W_Bush_0387.jpg
lfw/George_W_Bush/George_W_Bush_0388.jpg
lfw/George_W_Bush/George_W_Bush_0389.jpg
lfw/George_W_Bush/George_W_Bush_0390.jpg
lfw/George_W_Bush/George_W_Bush_0391.jpg
lfw/George_W_Bush/George_W_Bush_0392.jpg
lfw/George_W_Bush/George_W_Bush_0393.jpg
lfw/George_W_Bush/George_W_Bush_0394.jpg
lfw/George_W_Bush/George_W_Bush_0395.jpg
lfw/George_W_Bush/George_W_Bush_0396.jpg
lfw/George_W_Bush/George_W_Bush_0397.jpg
lfw/George_W_Bush/George_W_Bush_0398.jpg
lfw/George_W_Bush/George_W_Bush_0399.jpg
lfw/George_W_Bush/George_W_Bush_0400.jpg
lfw/George_W_Bush/George_W_Bush_0401.jpg
lfw/George_W_Bush/George_W_Bush_0402.jpg
lfw/George_W_Bush/George_W_Bush_0403.jpg
lfw/George_W_Bush/George_W_Bush_0404.jpg
lfw/George_W_Bush/George_W_Bush_0405.jpg
lfw/George_W_Bush/George_W_Bush_0406.jpg
lfw/George_W_Bush/George_W_Bush_0407.jpg
lfw/George_W_Bush/George_W_Bush_0408.jpg
lfw/George_W_Bush/George_W_Bush_0409.jpg
lfw/George_W_Bush/George_W_Bush_0410.jpg
lfw/George_W_Bush/George_W_Bush_0411.jpg
lfw/George_W_Bush/George_W_Bush_0412.jpg
lfw/George_W_Bush/George_W_Bush_0413.jpg
lfw/George_W_Bush/George_W_Bush_0414.jpg
lfw/George_W_Bush/George_W_Bush_0415.jpg
lfw/George_W_Bush/George_W_Bush_0416.jpg
lfw/George_W_Bush/George_W_Bush_0417.jpg
lfw/George_W_Bush/George_W_Bush_0418.jpg
lfw/George_W_Bush/George_W_Bush_0419.jpg
lfw/George_W_Bush/George_W_Bush_0420.jpg
lfw/George_W_Bush/George_W_Bush_0421.jpg
lfw/George_W_Bush/George_W_Bush_0422.jpg
lfw/George_W_Bush/George_W_Bush_0423.jpg
lfw/George_W_Bush/George_W_Bush_0424.jpg
lfw/George_W_Bush/George_W_Bush_0425.jpg
lfw/George_W_Bush/George_W_Bush_0426.jpg
lfw/George_W_Bush/George_W_Bush_0427.jpg
lfw/George_W_Bush/George_W_Bush_0428.jpg
lfw/George_W_Bush/George_W_Bush_0429.jpg
lfw/George_W_Bush/George_W_Bush_0430.jpg
lfw/George_W_Bush/George_W_Bush_0431.jpg
lfw/George_W_Bush/George_W_Bush_0432.jpg
lfw/George_W_Bush/George_W_Bush_0433.jpg
lfw/George_W_Bush/George_W_Bush_0434.jpg
lfw/George_W_Bush/George_W_Bush_0435.jpg
lfw/George_W_Bush/George_W_Bush_0436.jpg
lfw/George_W_Bush/George_W_Bush_0437.jpg
lfw/George_W_Bush/George_W_Bush_0438.jpg
lfw/George_W_Bush/George_W_Bush_0439.jpg
lfw/George_W_Bush/George_W_Bush_0440.jpg
lfw/George_W_Bush/George_W_Bush_0441.jpg
lfw/George_W_Bush/George_W_Bush_0442.jpg
lfw/George_W_Bush/George_W_Bush_0443.jpg
lfw/George_W_Bush/George_W_Bush_0444.jpg
lfw/George_W_Bush/George_W_Bush_0445.jpg
lfw/George_W_Bush/George_W_Bush_0446.jpg
lfw/George_W_Bush/George_W_Bush_0447.jpg
lfw/George_W_Bush/George_W_Bush_0448.jpg
lfw/George_W_Bush/George_W_Bush_0449.jpg
lfw/George_W_Bush/George_W_Bush_0450.jpg
lfw/George_W_Bush/George_W_Bush_0451.jpg
lfw/George_W_Bush/George_W_Bush_0452.jpg
lfw/George_W_Bush/George_W_Bush_0453.jpg
lfw/George_W_Bush/George_W_Bush_0454.jpg
lfw/George_W_Bush/George_W_Bush_0455.jpg
lfw/George_W_Bush/George_W_Bush_0456.jpg
lfw/George_W_Bush/George_W_Bush_0457.jpg
lfw/George_W_Bush/George_W_Bush_0458.jpg
lfw/George_W_Bush/George_W_Bush_0459.jpg
lfw/George_W_Bush/George_W_Bush_0460.jpg
lfw/George_W_Bush/George_W_Bush_0461.jpg
lfw/George_W_Bush/George_W_Bush_0462.jpg
lfw/George_W_Bush/George_W_Bush_0463.jpg
lfw/George_W_Bush/George_W_Bush_0464.jpg
lfw/George_W_Bush/George_W_Bush_0465.jpg
lfw/George_W_Bush/George_W_Bush_0466.jpg
lfw/George_W_Bush/George_W_Bush_0467.jpg
lfw/George_W_Bush/George_W_Bush_0468.jpg
lfw/George_W_Bush/George_W_Bush_0469.jpg
lfw/George_W_Bush/George_W_Bush_0470.jpg
lfw/George_W_Bush/George_W_Bush_0471.jpg
lfw/George_W_Bush/George_W_Bush_0472.jpg
lfw/George_W_Bush/George_W_Bush_0473.jpg
lfw/George_W_Bush/George_W_Bush_0474.jpg
lfw/George_W_Bush/George_W_Bush_0475.jpg
lfw/George_W_Bush/George_W_Bush_0476.jpg
lfw/George_W_Bush/George_W_Bush_0477.jpg
lfw/George_W_Bush/George_W_Bush_0478.jpg
lfw/George_W_Bush/George_W_Bush_0479.jpg
lfw/George_W_Bush/George_W_Bush_0480.jpg
lfw/George_W_Bush/George_W_Bush_0481.jpg
lfw/George_W_Bush/George_W_Bush_0482.jpg
lfw/George_W_Bush/George_W_Bush_0483.jpg
lfw/George_W_Bush/George_W_Bush_0484.jpg
lfw/George_W_Bush/George_W_Bush_0485.jpg
lfw/George_W_Bush/George_W_Bush_0486.jpg
lfw/George_W_Bush/George_W_Bush_0487.jpg
lfw/George_W_Bush/George_W_Bush_0488.jpg
lfw/George_W_Bush/George_W_Bush_0489.jpg
lfw/George_W_Bush/George_W_Bush_0490.jpg
lfw/George_W_Bush/George_W_Bush_0491.jpg
lfw/George_W_Bush/George_W_Bush_0492.jpg
lfw/George_W_Bush/George_W_Bush_0493.jpg
lfw/George_W_Bush/George_W_Bush_0494.jpg
lfw/George_W_Bush/George_W_Bush_0495.jpg
lfw/George_W_Bush/George_W_Bush_0496.jpg
lfw/George_W_Bush/George_W_Bush_0497.jpg
lfw/George_W_Bush/George_W_Bush_0498.jpg
lfw/George_W_Bush/George_W_Bush_0499.jpg
lfw/George_W_Bush/George_W_Bush_0500.jpg
lfw/George_W_Bush/George_W_Bush_0501.jpg
lfw/George_W_Bush/George_W_Bush_0502.jpg
lfw/George_W_Bush/George_W_Bush_0503.jpg
lfw/George_W_Bush/George_W_Bush_0504.jpg
lfw/George_W_Bush/George_W_Bush_0505.jpg
lfw/George_W_Bush/George_W_Bush_0506.jpg
lfw/George_W_Bush/George_W_Bush_0507.jpg
lfw/George_W_Bush/George_W_Bush_0508.jpg
lfw/George_W_Bush/George_W_Bush_0509.jpg
lfw/George_W_Bush/George_W_Bush_0510.jpg
lfw/George_W_Bush/George_W_Bush_0511.jpg
lfw/George_W_Bush/George_W_Bush_0512.jpg
lfw/George_W_Bush/George_W_Bush_0513.jpg
lfw/George_W_Bush/George_W_Bush_0514.jpg
lfw/George_W_Bush/George_W_Bush_0515.jpg
lfw/George_W_Bush/George_W_Bush_0516.jpg
lfw/George_W_Bush/George_W_Bush_0517.jpg
lfw/George_W_Bush/George_W_Bush_0518.jpg
lfw/George_W_Bush/George_W_Bush_0519.jpg
lfw/George_W_Bush/George_W_Bush_0520.jpg
lfw/George_W_Bush/George_W_Bush_0521.jpg
lfw/George_W_Bush/George_W_Bush_0522.jpg
lfw/George_W_Bush/George_W_Bush_0523.jpg
lfw/George_W_Bush/George_W_Bush_0524.jpg
lfw/George_W_Bush/George_W_Bush_0525.jpg
lfw/George_W_Bush/George_W_Bush_0526.jpg
lfw/George_W_Bush/George_W_Bush_0527.jpg
lfw/George_W_Bush/George_W_Bush_0528.jpg
lfw/George_W_Bush/George_W_Bush_0529.jpg
lfw/George_W_Bush/George_W_Bush_0530.jpg
###Markdown
Generating Attributes Once the extraction finished, a placeholder attribute file was generated for the new images; the trained model is then tested on this new data.
###Code
import glob

# Collect every extracted LFW image and write a CelebA-style attribute file:
# line 1 is the image count, line 2 the 40 attribute names, and each remaining
# line is an image path followed by 40 placeholder labels of -1.
jpg_list = glob.glob('lfw/*/*.jpg')
with open("lfw.txt", "w") as f:
    f.write(f"{len(jpg_list)}\n")
    f.write("5_o_Clock_Shadow Arched_Eyebrows Attractive Bags_Under_Eyes Bald Bangs Big_Lips Big_Nose Black_Hair Blond_Hair Blurry Brown_Hair Bushy_Eyebrows Chubby Double_Chin Eyeglasses Goatee Gray_Hair Heavy_Makeup High_Cheekbones Male Mouth_Slightly_Open Mustache Narrow_Eyes No_Beard Oval_Face Pale_Skin Pointy_Nose Receding_Hairline Rosy_Cheeks Sideburns Smiling Straight_Hair Wavy_Hair Wearing_Earrings Wearing_Hat Wearing_Lipstick Wearing_Necklace Wearing_Necktie Young \n")
    for jpg in jpg_list:
        f.write(jpg.replace("lfw/", "") + " -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1\n")
###Output
_____no_output_____
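###Markdown
As a quick sanity check (an illustrative cell, not part of the original pipeline), the generated `lfw.txt` can be inspected: the first line holds the image count, the second the 40 attribute names, and each following line pairs an image path with 40 placeholder -1 labels.
###Code
with open("lfw.txt") as f:
    print(f.readline().strip())        # image count (530 Bush images were extracted above)
    print(len(f.readline().split()))   # number of attribute names
    sample = f.readline().split()
    print(sample[0], sample[1:4])      # image path followed by the first few -1 labels
###Output
_____no_output_____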
###Markdown
Test New Model with New Data The pretrained StarGAN model is now run in test mode against the new attribute file, which evaluates it on the George Bush images.
###Code
!python main.py --mode test --dataset CelebA --image_size 256 --c_dim 5 \
--celeba_image_dir ./lfw/ --attr_path lfw.txt \
--selected_attrs Black_Hair Blond_Hair Brown_Hair Male Young \
--model_save_dir='stargan_celeba_256/models' \
--result_dir='stargan_celeba_256/results_ext'
###Output
Namespace(attr_path='lfw.txt', batch_size=16, beta1=0.5, beta2=0.999, c2_dim=8, c_dim=5, celeba_crop_size=178, celeba_image_dir='./lfw/', d_conv_dim=64, d_lr=0.0001, d_repeat_num=6, dataset='CelebA', g_conv_dim=64, g_lr=0.0001, g_repeat_num=6, image_size=256, lambda_cls=1, lambda_gp=10, lambda_rec=10, log_dir='stargan/logs', log_step=10, lr_update_step=1000, mode='test', model_save_dir='stargan_celeba_256/models', model_save_step=10000, n_critic=5, num_iters=200000, num_iters_decay=100000, num_workers=1, rafd_crop_size=256, rafd_image_dir='data/RaFD/train', result_dir='stargan_celeba_256/results_ext', resume_iters=None, sample_dir='stargan/samples', sample_step=1000, selected_attrs=['Black_Hair', 'Blond_Hair', 'Brown_Hair', 'Male', 'Young'], test_iters=200000, use_tensorboard=True)
Finished preprocessing the CelebA dataset...
Generator(
(main): Sequential(
(0): Conv2d(8, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False)
(1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(4): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(7): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace=True)
(9): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(10): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(11): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(12): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(13): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(14): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(15): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(16): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(17): ReLU(inplace=True)
(18): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(19): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(20): ReLU(inplace=True)
(21): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False)
(22): Tanh()
)
)
G
The number of parameters: 8430528
Discriminator(
(main): Sequential(
(0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): LeakyReLU(negative_slope=0.01)
(2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): LeakyReLU(negative_slope=0.01)
(4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): LeakyReLU(negative_slope=0.01)
(6): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): LeakyReLU(negative_slope=0.01)
(8): Conv2d(512, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): LeakyReLU(negative_slope=0.01)
(10): Conv2d(1024, 2048, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(11): LeakyReLU(negative_slope=0.01)
)
(conv1): Conv2d(2048, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(conv2): Conv2d(2048, 5, kernel_size=(4, 4), stride=(1, 1), bias=False)
)
D
The number of parameters: 44884928
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Loading the trained models from step 200000...
Saved real and fake images into stargan_celeba_256/results_ext/1-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/2-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/3-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/4-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/5-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/6-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/7-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/8-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/9-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/10-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/11-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/12-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/13-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/14-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/15-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/16-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/17-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/18-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/19-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/20-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/21-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/22-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/23-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/24-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/25-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/26-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/27-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/28-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/29-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/30-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/31-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/32-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/33-images.jpg...
Saved real and fake images into stargan_celeba_256/results_ext/34-images.jpg...
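###Markdown
To spot-check the generated grids, the first saved result file can be loaded and displayed inline (a minimal illustrative cell, not part of the original run; the path comes from the log above, and skimage/matplotlib availability is assumed):
###Code
from skimage import io
import matplotlib.pyplot as plt

grid = io.imread('stargan_celeba_256/results_ext/1-images.jpg')  # first grid saved above
plt.figure(figsize=(12, 3))
plt.imshow(grid)
plt.axis('off')
plt.show()
###Output
_____no_output_____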
###Markdown
**Extra Picture** In this section we add another picture to be processed and displayed. The image is read from a URL.
###Code
urls = ["https://pbs.twimg.com/profile_images/1415397678096191498/O-qRRbdI_400x400.jpg"]
# Read and display the image
# loop over the image URLs, you could store several image urls in the list
for url in urls:
image = io.imread(url)
image_2 = cv.cvtColor(image, cv.COLOR_BGR2RGB)
final_frame = cv.hconcat((image, image_2))
cv2_imshow(final_frame)
print('\n')
###Output
_____no_output_____
###Markdown
Image Contours and Histograms
###Code
print(image.dtype)     # pixel data type (uint8)
print(image.shape[0])  # height in pixels
print(image.shape[1])  # width in pixels
print(image.shape[2])  # number of color channels
###Output
uint8
400
400
3
###Markdown
Display the histogram of all the pixels in the color image
###Code
plt.hist(image.ravel(), bins=256, range=[0, 256])  # flatten all pixels across channels into one histogram
plt.show()
###Output
_____no_output_____
###Markdown
Display the histogram of the R, G, and B channels. We can observe that the green channel has many pixels at 255, which corresponds to the white patch in the image.
###Code
# skimage loads images in RGB order, so plot channel 0 as red, 1 as green, 2 as blue.
color = ('r', 'g', 'b')
for i, col in enumerate(color):
    histr = cv.calcHist([image], [i], None, [256], [0, 256])  # 256-bin histogram of channel i
    plt.plot(histr, color=col)
    plt.xlim([0, 256])
plt.show()
###Output
_____no_output_____
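###Markdown
The section heading above also mentions contours, which the original cells never compute; the sketch below is an illustrative addition (assuming OpenCV 4's two-value `findContours` return; the threshold of 127 is an arbitrary choice) that outlines the shapes found in the image.
###Code
# Grayscale copy (the skimage image is RGB), binary threshold, then contour extraction.
gray = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
_, thresh = cv.threshold(gray, 127, 255, cv.THRESH_BINARY)
contours, _ = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
outlined = image.copy()
cv.drawContours(outlined, contours, -1, (0, 255, 0), 2)  # green outlines, 2 px thick
cv2_imshow(outlined)
###Output
_____no_output_____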
###Markdown
Grayscale the image
###Code
gray_image = cv.cvtColor(image, cv.COLOR_RGB2GRAY)  # collapse the RGB image to one intensity channel
cv2_imshow(gray_image)
###Output
_____no_output_____ |
FeatureCollection/clipping.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('Installing geemap ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
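###Markdown
For example, the `Map.add_basemap()` call mentioned above is a one-liner (a small illustrative cell; 'ROADMAP' is one of the basemap keys used elsewhere in this notebook):
###Code
Map.add_basemap('ROADMAP')  # overlay the Google road map on the same interactive Map
###Output
_____no_output_____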
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
roi = ee.Geometry.Polygon(
[[[-73.99891354682285, 40.74560250077625],
[-73.99891354682285, 40.74053023068626],
[-73.98749806525547, 40.74053023068626],
[-73.98749806525547, 40.74560250077625]]])
fc = ee.FeatureCollection('TIGER/2016/Roads').filterBounds(roi)
clipped = fc.map(lambda f: f.intersection(roi))
Map.centerObject(ee.FeatureCollection(roi), 17)
Map.addLayer(ee.Image().paint(roi, 0, 2), {'palette': 'yellow'}, 'ROI')
# Map.setCenter(-73.9596, 40.7688, 12)
Map.addLayer(ee.Image().paint(clipped, 0, 3), {'palette': 'red'}, 'clipped')
Map.addLayer(fc, {}, 'Census roads', False)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except:
    import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
roi = ee.Geometry.Polygon(
[[[-73.99891354682285, 40.74560250077625],
[-73.99891354682285, 40.74053023068626],
[-73.98749806525547, 40.74053023068626],
[-73.98749806525547, 40.74560250077625]]])
fc = ee.FeatureCollection('TIGER/2016/Roads').filterBounds(roi)
clipped = fc.map(lambda f: f.intersection(roi))
Map.centerObject(ee.FeatureCollection(roi), 17)
Map.addLayer(ee.Image().paint(roi, 0, 2), {'palette': 'yellow'}, 'ROI')
# Map.setCenter(-73.9596, 40.7688, 12)
Map.addLayer(ee.Image().paint(clipped, 0, 3), {'palette': 'red'}, 'clipped')
Map.addLayer(fc, {}, 'Census roads', False)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
    import geehydro
except ImportError:
    print('geehydro package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
roi = ee.Geometry.Polygon(
[[[-73.99891354682285, 40.74560250077625],
[-73.99891354682285, 40.74053023068626],
[-73.98749806525547, 40.74053023068626],
[-73.98749806525547, 40.74560250077625]]])
fc = ee.FeatureCollection('TIGER/2016/Roads').filterBounds(roi)
clipped = fc.map(lambda f: f.intersection(roi))
Map.centerObject(ee.FeatureCollection(roi), 17)
Map.addLayer(ee.Image().paint(roi, 0, 2), {'palette': 'yellow'}, 'ROI')
# Map.setCenter(-73.9596, 40.7688, 12)
Map.addLayer(ee.Image().paint(clipped, 0, 3), {'palette': 'red'}, 'clipped')
Map.addLayer(fc, {}, 'Census roads', False)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
###Output
_____no_output_____
###Markdown
Create Map Next it's time to create a map. Here we define the initial view state and assemble the Earth Engine layers from `ee.Image` objects.
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
roi = ee.Geometry.Polygon(
[[[-73.99891354682285, 40.74560250077625],
[-73.99891354682285, 40.74053023068626],
[-73.98749806525547, 40.74053023068626],
[-73.98749806525547, 40.74560250077625]]])
fc = ee.FeatureCollection('TIGER/2016/Roads').filterBounds(roi)
clipped = fc.map(lambda f: f.intersection(roi))
ee_layers.append(EarthEngineLayer(ee_object=ee.Image().paint(roi,0,2), vis_params={'palette':'yellow'}))
# Map.setCenter(-73.9596, 40.7688, 12)
ee_layers.append(EarthEngineLayer(ee_object=ee.Image().paint(clipped,0,3), vis_params={'palette':'red'}))
ee_layers.append(EarthEngineLayer(ee_object=fc, vis_params={}))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as geemap
except:
    import geemap
# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
roi = ee.Geometry.Polygon(
[[[-73.99891354682285, 40.74560250077625],
[-73.99891354682285, 40.74053023068626],
[-73.98749806525547, 40.74053023068626],
[-73.98749806525547, 40.74560250077625]]])
fc = ee.FeatureCollection('TIGER/2016/Roads').filterBounds(roi)
clipped = fc.map(lambda f: f.intersection(roi))
Map.centerObject(ee.FeatureCollection(roi), 17)
Map.addLayer(ee.Image().paint(roi, 0, 2), {'palette': 'yellow'}, 'ROI')
# Map.setCenter(-73.9596, 40.7688, 12)
Map.addLayer(ee.Image().paint(clipped, 0, 3), {'palette': 'red'}, 'clipped')
Map.addLayer(fc, {}, 'Census roads', False)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
roi = ee.Geometry.Polygon(
[[[-73.99891354682285, 40.74560250077625],
[-73.99891354682285, 40.74053023068626],
[-73.98749806525547, 40.74053023068626],
[-73.98749806525547, 40.74560250077625]]])
fc = ee.FeatureCollection('TIGER/2016/Roads').filterBounds(roi)
clipped = fc.map(lambda f: f.intersection(roi))
Map.centerObject(ee.FeatureCollection(roi), 17)
Map.addLayer(ee.Image().paint(roi, 0, 2), {'palette': 'yellow'}, 'ROI')
# Map.setCenter(-73.9596, 40.7688, 12)
Map.addLayer(ee.Image().paint(clipped, 0, 3), {'palette': 'red'}, 'clipped')
Map.addLayer(fc, {}, 'Census roads', False)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____ |
Kata 3/kata3.ipynb | ###Markdown
Kata 3 Test expression
###Code
# Practice tip 1: Try running it in a notebook.
a = 97
b = 55
# test expression
if a < b:
    # statement to be run
    print(b)
###Output
_____no_output_____
###Markdown
Problem 1
###Code
limite = 25
# Ask for the asteroid's speed and warn when it exceeds the limit.
velocidadBase = input("Enter a speed for the asteroid:")
if int(velocidadBase) > limite:
    print("Meteorite collision warning")
else:
    print("Nothing to worry about")
###Output
Meteorite collision warning
###Markdown
Problem 2
###Code
velocidadAsteroide = 19
limite = 20
# The > and == branches printed the same message, so they are merged into >=.
if velocidadAsteroide >= limite:
    print("You will be able to see the beam of light from Earth")
else:
    print("You will not be able to see the beam of light from Earth")
###Output
You will not be able to see the beam of light from Earth
###Markdown
Problem 3
###Code
sizeMinimo = 25
sizeMaximo = 1000
speedLimit = 25
lightSpeed = 20
# Convert the inputs to int once so the comparisons below stay readable.
sizeAsteroide = int(input("Enter the size of a new asteroid:"))
velocidadAsteroide = int(input("Enter the speed of the asteroid:"))
if velocidadAsteroide > speedLimit and sizeMinimo < sizeAsteroide < sizeMaximo:
    print("Dangerous asteroid will collide with Earth.")
elif velocidadAsteroide > speedLimit and sizeAsteroide >= sizeMaximo:
    print("The extent of the damage cannot be calculated")
elif velocidadAsteroide >= lightSpeed:
    print("You will see a light in the sky from Earth.")
else:
    # Covers both the too-small asteroid and every remaining case.
    print("Nothing to worry about.")
###Output
You will see a light in the sky from Earth.
|
.ipynb_checkpoints/[Transformer, BERT] An empirical investigation into the properties of standard word embeddings-checkpoint.ipynb | ###Markdown
IMDb Sentiment Analysis Task Importing useful packages
###Code
import torch
import tensorflow as tf
!pip install transformers
import transformers as ppb
import string
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import nltk
nltk.download('punkt')
nltk.download('stopwords')
import numpy as np
import matplotlib.pyplot as plt
import os
import numpy as np
import string
!pip install ipdb
import ipdb # deb
# from gensim.models.keyedvectors import KeyedVectors
# Spliting data
from sklearn.model_selection import train_test_split
from sklearn import metrics # For RUC
from nltk.stem import PorterStemmer
import pandas as pd
import re
import seaborn as sns
# from google.colab import files
from IPython import display
import logging
logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR)
# link = "https://drive.google.com/file/d/1smGRs2g2HoI6VSvonoZmWKzXOP6uPUaW/view?usp=sharing"
# _, id_t = link.split('d/')
# id = id_t.split('/')[0]
# print (id) # Verify that you have everything after '='
# # Install the PyDrive wrapper & import libraries.
# # This only needs to be done once per notebook.
# !pip install -U -q PyDrive
# from pydrive.auth import GoogleAuth
# from pydrive.drive import GoogleDrive
# from google.colab import auth
# from oauth2client.client import GoogleCredentials
# # Authenticate and create the PyDrive client.
# # This only needs to be done once per notebook.
# auth.authenticate_user()
# gauth = GoogleAuth()
# gauth.credentials = GoogleCredentials.get_application_default()
# drive = GoogleDrive(gauth)
# file_id = id
# downloaded = drive.CreateFile({'id':file_id})
# downloaded.FetchMetadata(fetch_all=True)
# downloaded.GetContentFile(downloaded.metadata['title'])
# !ls
# !unzip -qq aclImdb.zip
# !ls
# # imdb_dir = './data/aclImdb'
# imdb_dir = './aclImdb'
# # Reading in the training folder
# train_dir = os.path.join(imdb_dir, 'train')
# texts_tr_ = []
# labels_tr = []
# for label_type in ['neg', 'pos']:
# dir_name = os.path.join(train_dir, label_type)
# for fname in os.listdir(dir_name):
# if fname[-4:] == '.txt':
# f = open(os.path.join(dir_name, fname), encoding="utf8")
# texts_tr_.append(f.read())
# f.close()
# if label_type == 'neg':
# labels_tr.append(0)
# else:
# labels_tr.append(1)
# # Reading in the testing folder
# train_dir = os.path.join(imdb_dir, 'test')
# texts_tst_ = []
# labels_tst = []
# for label_type in ['neg', 'pos']:
# dir_name = os.path.join(train_dir, label_type)
# for fname in os.listdir(dir_name):
# if fname[-4:] == '.txt':
# f = open(os.path.join(dir_name, fname), encoding="utf8")
# texts_tst_.append(f.read())
# f.close()
# if label_type == 'neg':
# labels_tst.append(0)
# else:
# labels_tst.append(1)
# # Make sure that we have only 1 and 2 in the label
# (np.unique(labels_tr), np.unique(labels_tst))
# len(labels_tr)
# # Looking at 2 examples
# print(texts_tr_[1])
# print(labels_tr[1])
# print(texts_tst_[-10])
# print(labels_tst[-10])
###Output
_____no_output_____
###Markdown
Getting started Data We will try to solve the [Large Movie Review Dataset v1.0](http://ai.stanford.edu/~amaas/data/sentiment/) task from Maas et al. The dataset consists of IMDB movie reviews labeled by positivity from 1 to 10. The task is to label the reviews as **negative** or **positive**.
###Code
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
for file_path in os.listdir(directory):
with tf.io.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
# data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))#; ipdb.set_trace()
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["sentiment"] = 1
neg_df["sentiment"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
train_df, test_df = download_and_load_datasets()
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(train_df.describe())
display.display(train_df.head())
print("\n#############################\n")
print("Validation examples summary:")
display.display(test_df.describe())
display.display(test_df.head())
train_df['sentence'].apply(lambda x : len(x))
# mean lengh of review
train_df['sentence'].apply(lambda x : len(x)).mean()
train_df['sentence'][0]
###Output
_____no_output_____
###Markdown
Obtain Pre-trained BERT Model
###Code
!pip install torchtext
# # define VGG16 model
# # VGG16 = models.vgg16(pretrained=True)
# DistilBERT = models.transformers()
# For DistilBERT:
model_class, tokenizer_class, pretrained_weights = (ppb.DistilBertModel, ppb.DistilBertTokenizer, 'distilbert-base-uncased')
## Want BERT instead of distilBERT? Uncomment the following line:
#model_class, tokenizer_class, pretrained_weights = (ppb.BertModel, ppb.BertTokenizer, 'bert-base-uncased')
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# Load pretrained model/tokenizer
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
# move model to GPU if CUDA is available
if use_cuda:
model = model.cuda()
# tokenizer = tokenizer.cuda()
class LossPrettifier(object):
STYLE = {
'green' : '\033[32m',
'red' : '\033[91m',
'bold' : '\033[1m',
}
STYLE_END = '\033[0m'
def __init__(self, show_percentage=False):
self.show_percentage = show_percentage
self.color_up = 'green'
self.color_down = 'red'
self.loss_terms = {}
def __call__(self, epoch=None, **kwargs):
if epoch is not None:
print_string = f'Epoch {epoch: 5d} '
else:
print_string = ''
for key, value in kwargs.items():
pre_value = self.loss_terms.get(key, value)
if value > pre_value:
indicator = '▲'
show_color = self.STYLE[self.color_up]
elif value == pre_value:
indicator = ''
show_color = ''
else:
indicator = '▼'
show_color = self.STYLE[self.color_down]
if self.show_percentage:
show_value = 0 if pre_value == 0 \
else (value - pre_value) / float(pre_value)
key_string = f'| {key}: {show_color}{value:3.4f}({show_value:+3.4%}) {indicator}'
else:
key_string = f'| {key}: {show_color}{value:.4f} {indicator}'
# Trim some long outputs
key_string_part = key_string[:32]
print_string += key_string_part+f'{self.STYLE_END}\t'
self.loss_terms[key] = value
print(print_string)
reporter = LossPrettifier(show_percentage=True)
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss +=((1 / (batch_idx + 1)) * (loss.data - valid_loss))
# print training/validation statistics
reporter(epoch=epoch, LossA = train_loss, LossB = valid_loss)
## TODO: save the model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), save_path)
valid_loss_min = valid_loss
# return trained model
return model
# loaders_scratch = {'train' : train_loader, 'valid': valid_loader, 'test' :test_loader}
# # train the model
# model_scratch = train(15, loaders_scratch, model_scratch, optimizer_scratch,
# criterion_scratch, use_cuda, 'model_scratch.pt')
def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
# test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
!ls
###Output
'An empirical investigation into the properties of standard word embeddings.ipynb'
CNN-RNN.ipynb
'Deep Neural Networks for Supervised Learning Classification.ipynb'
'Details NN Design.ipynb'
Hyperparameter_Tuning_Model_store.ipynb
'IMDB-Classification Movie Review.ipynb'
Images
Introduction.ipynb
LICENSE
One_hot_and_Embeddings.ipynb
README.md
RNN_1D-CONVNN.ipynb
'[Transformer, BERT] An empirical investigation into the properties of standard word embeddings.ipynb'
###Markdown
Right now, the variable `model` holds a pretrained distilBERT model -- a version of BERT that is smaller, but much faster and requiring a lot less memory. Model 1: Preparing the Dataset Before we can hand our sentences to BERT, we need to do some minimal processing to put them in the format it requires. Tokenization Our first step is to tokenize the sentences -- break them up into words and subwords in the format BERT is comfortable with. Unfortunately the BERT tokenizer can only encode sequences of length less than or equal to 512 tokens; here we truncate to 50 tokens (`max_length=50`) to keep processing fast.
###Code
tokenized = train_df['sentence'][:1000].apply((lambda x: tokenizer.encode(x, add_special_tokens=True, max_length=50)))
# tokenized_test = test_df['sentence'][:500].apply((lambda x: tokenizer.encode(x, add_special_tokens=True, max_length=300)))
###Output
_____no_output_____
###Markdown
Padding After tokenization, `tokenized` is a list of sentences -- each sentence is represented as a list of tokens. We want BERT to process our examples all at once (as one batch). It's just faster that way. For that reason, we need to pad all lists to the same size, so we can represent the input as one 2-d array, rather than a list of lists (of different lengths).
###Code
max_len = 0
for i in tokenized.values:
if len(i) > max_len:
max_len = len(i)
padded = np.array([i + [0]*(max_len-len(i)) for i in tokenized.values])
# max_len = 0
# for i in tokenized_train.values:
# if len(i) > max_len:
# max_len = len(i)
# padded = np.array([i + [0]*(max_len-len(i)) for i in tokenized_train.values])
###Output
_____no_output_____
###Markdown
Our dataset is now in the `padded` variable, we can view its dimensions below:
###Code
np.array(padded).shape
###Output
_____no_output_____
###Markdown
Masking If we directly send `padded` to BERT, that would slightly confuse it. We need to create another variable to tell it to ignore (mask) the padding we've added when it's processing its input. That's what attention_mask is:
###Code
attention_mask = np.where(padded != 0, 1, 0)
attention_mask.shape
###Output
_____no_output_____
###Markdown
Model 1: And Now, Deep Learning! Now that we have our model and inputs ready, let's run our model! The `model()` function runs our sentences through BERT. The results of the processing will be returned into `last_hidden_states`.
###Code
input_ids = torch.tensor(padded)
attention_mask = torch.tensor(attention_mask)
if use_cuda:
input_ids = input_ids.cuda()
attention_mask = attention_mask.cuda()
with torch.no_grad():
last_hidden_states = model(input_ids, attention_mask=attention_mask)
###Output
_____no_output_____
###Markdown
Let's slice only the part of the output that we need. That is the output corresponding to the first token of each sentence. The way BERT does sentence classification is that it adds a token called `[CLS]` (for classification) at the beginning of every sentence. The output corresponding to that token can be thought of as an embedding for the entire sentence. We'll save those in the `features` variable, as they'll serve as the features for our logistic regression model.
###Code
features = last_hidden_states[0][:,0,:].cpu().numpy()
###Output
_____no_output_____
###Markdown
The labels indicating which sentence is positive and negative now go into the `labels` variable
###Code
batch_1 = train_df[:1000]
batch_1.head()
labels = batch_1['sentiment']
###Output
_____no_output_____
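###Markdown
 We can now feed these features to the classifier. As a minimal sketch (assuming scikit-learn's default hyperparameters and default 75/25 split -- these choices are illustrative, not prescribed), we split the `features`/`labels` arrays and fit a Logistic Regression on them:
###Code
# Minimal sketch (illustrative defaults assumed): train a logistic regression
# on the [CLS] features extracted above.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

train_features, test_features, train_labels, test_labels = train_test_split(
    features, labels, random_state=42)  # default 75/25 split

lr_clf = LogisticRegression()
lr_clf.fit(train_features, train_labels)
print(lr_clf.score(test_features, test_labels))  # mean accuracy on the held-out split
###Output
_____no_output_____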
###Markdown
Model 2: Train/Test Split Let's now split our dataset into a training set and testing set (Model 1 above used only a 1,000-review sample of the IMDB training set). Under the hood, the model is actually made up of two models. * DistilBERT processes the sentence and passes along some information it extracted from it on to the next model. DistilBERT is a smaller version of BERT developed and open sourced by the team at HuggingFace. It’s a lighter and faster version of BERT that roughly matches its performance. * The next model, a basic Logistic Regression model from scikit-learn, will take in the result of DistilBERT’s processing, and classify the sentence as either positive or negative (1 or 0, respectively). The data we pass between the two models is a vector of size 768. We can think of this vector as an embedding for the sentence that we can use for classification.
###Code
!pip install transformers
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
import torch
import transformers as ppb
import warnings
warnings.filterwarnings('ignore')
import random
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
len(tokenizer.vocab)
init_token = tokenizer.cls_token
eos_token = tokenizer.sep_token
pad_token = tokenizer.pad_token
unk_token = tokenizer.unk_token
print(init_token, eos_token, pad_token, unk_token)
init_token_idx = tokenizer.cls_token_id
eos_token_idx = tokenizer.sep_token_id
pad_token_idx = tokenizer.pad_token_id
unk_token_idx = tokenizer.unk_token_id
print(init_token_idx, eos_token_idx, pad_token_idx, unk_token_idx)
###Output
101 102 0 100
###Markdown
Another thing we need to handle is that the model was trained on sequences with a defined maximum length - it does not know how to handle sequences longer than it has been trained on. We can get the maximum length of these input sizes by checking the max_model_input_sizes for the version of the transformer we want to use. In this case, it is 512 tokens.
###Code
max_input_length = tokenizer.max_model_input_sizes['bert-base-uncased']
# max_input_length = 250
print(max_input_length)
###Output
512
###Markdown
Previously we have used the spaCy tokenizer to tokenize our examples. However we now need to define a function that we will pass to our TEXT field that will handle all the tokenization for us. It will also cut down the number of tokens to a maximum length. Note that our maximum length is 2 less than the actual maximum length. This is because we need to append two tokens to each sequence, one to the start and one to the end.
###Code
def tokenize_and_cut(sentence):
tokens = tokenizer.tokenize(sentence)
tokens = tokens[:max_input_length-2]
return tokens
###Output
_____no_output_____
###Markdown
Now we define our fields. The transformer expects the batch dimension to be first, so we set batch_first = True. As we already have the vocabulary for our text, provided by the transformer we set use_vocab = False to tell torchtext that we'll be handling the vocabulary side of things. We pass our tokenize_and_cut function as the tokenizer. The preprocessing argument is a function that takes in the example after it has been tokenized, this is where we will convert the tokens to their indexes. Finally, we define the special tokens - making note that we are defining them to be their index value and not their string value, i.e. 100 instead of [UNK] This is because the sequences will already be converted into indexes.We define the label field as before.
###Code
from torchtext import data
TEXT = data.Field(batch_first = True,
use_vocab = False,
tokenize = tokenize_and_cut,
preprocessing = tokenizer.convert_tokens_to_ids,
init_token = init_token_idx,
eos_token = eos_token_idx,
pad_token = pad_token_idx,
unk_token = unk_token_idx)
LABEL = data.LabelField(dtype = torch.float)
from torchtext import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
print(f"Number of training examples: {len(train_data)}")
print(f"Number of validation examples: {len(valid_data)}")
print(f"Number of testing examples: {len(test_data)}")
###Output
Number of training examples: 17500
Number of validation examples: 7500
Number of testing examples: 25000
###Markdown
We can check an example and ensure that the text has already been numericalized.
###Code
print(vars(train_data.examples[6]))
len(vars(train_data.examples[900])['text'])
###Output
_____no_output_____
###Markdown
We can use the convert_ids_to_tokens to transform these indexes back into readable tokens.
###Code
tokens = tokenizer.convert_ids_to_tokens(vars(train_data.examples[6])['text'])
print(tokens)
###Output
['another', 'british', 'cinema', 'flag', 'wave', '##r', '.', 'real', 'garbage', 'on', 'offer', 'here', 'once', 'again', '.', 'i', 'cannot', 'understand', '(', 'and', 'i', 'am', 'british', ')', 'why', 'this', 'over', 'the', 'top', ',', 'patriotic', 'nonsense', 'was', 'ever', 'made', '.', 'eight', 'years', 'mark', 'you', ',', 'from', 'when', 'the', 'second', 'world', 'war', 'had', 'actually', 'ended', '!', 'other', 'comment', '##er', "'", 's', 'here', 'have', 'remarked', 'on', 'the', 'editing', 'and', 'apparent', 'seam', '##less', 'use', 'of', 'archive', 'footage', '.', 'this', 'is', 'extremely', 'poorly', 'observed', '.', 'the', 'archive', 'footage', 'is', 'in', 'abundance', '.', 'model', 'aircraft', 'swing', 'from', 'wires', 'in', 'the', "'", 'action', 'scenes', "'", 'like', 'so', 'many', 'children', "'", 's', 'kite', '##s', 'in', 'the', 'wind', '.', 'the', 'usual', 'map', 'room', 'sequences', 'tattoo', 'the', 'movie', 'to', 'make', 'us', 'supposedly', 'drawn', 'into', 'the', 'whole', 'malta', 'event', '.', 'guinness', 'must', 'have', 'his', 'worst', 'acting', 'performance', 'ever', '.', 'the', 'shocking', 'back', 'drop', 'dog', 'fight', 'scenes', 'are', 'laugh', '##able', '.', 'hawkins', 'bore', '##s', 'us', 'all', 'to', 'death', 'in', 'the', 'map', 'room', 'area', '.', 'ea', '##ling', 'made', 'many', 'great', 'movies', '.', 'this', 'clearly', 'is', 'not', 'one', 'of', 'them', '.', 'they', 'should', 'have', 'stayed', 'away', 'from', 'such', 'un', '##con', '##vin', '##cing', 'rot', '!']
###Markdown
Although we've handled the vocabulary for the text, we still need to build the vocabulary for the labels.
###Code
LABEL.build_vocab(train_data)
BATCH_SIZE = 128
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
next(train_iterator.__iter__())
next(train_iterator.__iter__()).text.shape
next(valid_iterator.__iter__()).text.shape
next(test_iterator.__iter__()).text.shape
from transformers import BertTokenizer, BertModel
bert = BertModel.from_pretrained('bert-base-uncased')
import torch.nn as nn
class BERT_FNN_Sentiment(nn.Module):
def __init__(self,
bert,
hidden_dim,
number_classes):
super().__init__()
self.bert = bert
embedding_dim = bert.config.to_dict()['hidden_size']
        self.fc = nn.Linear(393216, number_classes)  # 393216 = 512 tokens * 768 hidden dims, i.e. the flattened BERT output (size taken from the PyTorch shape error)
# self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
with torch.no_grad():
embedded = self.bert(text)[0]
# ipdb.set_trace()
#embedded = [batch size, sent len, emb dim]
batch_size = embedded.shape[0]
# x = self.linear(x.view(batch_size, -1))
output = self.fc(embedded.view(batch_size, -1))
return output
import torch.nn as nn
class BERTGRUSentiment(nn.Module):
def __init__(self,
bert,
hidden_dim,
output_dim,
n_layers,
bidirectional,
dropout):
super().__init__()
self.bert = bert
embedding_dim = bert.config.to_dict()['hidden_size']
self.rnn = nn.GRU(embedding_dim,
hidden_dim,
num_layers = n_layers,
bidirectional = bidirectional,
batch_first = True,
dropout = 0 if n_layers < 2 else dropout)
self.out = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
with torch.no_grad():
embedded = self.bert(text)[0]
#embedded = [batch size, sent len, emb dim]
_, hidden = self.rnn(embedded)
#hidden = [n layers * n directions, batch size, emb dim]
if self.rnn.bidirectional:
hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))
else:
hidden = self.dropout(hidden[-1,:,:])
#hidden = [batch size, hid dim]
output = self.out(hidden)
#output = [batch size, out dim]
return output
# HIDDEN_DIM = 256
# OUTPUT_DIM = 1
# model = BERT_FNN_Sentiment(bert,
# HIDDEN_DIM,
# OUTPUT_DIM)
HIDDEN_DIM = 256
OUTPUT_DIM = 1
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.3
model = BERTGRUSentiment(bert,
HIDDEN_DIM,
OUTPUT_DIM,
N_LAYERS,
BIDIRECTIONAL,
DROPOUT)
###Output
_____no_output_____
###Markdown
We can check how many parameters the model has. Our standard models have under 5M, but this one has 112M! Luckily, 110M of these parameters are from the transformer and we will not be training those.
###Code
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 112,241,409 trainable parameters
###Markdown
In order to freeze parameters (not train them) we need to set their requires_grad attribute to False. To do this, we simply loop through all of the named_parameters in our model and if they're a part of the bert transformer model, we set requires_grad = False.
###Code
for name, param in model.named_parameters():
if name.startswith('bert'):
param.requires_grad = False
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 2,759,169 trainable parameters
###Markdown
We can double check the names of the trainable parameters, ensuring they make sense. As we can see, they are all parameters of the GRU (`rnn`) and the linear output layer (`out`).
###Code
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 10
best_valid_loss = float('inf')
total_loss_train = []
total_acc_train = []
total_loss_valid = []
total_acc_valid = []
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
total_loss_train.append(train_loss)
total_acc_train.append(train_acc)
total_loss_valid.append(valid_loss)
total_acc_valid.append(valid_acc)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut6-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
acc = total_acc_train
val_acc = total_acc_valid
loss = total_loss_train
val_loss = total_loss_valid
epochs = range(1, N_EPOCHS + 1)
plt.figure()
plt.plot(epochs, loss, label='Training loss')
plt.plot(epochs, val_loss, 'r--', label='Validation loss', linewidth=3)
plt.title('loss')
plt.legend()
plt.savefig("Training and Validation loss.eps", format='eps', dpi=1200)
plt.show()
plt.figure()
plt.plot(epochs, acc, label='Training acc')
plt.plot(epochs, val_acc,'r--', label='Validation acc', linewidth=3)
plt.title('accuracy')
plt.legend()
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.savefig("Training and validation accuracy.eps", format='eps', dpi=1200)
plt.show()
###Output
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
###Markdown
We'll load up the parameters that gave us the best validation loss and try these on the test set - which gives us our best results so far!
###Code
model.load_state_dict(torch.load('tut6-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
_____no_output_____
###Markdown
Inference We'll then use the model to test the sentiment of some sequences. We tokenize the input sequence, trim it down to the maximum length, add the special tokens to either side, convert it to a tensor, add a fake batch dimension and then pass it through our model.
###Code
def predict_sentiment(model, tokenizer, sentence):
model.eval()
tokens = tokenizer.tokenize(sentence)
tokens = tokens[:max_input_length-2]
indexed = [init_token_idx] + tokenizer.convert_tokens_to_ids(tokens) + [eos_token_idx]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(0)
prediction = torch.sigmoid(model(tensor))
return prediction.item()
predict_sentiment(model, tokenizer, "This film is terrible")
predict_sentiment(model, tokenizer, "This film is great")
###Output
_____no_output_____ |
3_face_recognition.ipynb | ###Markdown
Now, we've reached the final phase of our project. Here, we will capture a fresh face on our camera, and if this person had his face captured and trained before, our recognizer will make a “prediction”, returning its id and an index showing how confident the recognizer is with this match. SUDIP MITRA's AI DEVELOPMENT [email protected] github.com/sudipmitraonline
###Code
import cv2
import numpy as np
import os
#@Author : Sudip Mitra
#[email protected]
#github.com/sudipmitraonline
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX
# initiate id counter
id = 0
# names related to ids: example ==> Sudip: id=1, etc
names = ['None', 'Sudip', 'Subrata', 'Shikha', 'Shree', 'ETC']
# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height
# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)
while True:
ret, img =cam.read()
img = cv2.flip(img, -1) # Flip vertically
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(
gray,
scaleFactor = 1.2,
minNeighbors = 5,
minSize = (int(minW), int(minH)),
)
for(x,y,w,h) in faces:
cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)
id, confidence = recognizer.predict(gray[y:y+h,x:x+w])
        # If confidence is less than 100 ==> "0" : perfect match
if (confidence < 100):
id = names[id]
confidence = " {0}%".format(round(100 - confidence))
else:
id = "unknown"
confidence = " {0}%".format(round(100 - confidence))
cv2.putText(
img,
str(id),
(x+5,y-5),
font,
1,
(255,255,255),
2
)
cv2.putText(
img,
str(confidence),
(x+5,y+h-5),
font,
1,
(255,255,0),
1
)
cv2.imshow('camera',img)
    k = cv2.waitKey(10) & 0xff # Press 'ESC' to exit the video
if k == 27:
break
# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
romansims/roman_snIa_hostgal_specz_efficiencies.ipynb | ###Markdown
Roman Time Domain deep-field spec-z efficiency plotThis notebook constructs a redshift efficiency figure from the .DUMP file output of a SNANA simulation of the Roman time domain survey.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)
import glob
import math
import yaml
import os
from astropy.table import Table
plt.rcParams['text.usetex'] = False
plt.rcParams['mathtext.fontset'] = 'dejavuserif'
plt.rcParams['figure.figsize'] = (10,10)
plt.rcParams['legend.frameon'] = False
plt.rcParams['legend.fontsize'] = 19
plt.rcParams['legend.borderpad'] = 0.1
plt.rcParams['legend.labelspacing'] = 0.1
plt.rcParams['legend.handletextpad'] = 0.1
#plt.rcParams['legend.markerscale'] = 0.1
plt.rcParams['font.family'] = 'stixgeneral'
plt.rcParams['font.size'] = 20
plt.rcParams['axes.labelsize'] = 15
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
plt.rcParams['xtick.minor.size'] = 1.5 # major tick size in points
plt.rcParams['ytick.minor.size'] = 1.5 # major tick size in points
%matplotlib inline
def read_dump_file(filename, maxlines=None):
"""Read a SNANA .DUMP file into an astropy Table object"""
if maxlines is not None:
tab = Table.read(filename, format='ascii.basic', header_start=0, data_start=1, data_end=maxlines)
else:
tab = Table.read(filename, format='ascii.basic', header_start=0, data_start=1)
return tab
###Output
_____no_output_____
###Markdown
Read in the SNANA data And limit to only the DEEP field data set
###Code
roman = read_dump_file("data/PIP_WFIRST_EFFICIENCY_WFIRST_ROMAN_DEEP_G10_SEARCHEFF_0.DUMP", maxlines=None)
subaru = read_dump_file("data/PIP_WFIRST_EFFICIENCY_WFIRST_SUBARU_G10_SEARCHEFF_0.DUMP", maxlines=None)
#combined = read_dump_file("data/PIP_WFIRST_STARTERKIT+SETEXP_WFIRST_SIMDATA_G10.DUMP", maxlines=None)
romanfieldmask = (roman['FIELD']=='DEEP')
roman = roman[romanfieldmask]
subarufieldmask = (subaru['FIELD']=='DEEP')
subaru = subaru[subarufieldmask]
###Output
_____no_output_____
###Markdown
View the Ia and CC Host populations
###Code
fig = plt.figure(figsize=[12,8])
for sntype, axrow in zip(['Ia','CC'],[0,1]):
ax1 = fig.add_subplot(2,4,axrow*4+1)
ax2 = fig.add_subplot(2,4,axrow*4+2)
ax3 = fig.add_subplot(2,4,axrow*4+3)
ax4 = fig.add_subplot(2,4,axrow*4+4)
if sntype=='Ia':
explmask = roman['NON1A_INDEX']==0
detmask = (roman['NON1A_INDEX']==0) & (roman['SIM_SEARCHEFF_MASK']>0)
hostzmask = (roman['NON1A_INDEX']==0) & (roman['SIM_SEARCHEFF_MASK']>4)
else:
explmask = roman['NON1A_INDEX']>0
detmask = (roman['NON1A_INDEX']>0) & (roman['SIM_SEARCHEFF_MASK']>0)
hostzmask = (roman['NON1A_INDEX']>0) & (roman['SIM_SEARCHEFF_MASK']>4)
for mask,color in zip([explmask, detmask, hostzmask], ['b','g','r']):
sfr = roman[mask]['logsfr']
ztruee = roman[mask]['ZTRUE']
zcmb = roman[mask]['ZCMB']
mass = roman[mask]['logmass']
ssfr = sfr - mass
ax1.plot(zcmb, sfr, color=color, ls=' ', alpha=0.2, marker='.', ms=3)
ax2.plot(zcmb, mass, color=color, ls=' ', alpha=0.2, marker='.', ms=3)
ax3.plot(zcmb, ssfr, color=color, ls=' ', alpha=0.2, marker='.', ms=3)
ax4.plot(sfr, mass, color=color, ls=' ', alpha=0.2, marker='.', ms=3)
ax1.set_xlabel('redshift')
ax1.set_ylabel('log(SFR)')
ax2.set_xlabel('redshift')
ax2.set_ylabel('log(M)')
ax3.set_xlabel('redshift')
ax3.set_ylabel('log(sSFR)')
ax4.set_xlabel('log(SFR)')
ax4.set_ylabel('log(M)')
ax1.text(0.05, 0.95, sntype, fontsize=20, ha='left', va='top', transform=ax1.transAxes)
ax2.text(0.05, 0.95, sntype, fontsize=20, ha='left', va='top', transform=ax2.transAxes)
ax3.text(0.05, 0.95, sntype, fontsize=20, ha='left', va='top', transform=ax3.transAxes)
ax4.text(0.05, 0.95, sntype, fontsize=20, ha='left', va='top', transform=ax4.transAxes)
ax1.set_xlim(0, 3)
ax1.set_ylim(-2, 4)
ax2.set_xlim(0, 3)
ax2.set_ylim(6, 12)
ax3.set_xlim(0, 3)
ax3.set_ylim(-11.5, -6.5)
ax4.set_xlim(-2.5, 3)
ax4.set_ylim(6, 12)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Some quick counts of detected SN
###Code
ccmask = roman['NON1A_INDEX']>0
ccdetmask = (roman['NON1A_INDEX']>0) & (roman['SIM_SEARCHEFF_MASK']>0)
cchostzmask = (roman['NON1A_INDEX']>0) & (roman['SIM_SEARCHEFF_MASK']>4)
iamask = roman['NON1A_INDEX']==0
iadetmask = (roman['NON1A_INDEX']==0) & (roman['SIM_SEARCHEFF_MASK']>0)
iahostzmask = (roman['NON1A_INDEX']==0) & (roman['SIM_SEARCHEFF_MASK']>4)
nexpIa_roman = np.sum(iamask)
nexpCC_roman = np.sum(ccmask)
ndetIa_roman = np.sum(iadetmask)
ndetCC_roman = np.sum(ccdetmask)
nhostzCC_roman = np.sum(cchostzmask)
nhostzIa_roman = np.sum(iahostzmask)
print(" Explosions Detections GotSpecz")
print("Ia: {:10,d} {:10,d} {:10,d}".format(nexpIa_roman, ndetIa_roman, nhostzIa_roman))
print("CC: {:10,d} {:10,d} {:10,d}".format(nexpCC_roman, ndetCC_roman, nhostzCC_roman))
###Output
Explosions Detections GotSpecz
Ia: 8,742 7,313 6,208
CC: 38,628 16,616 12,061
###Markdown
exploratory summary plot Shows two ways of computing the 'specz efficiency'. The grey histogram is all the simulated SNIa that SNANA reports. Blue is the photometric detections (SIM_SEARCHEFF_MASK>0). Red is those that are photometrically detected and also get a specz (SIM_SEARCHEFF_MASK==5). The solid line in black is the efficiency computed as Red / Blue: the fraction of detected SNIa that also get a hostz (read the efficiency values from the y axis on the right side). The dashed line is the efficiency computed as Red / Grey: the fraction of all SNIa explosions that are both detected and get a host specz. We see that the dashed-line efficiency is always lower, and drops faster at high z, reflecting both that it gets harder to get a specz and that it's harder to find the SNIa in the first place. I think the curve we should be showing here is the solid line, because it reflects our estimate of the fraction of the detected SNIa sample that will have a specz, which is the measurable metric, and the one that would be widely understood as the "spectroscopic redshift recovery efficiency". Note that I don't really understand what is going on with the CC plot. Why would the Roman grism be so much less efficient at measuring specz for CC SN host galaxies at high redshift?
###Code
def make_plot_summary(sntype='Ia', ax=None):
if ax is None:
ax = plt.gca()
if sntype=='Ia':
explmask = roman['NON1A_INDEX']==0
detmask = (roman['NON1A_INDEX']==0) & (roman['SIM_SEARCHEFF_MASK']>0)
specmask = (roman['NON1A_INDEX']==0) & (roman['SIM_SEARCHEFF_MASK']>1)
sntypetext = 'SNIa'
elif sntype=='CC':
explmask = roman['NON1A_INDEX']>0
detmask = (roman['NON1A_INDEX']>0) & (roman['SIM_SEARCHEFF_MASK']>0)
specmask = (roman['NON1A_INDEX']>0) & (roman['SIM_SEARCHEFF_MASK']>1)
sntypetext = 'CCSN'
bins = np.arange(0,3.04,0.2)
nexpl = int(np.sum(explmask))
ndet = int(np.sum(detmask))
nspecz = int(np.sum(specmask))
histout0 = ax.hist(roman['ZCMB'][explmask], color='k', alpha=0.1, bins=bins, label='all explosions')
histout1 = ax.hist(roman['ZCMB'][detmask], color='b', alpha=0.2, bins=bins, label='phot detections')
histout2 = ax.hist(roman['ZCMB'][specmask], color='r', alpha=0.2, bins=bins, label='got host specz')
ax.set_title(f'Out of {nexpl:d} {sntypetext} explosions {ndet:d} are detected,\n and {nspecz:d} get a Roman grism specz', fontsize=16)
ax.legend(loc='upper left')
ax2 = ax.twinx()
speczeff0 = histout2[0] / histout0[0] # no.specz / no.explosions
speczeff1 = histout2[0] / histout1[0] # no.specz / no.phot-detections
binmidpts = (histout2[1][1:] + histout2[1][:-1])/2.
ax2.plot(binmidpts, speczeff1, marker='d', ls='-', color='k', lw=3, label=' #specz/#det')
ax2.plot(binmidpts, speczeff0, marker='s', ls='--', color='k', lw=3, label=' #specz/#expl')
ax2.legend(loc='upper right')
ax2.set_ylabel("fraction with host specz", rotation=-90, labelpad=20)
ax.set_ylabel(f"Number of {sntypetext}")
ax.set_xlabel('Redshift')
ax.set_xlim(0, 2.95)
ax2.set_ylim(0, 1.5)
fig = plt.figure(figsize=[10,8])
axIa = fig.add_subplot(2,1,1)
make_plot_summary(sntype='Ia', ax=axIa)
axCC = fig.add_subplot(2,1,2)
make_plot_summary(sntype='CC', ax=axCC)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Making the Redshift Efficiency Figure:
###Code
def plot_efficiency_vs_z(dat, bins=np.arange(0,3.01,0.1), ax=None, **kwargs):
"""Plot the redshift measurement efficiency vs redshift for
the given (subset) of the SNANA sim data.
    dat : an astropy Table, read from a SNANA .DUMP file. This function presumes
    the user has made all desired selection cuts to the data, such as limiting
    to the FIELD of interest or selecting a specific SN subclass using the NON1A_INDEX.
ax : the axes to plot on. Use None to use current axes or make new.
bins : passed to np.histogram()
kwargs : passed to plt.plot()
"""
if ax is None:
ax = plt.gca()
detmask = dat['SIM_SEARCHEFF_MASK']>0 # All photometrically detected SNe
speczmask = dat['SIM_SEARCHEFF_MASK']>4 # All detected SNe that get a specz
num_det = np.histogram(dat['ZCMB'][detmask], bins=bins)[0]
num_specz, bin_edges = np.histogram(dat['ZCMB'][speczmask], bins=bins)
midpt = (bin_edges[1:]+bin_edges[:-1])/2.
speczefficiency = num_specz / num_det
ax.plot(midpt, speczefficiency, **kwargs)
return
def make_hostz_efficiency_fig(roman, subaru, sntype='Ia', field='DEEP', scalefactor=1,
showseechange=True):
"""Construct a figure showing the host spectroscopic redshift measuremnet
efficiency curves for Type Ia (sntype='Ia') or CC SN (sntype='CC').
scalefactor : fudge factor to rescale the reported counts of SNe in the survey. For example,
so that it matches the count of total SNe produced in a different survey sim.
"""
fig = plt.figure(figsize=[8,4])
ax1 = fig.add_subplot(1,1,1)
ax2 = ax1.twinx()
if field != 'DEEP' and field != 'SHALLOW':
raise RuntimeError(f"field={field} is not known.")
# Limit to only the field of interest DEEP or SHALLOW (meaning 'wide')
# and the SN sub-class of interest
if sntype == 'Ia':
hist_tick_step = 200
xgrismtext=2.08
ygrismtext=1.05
xalltext=0.95
yalltext=0.25
mfc=None
typestr = 'SNIa'
romanmask = (roman['NON1A_INDEX']==0) & (roman['FIELD']==field)
subarumask = (subaru['NON1A_INDEX']==0) & (subaru['FIELD']==field)
elif sntype == 'CC':
hist_tick_step = 800
xgrismtext=1.2
ygrismtext=0.85
xalltext=1.45
yalltext=0.55
mfc='w'
typestr = 'CCSN'
romanmask = (roman['NON1A_INDEX']!=0) & (roman['FIELD']==field)
#romanwidemask = (roman['NON1A_INDEX']!=0) & (roman['FIELD']==field)
subarumask = (subaru['NON1A_INDEX']!=0) & (subaru['FIELD']==field)
else:
raise RuntimeError(f"sntype={sntype} is not known.")
roman = roman[romanmask]
subaru = subaru[subarumask]
# get approximate counts of detections and spectroscopic redshifts (to nearest 100)
ndet_roman = int(np.round(scalefactor * len(roman)/1e2))*100
ndet_subaru = int(np.round(scalefactor * len(subaru)/1e2))*100
# only count roman host specz above z=0.8 and only count subaru host specz below 0.8
nhostz_roman = int(np.round(scalefactor * np.sum(
(roman['ZCMB']>0.8) & (roman['SIM_SEARCHEFF_MASK']==5))/100.))*100
nhostz_subaru = int(np.round(scalefactor * np.sum(
(subaru['ZCMB']<0.8) & (subaru['SIM_SEARCHEFF_MASK']==5))/100.))*100
# plot the subaru efficiency only up to z=0.85 where we lose Halpha
plot_efficiency_vs_z(subaru, ax=ax1, bins=np.arange(0.1, 0.85, 0.15),
color='blue', marker='d', mfc=mfc,
ms=6, ls='-', label='Subaru+PFS', zorder=3)
# plot the roman grism efficiency starting at z=0.3, all the way to 3.0
plot_efficiency_vs_z(roman, ax=ax1, bins=np.arange(0.3, 3.05, 0.15),
color='firebrick', marker='o', mfc=mfc,
ms=6, ls='-', label='Roman Grism',
zorder=1)
# make a "squished down" histogram showing all detections
histvals, binvals, patches = ax2.hist(roman['ZCMB'],
bins=np.arange(0., 3.25, 0.15),
weights=scalefactor * np.ones(len(roman['ZCMB'])),
color='k', alpha=0.1, zorder=0, density=False)
maxhistval = int(np.max(histvals))
ax2.set_ylim(0,2.5*maxhistval)
ax2.set_yticks(range(0,maxhistval,hist_tick_step))
ax2.text(1.12, 0.07, "# Detected", size=16, rotation=-90, transform=ax2.transAxes)
ax1.set_xlabel(r'$z$')
ax1.set_ylabel(r'fraction with attainable redshift')
# Show the See Change hostz efficiency estimate
if showseechange:
ax1.plot([0.97,1.5], [0.75,0.75], marker=' ', color='teal', lw=4, alpha=0.3)
ax1.text(0.84, 0.73,
"Ground-based specz for\nSee Change low-SFR hosts\n(Williams+ 2020)",
ha='left', va='top', color='teal', fontsize=14)
# Add text reporting the approximate counts
ax1.text(0.1, 1.1, f'~{nhostz_subaru:,d} {typestr} host redshifts from Subaru+PFS',
size=16, color='blue')
ax1.text(xgrismtext, ygrismtext, f'~{nhostz_roman:,d} {typestr} host-z\nfrom Roman Grism',
ha='left', va='top',
size=16, color='firebrick', backgroundcolor='w')
hostz_pct = int(((nhostz_subaru + nhostz_roman )/ndet_roman ) * 100)
ax1.text(xalltext, yalltext,
f'from a total of ~{ndet_roman:,d}\n {typestr} detections\n(net efficiency ~{hostz_pct:d}%)',
ha='left', va='top',
size=16, color='k')
ax1.set_ylim(0,1.19)
ax1.set_xlim(0,2.99)
ax1.xaxis.set_major_locator(MultipleLocator(1))
#ax1.xaxis.set_major_formatter('{x:.0f}')
# For the minor ticks, use no labels; default NullFormatter.
ax1.xaxis.set_minor_locator(MultipleLocator(0.2))
plt.tight_layout()
return
make_hostz_efficiency_fig(roman, subaru, sntype='Ia', field='DEEP', scalefactor=1.35, showseechange=True)
plt.savefig('SNIa_host_z_efficiency_v2.1.pdf',bbox_inches='tight')
#plt.savefig('SNIa_host_z_efficiency.png',bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Don't trust the CC SN figure I don't understand why the specz efficiency for CCSN hosts drops so much faster than it does for the Ia host galaxies.
###Code
make_hostz_efficiency_fig(roman, subaru, sntype='CC')
plt.savefig('CCSN_host_z_efficiency.pdf',bbox_inches='tight')
plt.savefig('CCSN_host_z_efficiency.png',bbox_inches='tight')
###Output
_____no_output_____ |
Missions_to_Mars/app/mission_to_mars_analysis.ipynb | ###Markdown
Web Scraping Activities
###Code
# Import Dependencies
from bs4 import BeautifulSoup as bs
from splinter import Browser
from webdriver_manager.chrome import ChromeDriverManager
import pandas as pd
###Output
_____no_output_____
###Markdown
1. NASA Mars News
###Code
# Create an executable path
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# Visit the NASA mars news site
url = 'https://mars.nasa.gov/news/'
#print(browser)
browser.visit(url)
# Convert the browser html to a soup object
html = browser.html
soup = bs(html, 'html.parser')
slide_element = soup.select_one('ul.item_list li.slide')
slide_element.find("div", class_='content_title')
news_title = slide_element.find("div", class_='content_title').get_text()
print(news_title)
news_p = slide_element.find("div", class_='article_teaser_body').get_text()
print(news_p)
###Output
[WDM] - ====== WebDriver manager ======
[WDM] - Current google-chrome version is 89.0.4389
[WDM] - Get LATEST driver version for 89.0.4389
###Markdown
2. JPL Mars Space Images - Featured Image
###Code
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = "https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html"
browser.visit(url)
#use splinter code to click the second button for the full image
full_image_element = browser.find_by_tag('button')[1]
full_image_element.click()
html = browser.html
image_soup = bs(html, 'html.parser')
img_url_rel = image_soup.find('img', class_='fancybox-image').get('src')
img_url = f'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/{img_url_rel}'
###Output
[WDM] - ====== WebDriver manager ======
[WDM] - Current google-chrome version is 89.0.4389
[WDM] - Get LATEST driver version for 89.0.4389
[WDM] - Driver [/Users/gigijones/.wdm/drivers/chromedriver/mac64/89.0.4389.23/chromedriver] found in cache
###Markdown
3. Mars Facts
###Code
df = pd.read_html('https://space-facts.com/mars/')[0]
df.columns=['Description', 'Mars']
df.set_index('Description', inplace=True)
df
###Output
_____no_output_____
###Markdown
4. Mars Hemispheres
###Code
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
hemisphere_image_urls = []
links = browser.find_by_css("a.product-item h3")
for index in range(len(links)):
hemisphere = {}
browser.find_by_css("a.product-item h3")[index].click()
sample_element = browser.links.find_by_text('Sample').first
# title = browser.find_by_css("h2.title").text
# link = sample_element["href"]
hemisphere['title'] = browser.find_by_css("h2.title").text
hemisphere['link'] = sample_element['href']
hemisphere_image_urls.append(hemisphere)
print("Retrieve the title and link")
browser.back()
print(hemisphere_image_urls)
browser.quit()
###Output
_____no_output_____ |
SG/pipeline/ipynb/covd_view-notClustered_ilastik.ipynb | ###Markdown
Rotate the pointcloud
###Code
f='../npy/clusterable_embedding.merge.38-52-53-57.ilastik.-intensity.+xy.npy'
embedding = np.load(f,allow_pickle=True)
df = pd.DataFrame(data=embedding,columns=['x','y','z','label'])
df['label'] = df['label'].apply(round).apply(str)
colorsIdx = {"38": "red","52": "green","53": "blue","57": "goldenrod"}
cols = df['label'].map(colorsIdx)
trace=[go.Scatter3d(x=df['x'], y=df['y'], z=df['z'], mode='markers',marker=dict(size=1,opacity=0.5,color=cols))]
x_eye = -1.25
y_eye = 2
z_eye = 0.5
layout = go.Layout(
title='Animation Test',
width=600,
height=600,
scene=dict(camera=dict(eye=dict(x=x_eye, y=y_eye, z=z_eye))),
updatemenus=[dict(type='buttons',
showactive=False,
y=1,
x=0.8,
xanchor='left',
yanchor='bottom',
pad=dict(t=45, r=10),
buttons=[dict(label='Play',
method='animate',
args=[None, dict(frame=dict(duration=2, redraw=False),
transition=dict(duration=0),
fromcurrent=True,
mode='immediate'
)]
)
]
)
]
)
def rotate_z(x, y, z, theta):
w = x+1j*y
return np.real(np.exp(1j*theta)*w), np.imag(np.exp(1j*theta)*w), z
frames=[]
for t in np.arange(0, 6.26, 0.05):
xe, ye, ze = rotate_z(x_eye, y_eye, z_eye, -t)
frames.append(dict(layout=dict(scene=dict(camera=dict(eye=dict(x=xe, y=ye, z=ze))))))
fig = go.Figure(data=trace, layout=layout, frames=frames)
fig.write_html(f+'.html', auto_open=True)
###Output
_____no_output_____
###Markdown
Plot samples with a discrete color sequence (different plots will have different color order)
###Code
from plotly.graph_objs import *
import pandas as pd
import plotly.express as px
for f in glob.glob(r'../npy/clusterable_embedding.merge.38-52-53-57.ilastik.*.npy'):
print(f)
data = np.load(f,allow_pickle=True)
df = pd.DataFrame(data=data,columns=['x','y','z','label'])
df['label'] = df['label'].apply(round).apply(str)
fig = px.scatter_3d(df, x="x", y="y", z="z", color="label", hover_name="label",
color_discrete_sequence=px.colors.qualitative.Set2)
fig.update_traces(marker=dict(size=1,opacity=0.75),selector=dict(mode='markers'))
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
fig.write_html(f+'.html', auto_open=True)
###Output
_____no_output_____
###Markdown
We can try to cluster the points in the multisample cloud.
###Code
from plotly.graph_objs import *
import pandas as pd
import plotly.express as px
from sklearn.cluster import OPTICS
from sklearn.cluster import DBSCAN
import numpy as np
X = data[:,:3]
#clustering = OPTICS(min_samples=50).fit(X)
clustering = DBSCAN(eps=0.6, min_samples=50).fit(X)
df['label_optic'] = clustering.labels_
df['label_optic'] = df['label_optic'].apply(str)
fig = px.scatter_3d(df, x="x", y="y", z="z", color="label_optic", hover_name="label",
color_discrete_sequence=px.colors.qualitative.Set1)
fig.update_traces(marker=dict(size=2,opacity=0.25),selector=dict(mode='markers'))
fig.update_layout(margin=dict(l=0, r=0, b=0, t=0))
fig.write_html('merge-samples.html', auto_open=True)
###Output
_____no_output_____
###Markdown
See how the point cloud changes with sampling size
###Code
from numpy.random import normal as normal
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.animation as animation
import matplotlib
####################################################################################################
# load logvec from different samples
###################################################################################################
import numpy as np
import glob
import umap
import hdbscan
import sklearn.cluster as cluster
from sklearn.cluster import OPTICS
import pandas as pd
import plotly.express as px
xs = []
ys = []
zs = []
lista = [ f for f in glob.glob(r'../npy/52.txt.r1.s*.logvec.npy') ]
'''Use this to see individual random clusters'''
for f in lista:
X = np.load(f,allow_pickle=True) # create the array of vectorized covd data
clusterable_embedding = umap.UMAP(min_dist=0.0,n_components=3,random_state=42).fit_transform(X) # this is used to identify clusters
xs.append(list(clusterable_embedding[:,0]))
ys.append(list(clusterable_embedding[:,1]))
zs.append(list(clusterable_embedding[:,2]))
'''Use this to see random sampling expansion'''
for idx in range(2,len(lista)):
print(idx)
logvec_list = [ np.load(f,allow_pickle=True) for f in lista[:idx] ]
X = np.vstack(logvec_list) # create the array of vectorized covd data
clusterable_embedding = umap.UMAP(min_dist=0.0,n_components=3,random_state=42).fit_transform(X) # this is used to identify clusters
xs.append(list(clusterable_embedding[:,0]))
ys.append(list(clusterable_embedding[:,1]))
zs.append(list(clusterable_embedding[:,2]))
nfr = len(xs) # Number of frames
fps = 1 # Frame per sec
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
sct, = ax.plot([], [], [], ".", markersize=1)
def update(ifrm, xa, ya, za):
sct.set_data(xa[ifrm], ya[ifrm])
sct.set_3d_properties(za[ifrm])
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
ax.set_zlim(-10,10)
ani = animation.FuncAnimation(fig, update, nfr, fargs=(xs,ys,zs), interval=1000)
fn = 'plot_3d_scatter_pooling_samples'
ani.save(fn+'.mp4',writer='ffmpeg',fps=fps)
ani.save(fn+'.gif',writer='imagemagick',fps=fps)
plt.rcParams['animation.html'] = 'html5'
ani
###Output
_____no_output_____ |
Notebooks/Supervised/Supervised Example.ipynb | ###Markdown
Supervised Machine Learning Examples Some examples of supervised machine learning in Python. First, load up a ton of modules...
###Code
import pandas as pd
import itertools
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn import metrics
pd.options.mode.chained_assignment = None
###Output
_____no_output_____
###Markdown
Load the data Next, we have to load the data into a dataframe. In order to have a balanced dataset, we will use 10000 records from Alexa, which will represent the non-malicious domains, and 10000 records from `gameoverdga`, representing the malicious domains. You can see that at the end we have 10000 of each.
###Code
df = pd.read_csv( '../../data/dga-full.csv' )
# Filter to alexa and gameoverdga
df = df[df['dsrc'].isin(['alexa','gameoverdga'])]
df.dsrc.value_counts()
###Output
_____no_output_____
###Markdown
Add a Target ColumnFor our datasets, we need a numeric column to represent the classes. In our case we are going to call the column `isMalicious` and assign it a value of `0` if it is not malicious and `1` if it is.
###Code
df['isMalicious'] = df['dsrc'].apply( lambda x: 0 if x == "alexa" else 1 )
###Output
_____no_output_____
###Markdown
Perform the Train/Test Split For this, let’s create a rather small training data set, as it will reduce the time to train up a model. Feel free to try a 15%, 20% or even a 30% portion for the training data (lower percentages for slower machines). In this example, we will split 30% for train and 70% for test. Normally you would want most of the data in the training data, but more training data can considerably extend the time needed to train up a model. We're also going to need a list of column names for the feature columns as well as the target column.
###Code
train, test = train_test_split(df, test_size = 0.7)
features = ['length', 'dicts', 'entropy','numbers', 'ngram']
target = 'isMalicious'
###Output
_____no_output_____
###Markdown
Create the Classifiers The next step is to create the classifiers. What you'll see is that scikit-learn maintains a constant interface for every machine learning algorithm. For a supervised model, the steps are: 1. Create the classifier object, 2. Call the `.fit()` method with the training data set and the target, 3. To make a prediction, call the `.predict()` method.
###Code
#Create the Random Forest Classifier
random_forest_clf = RandomForestClassifier(n_estimators=10,
max_depth=None,
min_samples_split=2,
random_state=0)
random_forest_clf = random_forest_clf.fit( train[features], train[target])
#Next, create the SVM classifier
svm_classifier = svm.SVC()
svm_classifier = svm_classifier.fit(train[features], train[target])
###Output
_____no_output_____
###Markdown
Comparing the ClassifiersNow that we have two different classifiers, let's compare them and see how they perform. Fortunately, Scikit has a series of functions to generate metrics for you. The first is the cross validation score.
###Code
scores = cross_val_score(random_forest_clf, train[features], train[target])
scores.mean()
###Output
_____no_output_____
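###Markdown
For a like-for-like comparison (a small addition beyond the original walkthrough), the same cross-validation can be run on the SVM classifier:
###Code
# Cross-validate the SVM on the same training data for a fair comparison
svm_scores = cross_val_score(svm_classifier, train[features], train[target])
svm_scores.mean()
###Output
_____no_output_____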
###Markdown
We'll need to get the predictions from both classifiers, so we add columns to the test and training sets for the predictions.
###Code
test['predictions'] = random_forest_clf.predict( test[features] )
train['predictions'] = random_forest_clf.predict( train[features] )
test['svm-predictions'] = svm_classifier.predict( test[features])
train['svm-predictions'] = svm_classifier.predict( train[features])
test.head()
###Output
_____no_output_____
###Markdown
Confusion MatrixThese are a little confusing (yuk yuk), but are a very valuable tool in evaluating your models. Scikit-learn has a function to generate a confusion matrix as shown below.
###Code
confusion_matrix( test['isMalicious'], test['predictions'])
###Output
_____no_output_____
###Markdown
The code below generates a nicer presentation of the confusion matrix for the random forest classifier.From: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
###Code
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix( test['isMalicious'], test['predictions'])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Not Malicious', 'Malicious'],
title='RF Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Not Malicious', 'Malicious'], normalize=True,
title='RF Normalized confusion matrix')
plt.show()
###Output
_____no_output_____
###Markdown
And again for the SVM classifier.
###Code
# Compute confusion matrix
svm_cnf_matrix = confusion_matrix( test['isMalicious'], test['svm-predictions'])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(svm_cnf_matrix, classes=['Not Malicious', 'Malicious'],
title='SVM Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(svm_cnf_matrix, classes=['Not Malicious', 'Malicious'], normalize=True,
title='SVM Normalized confusion matrix')
plt.show()
###Output
_____no_output_____
###Markdown
Feature ImportanceRandom Forest has a feature which can calculate the importance for each feature it uses in building the forest. This can be calculated with this property:`random_forest_clf.feature_importances_`.
###Code
importances = random_forest_clf.feature_importances_
importances
###Output
_____no_output_____
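###Markdown
The raw array is easier to read when paired with the feature names. A small convenience sketch using the `features` list defined earlier:
###Code
# Pair each feature name with its importance, highest first
sorted(zip(features, importances), key=lambda pair: pair[1], reverse=True)
###Output
_____no_output_____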
###Markdown
You can also visualize this with the following code, from: http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html
###Code
std = np.std([tree.feature_importances_ for tree in random_forest_clf.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(test[features].shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(test[features].shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(test[features].shape[1]), indices)
plt.xlim([-1, test[features].shape[1]])
plt.show()
###Output
Feature ranking:
1. feature 2 (0.410171)
2. feature 0 (0.304949)
3. feature 3 (0.197083)
4. feature 1 (0.082497)
5. feature 4 (0.005300)
###Markdown
You can calculate the accuracy with the `metrics.accuracy_score()` method, and finally, there is `metrics.classification_report()`, which will calculate all the metrics except accuracy at once.
###Code
pscore = metrics.accuracy_score(test['isMalicious'], test['predictions'])
pscore_train = metrics.accuracy_score(train['isMalicious'], train['predictions'])
print( metrics.classification_report(test['isMalicious'], test['predictions'], target_names=['Not Malicious', 'Malicious'] ) )
svm_pscore = metrics.accuracy_score(test['isMalicious'], test['svm-predictions'])
svm_pscore_train = metrics.accuracy_score(train['isMalicious'], train['svm-predictions'])
print( metrics.classification_report(test['isMalicious'], test['svm-predictions'], target_names=['Not Malicious', 'Malicious'] ) )
print( svm_pscore, svm_pscore_train)
print( pscore, pscore_train)
###Output
0.999928571429 1.0
|
docs/tutorials/OutCoef-Tutorial.ipynb | ###Markdown
OutCoef AnalysisAn example notebook to look at coefficient outputs.
###Code
# standard python modules
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.colors as colors
%matplotlib inline
import time
import scipy.interpolate as interpolate
# exptool classes
from exptool.io import outcoef
from exptool.utils import style
import exptool
import pkg_resources
###Output
_____no_output_____
###Markdown
First, give ourselves access to the default files, which ship with exptool.
###Code
cyl_coef_file = pkg_resources.resource_filename('exptool', 'tests/outcoef.star.run0.dat')
sph_coef_file = pkg_resources.resource_filename('exptool', 'tests/outcoef.dark.run0.dat')
###Output
_____no_output_____
###Markdown
The first file we will read in contains the coefficients for a cylindrical component. The output follows a specific format, listed in the documentation:
###Code
O1 = outcoef.OutCoef(cyl_coef_file)
print(O1.read_binary_eof_coefficients.__doc__)
###Output
OutCoef: reading Cylinder coefficients . . .
read_binary_eof_coefficients
definitions to read EXP-generated binary coefficient files (generated by EmpOrth9thd.cc dump_coefs)
the file is self-describing, so no other items need to be supplied.
inputs
----------------------
coeffile : input coefficient file to be parsed
returns
----------------------
times : vector, time values for which coefficients are sampled
coef_array : (rank 4 matrix)
0: times
1: cos/0, sin/1 (note all m=0 sine terms are 0)
2: azimuthal order
3: radial order
###Markdown
Similarly for a spherical component, we can read in and check the documentation for what each dimension means.
###Code
O2 = outcoef.OutCoef(sph_coef_file)
print(O2.read_binary_sl_coefficients.__doc__)
###Output
OutCoef: reading SphereSL coefficients . . .
read_binary_sl_coefficients
definitions to read EXP-generated binary coefficient files (generated by SphericalBasis.cc dump_coefs)
the file is self-describing, so no other items need to be
supplied.
this is for NEW yaml coefficients
inputs
----------------------
coeffile : input coefficient file to be parsed
returns
----------------------
times : vector, time values for which coefficients are sampled
coef_array : (rank 3 matrix)
0: times
1: azimuthal (L) order (unrolled, so l, then m (first cos, then sin)
2: radial order
###Markdown
The default output will depend on the geometry of the component. Let's compare the lowest-order function for each:
###Code
plt.plot(O1.T,O1.coefs[:,0,0,0])
plt.plot(O2.T,O2.coefs[:,0,0])
###Output
_____no_output_____
###Markdown
There is also a hidden set of definitions that will repackage the coefficients into a dictionary-based structure for clearer organisation.
###Code
O1._repackage_cylindrical_coefficients()
O2._repackage_spherical_coefficients()
l = 0
m = 0
n = 0
p = 'cos'
plt.plot(O1.T,O1.C[m][p][n])
plt.plot(O2.T,O2.C[l][m][p][n])
###Output
_____no_output_____
###Markdown
And the corresponding documentation...
###Code
print(O1._repackage_cylindrical_coefficients.__doc__)
print(O2._repackage_spherical_coefficients.__doc__)
###Output
redefine the dictionary of cylindrical coefficients to be more interpretable, by sorting on
-m order
-cos/sin
-n order
-time
this formulation has the advantage of being more straightforward to read:
e.g. to plot the lowest-order cosine function, one could:
plt.plot(self.T,self.C[0]['cos'][0])
redefine the dictionary of spherical coefficients to be more interpretable, by sorting on
-l order
-morder
-cos/sin
-n order
-time
this formulation has the advantage of being more straightforward to read and following spherical harmonic conventions
e.g. to plot the lowest-order cosine function, one could:
l = 1
m = 1
p = 'cos'
n = 0
plt.plot(self.T,self.C[l][m][p][n])
|
notebooks/bias/skin-color.ipynb | ###Markdown
Bias in Skin Colors* We basically all have the same "color" of skin: redTODO:* based on histogram alter brightness to match appearance instead of doing it by hand
###Code
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (30, 8)
def martianize(img):
    # Swap the red and green channels (RGB -> GRB)
    r, g, b = cv.split(img)
    new_img = cv.merge((g, r, b))
    return new_img
def smurfify(img):
    # Swap the red and blue channels (RGB -> BGR)
    r, g, b = cv.split(img)
    new_img = cv.merge((b, g, r))
    return new_img
def brighter(img, value):
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
h, s, v = cv.split(hsv)
# to avoid over- or underflow
v = v.astype('int16')
v += value
v = np.clip(v, 0, 255)
v = v.astype('uint8')
final_hsv = cv.merge((h, s, v))
bright_img = cv.cvtColor(final_hsv, cv.COLOR_HSV2BGR)
return bright_img
def hist(img):
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
    # to avoid false values due to low light, low-light values are discarded using the cv.inRange() function
    mask = cv.inRange(hsv, np.array((0., 60.,32.)), np.array((180.,255.,255.)))
    # for the histogram, only Hue is considered
    channels = [0]
bins = 180
ranges = [0,180]
    # compute the hue histogram, normalise it, and plot it
hist = cv.calcHist([hsv], channels, mask, [bins], ranges)
cv.normalize(hist,hist,0,255,cv.NORM_MINMAX);
plt.hist(hist, bins=bins);
def load(path):
img = cv.imread(path)
img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
return img
###Output
_____no_output_____
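###Markdown
Toward the TODO above, here is a minimal sketch that matches brightness automatically, assuming the mean of the HSV value channel is an adequate brightness proxy. The `target_v` default of 128 is an arbitrary mid-range assumption, not something from the original notebook:
###Code
def auto_brighten(img, target_v=128):
    # Estimate current brightness as the mean of the HSV value channel,
    # then shift toward the target using the brighter() helper above
    hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
    v_mean = hsv[..., 2].mean()
    return brighter(img, int(target_v - v_mean))
###Output
_____no_output_____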
###Markdown
Examples
###Code
!ls
img = load('olli.jpg')
img = brighter(img, -50)
img = martianize(img)
plt.imshow(img);
img = load('obama-tutu.jpg')
img = brighter(img, 50)
img = martianize(img)
plt.imshow(img);
img = load('Tenzin_Gyatso.jpg')
img = brighter(img, 50)
img = martianize(img)
plt.imshow(img);
img = load('kalle-schwensen.jpg')
img = brighter(img, 50)
img = martianize(img)
plt.imshow(img);
img = load('Archbishop-Tutu.jpg')
img = brighter(img, 50)
img = martianize(img)
plt.imshow(img);
hist(img)
img = load('genscher.jpg')
img = brighter(img, -50)
img = martianize(img)
# img = smurfify(img)
plt.imshow(img);
# this is almost BW (artificially), so no chance here
# img = load('dark-skin-model-melanin-goddess-khoudia-diop-14.jpg')
img = load('dark-skin-model-melanin-goddess-khoudia-diop-15.jpg')
img = brighter(img, 80)
img = martianize(img)
# img = smurfify(img)
plt.imshow(img);
img = load('mikio.jpg')
img = brighter(img, 50)
img = martianize(img)
# img = smurfify(img)
plt.imshow(img);
img = load('kd.jpg')
img = brighter(img, -100)
img = martianize(img)
# img = smurfify(img)
plt.imshow(img);
###Output
_____no_output_____
###Markdown
Just Eyes
###Code
img = load('eyes-1.jpg')
img = brighter(img, 50)
img = martianize(img)
# img = smurfify(img)
plt.imshow(img);
img = load('eyes-2.jpg')
img = brighter(img, -70)
img = martianize(img)
# img = smurfify(img)
plt.imshow(img);
img = load('eyes-3.jpg')
img = brighter(img, -100)
img = martianize(img)
# img = smurfify(img)
plt.imshow(img);
###Output
_____no_output_____ |
downloaded_kernels/university_rankings/kernel_177.ipynb | ###Markdown
**Let's work through some small exercises so that the homework will be easier to do later.** 1) Find the 'World University Rankings' dataset on Kaggle2) Create a new kernel (notebook)3) Read the data from the file 'cwurData.csv'4) Display the data as a table 5) Display the rows of the table that concern Estonian universities 6) Display the average education-quality indicator grouped by country 7) Order the resulting data by the average education-quality indicator in descending orderHints: Put the data from the previous exercise into a new DataFrame and sort the new DataFrame. A hedged solution sketch follows in the code cell below.
###Code
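# A hedged solution sketch for the exercises above. The file path and the
# column names 'country' and 'quality_of_education' are assumptions based
# on the public 'World University Rankings' dataset description.
import pandas as pd
df = pd.read_csv("../input/cwurData.csv")  # assumed path on Kaggle
df.head()
# Rows about Estonian universities
df[df["country"] == "Estonia"]
# Average education-quality indicator grouped by country
mean_quality = df.groupby("country")["quality_of_education"].mean()
# Put the result in a new DataFrame and sort it in descending order
mean_quality_df = pd.DataFrame(mean_quality)
mean_quality_df.sort_values("quality_of_education", ascending=False)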
###Output
_____no_output_____ |
notebooks/__work_in_progress/models_inprogress/1_12_Internally_Heated_Convection.ipynb | ###Markdown
Thermal Convection Regimes for an Internally Heated MantleThis is an example of two modes of mantle convection: stagnant lid and isoviscous, both driven by internal heating.**Relevant reading:**- *Stagnant Lid Convection on Venus*, Solomatov and Moresi 1996 (http://onlinelibrary.wiley.com/doi/10.1029/95JE03361/full)- *Geodynamics, Turcotte and Schubert*, 6-21 in 2nd edition - *Mantle Convection in the Earth and Planets*, Schubert, Turcotte and Olson, Chapter 7
###Code
import underworld as uw
import numpy
import glucifer
import matplotlib.pyplot as plt
from IPython import display
import os
rank = uw.rank()
size = uw.nProcs()
uw.matplotlib_inline()
plt.ion()
###Output
_____no_output_____
###Markdown
Do we want to read a previous temperature field in? The answer is usually yes
###Code
readTemperature = True
###Output
_____no_output_____
###Markdown
Do we want to save figures? If set to true, stagnant lid and isoviscous figures are saved separately in the current directory. Otherwise, you can look at the plots in this notebook.
###Code
writefigures = False
###Output
_____no_output_____
###Markdown
In Underworld, the heat-source term is entered as $H_c = \frac{H}{c_p}$, where $H$ is the heat production in $W\ kg^{-1}$ and $c_p$ is the heat capacity in $J\ kg^{-1}\ K^{-1}$. $H_c$ subsequently has units of $K\ s^{-1}$.The Rayleigh number for internally heated convection can be written as:$Ra = \frac{\rho_0 g \alpha H_c L^5}{\kappa^2 \eta}$Temperature can be scaled as:$T' = T \frac{\kappa}{H_c L^2}$----- Set the Rayleigh number
###Code
Ra = 1e6
# Choose Ra by varying alpha and setting other parameters to 1
H = 1.
diffusivity = 1.
rho0 = 1.
eta0 = 1.
alpha = Ra
###Output
_____no_output_____
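###Markdown
As a quick sanity check, and assuming $g = L = 1$ consistent with the non-dimensional setup above, we can recompute $Ra$ from its definition using the parameters just set:
###Code
# Ra = rho0 * g * alpha * H_c * L^5 / (kappa^2 * eta), with g = L = 1 here
Ra_check = rho0 * alpha * H / (diffusivity**2 * eta0)
if rank == 0:
    print("Ra check: %.2e" % Ra_check)
###Output
_____no_output_____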
###Markdown
The temperature difference between the surface and the interior of the convective cell ($\Delta T$), for internally heated convection, is well approximated as $\Delta T = \beta H Ra^{-\frac{1}{4}}$, where $\beta$ is found empirically (6-346, Geodynamics, Turcotte and Schubert 2007). This is used for benchmarking and for comparison.
###Code
predTemp = 2.45 * H * Ra**-0.25
if rank == 0:
print("For Ra = %.2e, the temperature jump is %.2e" %(Ra,predTemp))
###Output
For Ra = 1.00e+06, the temperature jump is 7.75e-02
###Markdown
**As an example,** if we assume that $\kappa = 10^{-6}\ m^2s^{-1}$, $H = 9\times10^{-12}\ W\ kg^{-1}$, $C_p = 1200\ J\ kg^{-1}\ K^{-1}$ (matching the values used in the cell below):
###Code
if rank == 0:
    # dimensionalise: T = T' * H / (kappa * c_p) * L^2, with L = 2900 km
    print("dimensional interior temperature: %.2f K" %(predTemp / 1e-6 * 9e-12 / 1200. * (2900e3)**2.))
###Output
dimensional interior temperature: 4886.79 K
###Markdown
How does this compare to the expected value of $\sim 1700 K$ for the Earth? This might give you an idea of the Earth's effective $Ra$ (though the thickness of the boundary layer is also important) and the degree to which an internally heated, isoviscous convection model approximates the Earth's heat loss. ---**We'll decide on the style of convection here**Setting the entire mantle to be isoviscous (```isoviscous = True```) results in a surface which can be recycled into the mantle interior. This approximates 'mobile lid convection'. Once surface heat loss and internal heat generation are in equilibrium, we would expect the internal temperature to be well approximated by the calculation of $\Delta T$ above.Setting the mantle viscosity to be exponentially temperature dependent (```isoviscous = False```) results in 'stagnant lid' convection. In this mode, the surface and a thick 'lid' are stationary, while convection occurs below. Heat loss is much less efficient (relatively lower Nusselt number) and so the interior temperature should be relatively higher for the same heat generation.
###Code
isoviscous = False
###Output
_____no_output_____
###Markdown
----*Set key parameters*
###Code
elementType = "Q1/dQ0"
resX = 64
resY = 64
mesh = uw.mesh.FeMesh_Cartesian( elementType = (elementType),
elementRes = (resX, resY),
minCoord = (0., 0.),
maxCoord = (1., 1.))
temperatureField = mesh.add_variable( nodeDofCount=1 )
temperatureDotField = mesh.add_variable( nodeDofCount=1 )
pressureField = mesh.subMesh.add_variable( nodeDofCount=1 )
velocityField = mesh.add_variable( nodeDofCount=2 )
HField = mesh.add_variable( nodeDofCount=1 )
HField.data[:] = H
IWalls = mesh.specialSets["MinI_VertexSet"] + mesh.specialSets["MaxI_VertexSet"]
JWalls = mesh.specialSets["MinJ_VertexSet"] + mesh.specialSets["MaxJ_VertexSet"]
BottomWall = mesh.specialSets["MinJ_VertexSet"]
TopWall = mesh.specialSets["MaxJ_VertexSet"]
LeftWall = mesh.specialSets["MinI_VertexSet"]
RightWall = mesh.specialSets["MaxI_VertexSet"]
freeslipBC = uw.conditions.DirichletCondition( variable=velocityField,
indexSetsPerDof=(IWalls,JWalls) )
# Top wall is set to constant temperature, the others are insulating
tempBC = uw.conditions.DirichletCondition( variable=temperatureField,
indexSetsPerDof=(TopWall) )
# Un-comment if you want to be really sure that the walls are insulating, though this seems to happen by default.
# neumannBC = uw.conditions.NeumannCondition( fn_flux=0., variable=temperatureField, indexSetsPerDof=IWalls+BottomWall)
mSwarm = uw.swarm.Swarm( mesh=mesh)
nParticles = 12
layout = uw.swarm.layouts.GlobalSpaceFillerLayout( swarm=mSwarm, particlesPerCell=nParticles )
mSwarm.populate_using_layout( layout=layout )
tracerSwarm = uw.swarm.Swarm (mesh=mesh)
tracerSwarm.add_particles_with_coordinates(numpy.array([(0.3,0.5)]))
tracerTrackSwarm = uw.swarm.Swarm (mesh=mesh)
advDiff = uw.systems.AdvectionDiffusion( temperatureField, temperatureDotField,
velocityField,
fn_diffusivity=diffusivity, fn_sourceTerm=HField, conditions=[tempBC])#,neumannBC])
advector = uw.systems.SwarmAdvector( swarm=mSwarm, velocityField=velocityField, order=2 )
traceradvector = uw.systems.SwarmAdvector( swarm=tracerSwarm, velocityField=velocityField, order=2 )
if isoviscous:
mname = "isovisc"
else:
mname = "stag"
if readTemperature:
temperatureField.load("input/1_12_Internally_Heated_Convection/temperature_%s.h5" %mname, interpolate=True)
else:
CoordFn = uw.function.input()
surfGradFn = 10.*(1. - (1.+0.0*uw.function.math.cos(CoordFn[0] * 3.14))* CoordFn[1])
maxTemp = 0.1
initialFn = uw.function.branching.conditional( [(surfGradFn < 0.,0.),( surfGradFn < maxTemp , surfGradFn),(True,maxTemp)])
temperatureField.data[:] = initialFn.evaluate(mesh)
refTemp = uw.function.misc.constant(1.)
maxTemp = numpy.max(temperatureField.data[:,0])
refTemp.value = maxTemp
mIVar = mSwarm.add_variable( dataType="int", count=1)
mIVar.data[:] = 0
MrhoFn = rho0 * (1. - alpha* ( temperatureField ))
dicDensity = {0:MrhoFn}
densityFn = uw.function.branching.map( fn_key = mIVar,
mapping = dicDensity)
figMaterial = glucifer.Figure( figsize=(800,400), title="Initial Temperature Field" )
figMaterial.append( glucifer.objects.Surface(mesh,temperatureField ))
# figMaterial.show()
###Output
_____no_output_____
###Markdown
Set the maximum viscosity contrast between material at the maximum and minimum temperatures, which by default is five orders of magnitude.
###Code
surfEta = 1e5
cEta = numpy.log(surfEta) / refTemp
if isoviscous:
ArrFunction = 1.
else:
#Frank-Kamenetskii Temperature-Dependent Rheology
ArrFunction = uw.function.math.exp(cEta *(refTemp-temperatureField))
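# Note (added sketch comment): with this law the viscosity at the surface
# (T = 0) evaluates to exp(cEta * refTemp) = surfEta, i.e. a 1e5 contrast
# across the model's temperature range, matching the setting above.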
dicViscosity = {0:ArrFunction}
viscosityMapFn = uw.function.branching.map( fn_key = mIVar,
mapping = dicViscosity)
figMaterial = glucifer.Figure( figsize=(800,400), title="Initial Viscosity Distribution (Log)" )
figMaterial.append( glucifer.objects.Points(mSwarm,fn_colour = uw.function.math.log10(viscosityMapFn )))
if size == 1:
figMaterial.show()
figMaterial = glucifer.Figure( figsize=(800,400), title="Initial Density Distribution" )
figMaterial.append( glucifer.objects.Points(mSwarm,fn_colour = densityFn ))
if size == 1:
figMaterial.show()
stokesPIC = uw.systems.Stokes(velocityField=velocityField,
pressureField=pressureField,
conditions=[freeslipBC,],
fn_viscosity=1.,
fn_bodyforce=densityFn*(0.,-1.))
solver=uw.systems.Solver(stokesPIC)
solver.solve()
stokesPIC = uw.systems.Stokes(velocityField=velocityField,
pressureField=pressureField,
conditions=[freeslipBC,],
fn_viscosity=viscosityMapFn,
fn_bodyforce=densityFn*(0.,-1.))
solver=uw.systems.Solver(stokesPIC)
surfaceHF = uw.utils.Integral( fn = temperatureField.fn_gradient[1], mesh = mesh, integrationType = "surface", surfaceIndexSet = TopWall)
bottomHF = uw.utils.Integral( fn = temperatureField.fn_gradient[1], mesh = mesh, integrationType = "surface", surfaceIndexSet = BottomWall)
leftHF = uw.utils.Integral( fn = temperatureField.fn_gradient[0], mesh = mesh, integrationType = "surface", surfaceIndexSet = LeftWall)
rightHF = uw.utils.Integral( fn = temperatureField.fn_gradient[0], mesh = mesh, integrationType = "surface", surfaceIndexSet = RightWall)
step = 0
maxsteps = 3
yielding = False
if rank == 0:
arrMeanTemp = numpy.zeros(maxsteps+1)
arrSurfHF = numpy.zeros(maxsteps+1)
arrOtherWallsHF = numpy.zeros(maxsteps+1)
for step in range(maxsteps+1):
solver.solve(nonLinearIterate=yielding)
cFactor = 0.5
dt = numpy.min([cFactor * advDiff.get_max_dt(),cFactor * advector.get_max_dt()])
advDiff.integrate(dt)
avTemperature = mesh.integrate(temperatureField)[0]
if rank ==0:
arrMeanTemp[step] = avTemperature
traceradvector.integrate(dt)
tracerTrackSwarm.add_particles_with_coordinates(tracerSwarm.particleCoordinates.data[:])
surfHF = -1. * surfaceHF.evaluate()[0]
wallsHF = abs(bottomHF.evaluate()[0])+abs(leftHF.evaluate()[0])+abs(rightHF.evaluate()[0])
if rank == 0:
arrSurfHF[step] = surfHF
arrOtherWallsHF[step] = wallsHF
refTemp.value = numpy.max(temperatureField.data[:])
if rank == 0:
plt.plot(range(maxsteps),arrMeanTemp[:step])
plt.scatter(range(maxsteps),arrMeanTemp[:step])
plt.xlabel("Time Step")
plt.ylabel("Average Temperature")
if writefigures:
plt.savefig("output/%s_temperaturetime.pdf" %mname, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
**Now we can look at our model output**If the average temperature plotted above is still varying considerably, the system is not in a thermal steady-state and you should run more time-steps.
###Code
#Write to h5 file
savedmesh = mesh.save("output/mesh_%s.h5" %mname)
temperatureField.save("output/temperature_%s.h5" %mname, meshHandle=savedmesh)
###Output
_____no_output_____
###Markdown
We'll plot the heat-loss over time. If our system is in steady-state, the heat-loss should be equivalent to the heat-generation. We have set all walls other than the top to be insulating, so there should be negligible heat loss through these surfaces.
###Code
if rank == 0:
plt.clf()
plt.plot(range(step),arrSurfHF[:step]/H,"--",label="Top Wall Heat Loss")
plt.plot(range(step),arrOtherWallsHF[:step],"--",label="Other Walls Heat Loss")
plt.plot(range(maxsteps),numpy.ones(maxsteps),label="Heat Generation")
plt.ylim([0,max([1.1,max(arrSurfHF[:step]/H)])])
plt.title("Heat Generation and Loss Through Time")
plt.xlabel("Time Step")
plt.ylabel("Integrated Heat Loss or Gradient")
plt.legend(loc='best')
if writefigures:
plt.savefig("output/%s_heatlossandgeneration.pdf" %mname)
###Output
_____no_output_____
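###Markdown
At steady state, the integrated surface heat loss should match the total heat generation (1.0 in these units). A quick numeric check, added here using the integrals defined above:
###Code
# Compare the current integrated surface heat loss with the heat generation
shf = -1. * surfaceHF.evaluate()[0]
if rank == 0:
    print("surface heat loss: %.3f, heat generation: %.3f" % (shf, H))
###Output
_____no_output_____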
###Markdown
Plot the temperature and velocity vector fields and note the contrasts between isoviscous and stagnant lid convection.Stagnant lid convection should be characterised by a thick thermal boundary layer, with negligible flow near the surface. Isoviscous convection should have a thin boundary layer and significant surface flow.
###Code
figMaterial = glucifer.Figure( figsize=(800,400), title="Temperature Field" )
figMaterial.append( glucifer.objects.Surface(mesh,temperatureField ))
figMaterial.append( glucifer.objects.VectorArrows(mesh,1e3/Ra*velocityField))
figMaterial.append( glucifer.objects.Points(swarm=tracerTrackSwarm,colourBar=False,fn_size=5,colours="purple"))
if size == 1:
figMaterial.show()
if writefigures:
figMaterial.save_image("output/%s_TemperatureField" %mname)
###Output
_____no_output_____
###Markdown
Let's have a closer look at the surface velocity
###Code
n = 100
topWallX = numpy.linspace(mesh.minCoord[0],mesh.maxCoord[0],n)
topWallVelocity = numpy.zeros(n)
for i in range(n):
topWallVelocity[i] = velocityField[0].evaluate_global((topWallX[i],mesh.maxCoord[1]))
if rank == 0:
plt.clf()
plt.plot(topWallX,topWallVelocity)
plt.title("Surface Velocity")
plt.ylabel("Horizontal Velocity")
plt.xlabel("Distance")
if writefigures:
plt.savefig("output/%s_surface_velocity.pdf" %mname)
###Output
_____no_output_____
###Markdown
Calculate the vertical temperature gradient at the surface of the model, to see where the highest heat-loss is. We're running these models to steady-state, so the integrated surface heat-loss should not depend on the convective regime.
###Code
n = 100
topWallX = numpy.linspace(mesh.minCoord[0],mesh.maxCoord[0],n)
topWalldTdZ = numpy.zeros(n)
for i in range(n):
topWalldTdZ[i] = temperatureField.fn_gradient[1].evaluate_global((topWallX[i],mesh.maxCoord[1]))
if rank == 0:
plt.clf()
plt.plot(topWallX,abs(topWalldTdZ))
plt.title("Top Wall Temperature Gradients")
plt.ylabel("Temperature gradient")
plt.xlabel("Distance")
if writefigures:
plt.savefig("output/%s_wall_gradients.pdf" %mname)
###Output
_____no_output_____
###Markdown
Let's calculate geotherms for three vertical cross-sections and compare to the predicted internal temperature for isoviscous convection.Because stagnant lid has a thick boundary layer, its heat-loss is significantly less efficient than for isoviscous convection. Running to steady-state should result in extremely high internal temperatures, which you can compare to the predicted isoviscous convection temperature (dashed).
###Code
if rank == 0:
plt.clf()
n = 100
arrY = numpy.linspace(0,1,n)
for x in [0.,0.5,1.]:
arrT = numpy.zeros(n)
for i in range(n):
arrT[i] = temperatureField.evaluate_global((x,arrY[i]))
if rank == 0 :
plt.plot(arrT,arrY,label="Temperature at x=%.1f" %x)
if rank ==0:
plt.title("Geotherm")
plt.plot(numpy.ones(n)*predTemp, numpy.linspace(0,1,n),"--",c="black",label="Predicted " + r"$\Delta T$")
plt.ylabel("Temperature")
plt.xlabel("Depth")
plt.legend(loc='best')
if writefigures:
plt.savefig("output/%s_geotherm.pdf" %mname)
###Output
_____no_output_____ |
notebook/Neptune_Ontology_Example.ipynb | ###Markdown
Neptune Ontology ExampleThis notebook shows the use of a semantic ontology in Neptune. We use the organizational ontology (https://www.w3.org/TR/vocab-org/) defined using OWL. For more context, read the AWS blog post (TODO-link).Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.SPDX-License-Identifier: MIT-0 Loading the Ontology and Examples into NeptuneFirst, load the organizational ontology into Neptune. The ontology is written as a set of RDF triples in Turtle form. Load it using Neptune's loader; modify the -s argument if the S3 bucket name does not match yours. You will be prompted with a submit form. Click Submit to run the loader, and check that it completes successfully.
###Code
%load -s s3://__S3_BUCKET__/org.ttl -f turtle --named-graph-uri=http://www.w3.org/ns/org
###Output
_____no_output_____
###Markdown
Next load the sample data set, which depicts a fictional organization and member structure. Load using the same approach as above. Check the S3 bucket and modify if necessary.
###Code
%load -s s3://__S3_BUCKET__/example_org.ttl -f turtle --named-graph-uri=http://amazonaws.com/db/neptune/examples/ontology/org
###Output
_____no_output_____
###Markdown
Finally load a contrived ontology meant to test edge cases not covered by the org ontology. Modify S3 bucket if necessary.
###Code
%load -s s3://__S3_BUCKET__/tester_ontology.ttl -f turtle --named-graph-uri=http://amazonaws.com/db/neptune/examples/ontology/tester
###Output
_____no_output_____
###Markdown
Querying Org OntologyLet's query the organizational ontology to discover classes and properties. Let's first get a high-level picture of the classes. The first query finds OWL classes as well as keys, equivalent classes and subclasses. Among the classes shown in the results are expected ones like http://www.w3.org/ns/orgOrganization and http://www.w3.org/ns/orgRole. But we also see peculiar classes that are blank nodes, which begin with the letter b. We will make sense of these later in the notebook when we build a model.
###Code
%%sparql
# You will notice some of the classes or related classes are blank nodes.
# We need to drill down and see that they include.
# Not here, though.
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
select ?class
(GROUP_CONCAT(distinct ?subOf;SEPARATOR=",") AS ?subsOf)
(GROUP_CONCAT(distinct ?equiv;SEPARATOR=",") AS ?equivs)
(GROUP_CONCAT(distinct ?key;SEPARATOR=",") AS ?keys) where {
?class rdf:type owl:Class .
OPTIONAL { ?class rdfs:subClassOf ?subOf . } .
OPTIONAL { ?class owl:equivalentClass ?equiv . } .
OPTIONAL { ?class owl:hasKey ?keylist . ?keylist rdf:rest*/rdf:first ?key . } .
} group by ?class
order by ?class
###Output
_____no_output_____
###Markdown
Now let's connect properties to classes. We list properties whose domain is one of the classes from the results above. For each we also get the range and the property type. The results mostly make sense, but we continue to see blank nodes. For example, the class associated with the http://www.w3.org/ns/org#role property is blank. We make sense of this later in the notebook.
###Code
%%sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
select ?class ?prop ?range
(GROUP_CONCAT(distinct ?propType;SEPARATOR=",") AS ?propTypes) where {
?class rdf:type owl:Class .
?prop rdfs:domain ?class .
?prop rdf:type ?propType .
OPTIONAL {?prop rdfs:range ?range } .
}
group by ?class ?prop ?range
order by ?class ?prop
###Output
_____no_output_____
###Markdown
Querying Example DataNow let's query the example organization to discover orgs, suborgs, employees and roles. First, we list organizations, suborganizations, and organizational units, as well as the sites of the organizations.
###Code
%%sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX org: <http://www.w3.org/ns/org#>
select ?orgName ?subName ?unitName ?siteName where {
?org rdf:type org:Organization .
?org rdfs:label ?orgName .
OPTIONAL { ?org org:hasSubOrganization/rdfs:label ?subName } .
OPTIONAL { ?org org:hasUnit/rdfs:label ?unitName . } .
OPTIONAL { ?org org:hasSite/rdfs:label ?siteName . }
} order by ?orgName
###Output
_____no_output_____
###Markdown
Let's also check organizational history. Run the next query to see a change event.
###Code
%%sparql
PREFIX org: <http://www.w3.org/ns/org#>
select ?event ?prop ?obj where {
?event rdf:type org:ChangeEvent .
?event ?prop ?obj .
} order by ?event ?prop
###Output
_____no_output_____
###Markdown
Now let's list some of the people in these organizations. Notice in the query results the org:memberOf and org:basedAt relationships, which tie the person to an organization and a site.
###Code
%%sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
select ?person ?prop ?obj where {
?person rdf:type foaf:Person .
?person ?prop ?obj .
} order by ?person ?prop
###Output
_____no_output_____
###Markdown
Let's run a path query to see the hierarchical structure of Org-MegaFinancial.
###Code
%%sparql
PREFIX org: <http://www.w3.org/ns/org#>
PREFIX ex: <http://amazonaws.com/db/neptune/examples/ontology/org/>
select ?personName ?boss (GROUP_CONCAT(?superiorName;SEPARATOR=",") AS ?superiors) where {
?person org:memberOf ex:Org-MegaFinancial .
?person rdfs:label ?personName .
OPTIONAL {
?person org:reportsTo/rdfs:label ?boss .
?person org:reportsTo+ ?superior .
?superior rdfs:label ?superiorName .
} .
} group by ?personName ?boss
###Output
_____no_output_____
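###Markdown
A related check (a small variant query, not part of the original walkthrough) finds members with no org:reportsTo superior, i.e. the top of the reporting hierarchy:
###Code
%%sparql
PREFIX org: <http://www.w3.org/ns/org#>
PREFIX ex: <http://amazonaws.com/db/neptune/examples/ontology/org/>
select ?personName where {
    ?person org:memberOf ex:Org-MegaFinancial .
    ?person rdfs:label ?personName .
    FILTER NOT EXISTS { ?person org:reportsTo ?superior . }
}
###Output
_____no_output_____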
###Markdown
Finally, let's see roles and posts in the MegaSystems organization. Run the next two queries.
###Code
%%sparql
PREFIX org: <http://www.w3.org/ns/org#>
PREFIX ex: <http://amazonaws.com/db/neptune/examples/ontology/org/>
select ?post ?postHolder where {
?post rdf:type org:Post .
?post org:postIn ex:Org-MegaSystems .
OPTIONAL {
?postHolder org:holds ?post .
}
}
%%sparql
PREFIX org: <http://www.w3.org/ns/org#>
PREFIX ex: <http://amazonaws.com/db/neptune/examples/ontology/org/>
select ?role ?roleHolder where {
?role rdf:type org:Role .
?membership rdf:type org:Membership .
?membership org:role ?role .
?membership org:organization ex:Org-MegaSystems .
?membership org:member ?roleHolder
}
###Output
_____no_output_____
###Markdown
Enforcing the Ontology!Now let's bring things together. We need to understand the purpose of those blank nodes above! We also need to check whether our sample data matches the structure expected by the ontology. Finally, let's make use of that structure to insert new members and orgs, guided by a boilerplate structure. Build the ModelThe first step is to gather a bit more information from the ontology. We need to "fill in the blanks!". Run the next cell to obtain a complete picture of the ontology. The code that follows runs several queries and brings them together into an opinionated interface, or model, of classes and expected properties.
###Code
from IPython.utils import io
# check if uri is bnode or not
def is_bnode(uri):
return uri.startswith("b")
# check if list contains the val
def list_has_value(list, val):
try:
list.index(val)
return True
except ValueError:
return False
# run sparql magic on the specified query. return the results
def run_query(q):
with io.capture_output() as captured:
ipython = get_ipython()
mgc = ipython.run_cell_magic
mgc(magic_name = "sparql", line = "--store-to query_res", cell=q)
return query_res["results"]["bindings"]
# build our model
def build_model():
# Out of scope OWL stuff for this example:
    # AllDisjointClasses, disjointUnionOf
# assertions - same/diff ind, obj/data prop assertion, neg obj/data prop assertion
# annotations
# top/bottom property
# restriction onProperties;
# but restriction onProperty IS supported
# cardinality
# but will consider FunctionalProperty
# Datatype and data ranges
# Limitation: for datatype properties, consider only strings.
CLASS_QUERY = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
select ?class
(GROUP_CONCAT(distinct ?subOf;SEPARATOR=",") AS ?subsOf)
(GROUP_CONCAT(distinct ?equiv;SEPARATOR=",") AS ?equivs)
(GROUP_CONCAT(distinct ?complement;SEPARATOR=",") AS ?complements)
(GROUP_CONCAT(distinct ?keyList;SEPARATOR=",") AS ?keys)
(GROUP_CONCAT(distinct ?kentry;SEPARATOR=",") AS ?keyEntries)
(GROUP_CONCAT(distinct ?uList;SEPARATOR=",") AS ?unions)
(GROUP_CONCAT(distinct ?iList;SEPARATOR=",") AS ?intersections)
(GROUP_CONCAT(distinct ?ientry;SEPARATOR=",") AS ?intersectionEntries)
(GROUP_CONCAT(distinct ?oneList;SEPARATOR=",") AS ?oneOfs)
(GROUP_CONCAT(distinct ?disj;SEPARATOR=",") AS ?disjoints)
where {
?class rdf:type owl:Class .
OPTIONAL { ?class rdfs:subClassOf+ ?subOf . } .
OPTIONAL { ?class owl:equivalentClass+ ?equiv . } .
OPTIONAL { ?class owl:complementOf ?complement . } .
OPTIONAL { ?class owl:hasKey ?keyList . } .
OPTIONAL { ?class owl:hasKey ?kl . ?kl rdf:rest*/rdf:first ?kentry . } .
OPTIONAL { ?class owl:unionOf ?uList . } .
OPTIONAL { ?class owl:intersectionOf ?iList . } .
OPTIONAL { ?class owl:intersectionOf ?il . ?il rdf:rest*/rdf:first ?ientry . } .
OPTIONAL { ?class owl:oneOf ?oneList . } .
OPTIONAL { ?class owl:disjointWith ?disj . } .
} group by ?class
"""
PROP_QUERY = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
select ?prop
(GROUP_CONCAT(distinct ?subPropOf;SEPARATOR=",") AS ?subsOf)
(GROUP_CONCAT(distinct ?equiv;SEPARATOR=",") AS ?equivs)
(GROUP_CONCAT(distinct ?domain;SEPARATOR=",") AS ?domains)
(GROUP_CONCAT(distinct ?du;SEPARATOR=",") AS ?domainUs)
(GROUP_CONCAT(distinct ?range;SEPARATOR=",") AS ?ranges)
(GROUP_CONCAT(distinct ?ru;SEPARATOR=",") AS ?rangeUs)
(GROUP_CONCAT(distinct ?disj;SEPARATOR=",") AS ?disjoints)
(GROUP_CONCAT(distinct ?inv;SEPARATOR=",") AS ?inverses)
(GROUP_CONCAT(distinct ?type;SEPARATOR=",") AS ?types)
where {
{ ?prop rdf:type rdf:Property . }
UNION
{ ?prop rdf:type owl:ObjectProperty . }
UNION
{ ?prop rdf:type owl:DatatypeProperty . } .
OPTIONAL { ?prop rdfs:subPropertyOf+ ?subPropOf . } .
OPTIONAL { ?prop rdfs:equivalentProperty+ ?equiv . } .
OPTIONAL { ?prop rdfs:domain ?domain } .
OPTIONAL { ?prop rdfs:domain/owl:unionOf ?u . ?u rdf:rest*/rdf:first ?du . } .
OPTIONAL { ?prop rdfs:range ?range } .
OPTIONAL { ?prop rdfs:range/owl:unionOf ?u1 . ?u1 rdf:rest*/rdf:first ?ru . } .
OPTIONAL { ?prop owl:propertyDisjointWith ?disj . } .
OPTIONAL { { ?prop owl:inverseOf ?inv } UNION { ?inv owl:inverseOf ?prop } } .
?prop rdf:type ?type . # allows us to check functional, transitive, etc
}
group by ?prop
"""
RESTRICTION_QUERY ="""
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
select ?restriction ?prop
(GROUP_CONCAT(distinct ?allClass;SEPARATOR=",") AS ?allFromClasses)
(GROUP_CONCAT(distinct ?someClass;SEPARATOR=",") AS ?someFromClasses)
(GROUP_CONCAT(distinct ?lval;SEPARATOR=",") AS ?lvals)
(GROUP_CONCAT(distinct ?ival;SEPARATOR=",") AS ?ivals)
where {
?restriction rdf:type owl:Restriction .
?restriction owl:onProperty ?prop .
OPTIONAL { ?restriction owl:allValuesFrom ?allClass . } .
OPTIONAL { ?restriction owl:someValuesFrom ?someClass . } .
OPTIONAL { ?restriction owl:hasValue ?lval . FILTER(isLiteral(?lval)) . } .
OPTIONAL { ?restriction owl:hasValue ?ival . FILTER(!isLiteral(?ival)) . } .
} group by ?restriction ?prop
"""
LIST_QUERY = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
select ?list (GROUP_CONCAT(distinct ?entity;SEPARATOR=",") AS ?entities) where {
    ?subject owl:unionOf|owl:intersectionOf|owl:oneOf|owl:onProperties|owl:members|owl:disjointUnionOf|owl:propertyChainAxiom|owl:hasKey ?list .
OPTIONAL {?list rdf:rest*/rdf:first ?entity . } .
} group by ?list
"""
# sub-function to run a sparql query and transform it
# the transform works like this
# sparql result: [ { "col1": { value "a"}, "col2": { value: "b,c"}, "col3 : { value: "d"}"}]
# transformed: [ "a": { "col2": ["b", "c"], "col3", "d"}]
# Here "col1" is the key, so the "a" becomes the key
# "b,c" is comma-sep value and is transformed to list ["b", "c"]
# "col3" is a single, so it its val is "d" rather than ["d"]
def run_model_query(q, key, singles):
res = run_query(q)
result_dict = {}
for rec in res:
this_rec = {"visited": False, "visitedForProps": False, "discoveredProps": [], "restrictedProps": []}
for rec_key in rec:
val = str(rec[rec_key]["value"])
if rec_key == key:
this_rec[rec_key] = val
result_dict[val] = this_rec
elif list_has_value(singles, rec_key) :
this_rec[rec_key] = val
elif val == "":
this_rec[rec_key] = []
else:
toks = val.split(",")
this_rec[rec_key] = toks
return result_dict
# run the queries
class_res = run_model_query(CLASS_QUERY, "class", [])
prop_res = run_model_query(PROP_QUERY, "prop", [])
restriction_res = run_model_query(RESTRICTION_QUERY, "restriction", ["prop"])
list_res = run_model_query(LIST_QUERY, "list", [])
classes = list(class_res.keys())
props = list(prop_res.keys())
restrictions = list(restriction_res.keys())
lists = list(list_res.keys())
#
# Walk functions. If a class/prop refers to a bnode, let's drill down and see what that bnode is.
# Walk the bnode too, and capture its structure in the parent class/prop.
# Example, suppose a class has a subClassOf b, where b is a bnode.
# What is that bnode? It might be a class that is a restriction on a property.
# That's useful to know, so we capture that expanded view in the parent class.
#
def make_walked_node(b, v):
return {"bnode": b, "obj": v}
def expand_list(rec, keys):
for list_type in keys:
new_list = []
for entry in rec[list_type]:
if is_bnode(entry):
new_list.append(make_walked_node(entry, walk(entry)))
else:
new_list.append(entry)
rec[list_type+"_expand"] = new_list
def walk(entry):
if list_has_value(classes, entry):
return walk_class(entry)
elif list_has_value(restrictions, entry):
return walk_restriction(entry)
elif list_has_value(props, entry):
return walk_prop(entry)
elif list_has_value(lists, entry):
return walk_list(entry)
else:
return entry
def walk_list(l):
#print("visit list " + l)
if list_has_value(lists, l):
rec = list_res[l]
if rec["visited"]:
return rec
else:
new_list = []
expand_list(rec, ["entities"])
rec["visited"] = True
return rec
else:
return l
def walk_class(clazz):
#print("visit class " + clazz)
if list_has_value(classes, clazz):
rec = class_res[clazz]
if rec["visited"]:
return rec
else:
expand_list(rec, ["keys", "subsOf", "equivs", "complements", "disjoints", "unions", "intersections"])
rec["visited"] = True
return rec
else:
return clazz
def walk_prop(prop):
#print("visit prop " + prop)
if list_has_value(props, prop):
rec = prop_res[prop]
if rec["visited"]:
return rec
else:
expand_list(rec, ["subsOf", "equivs", "inverses", "disjoints", "domains", "ranges"])
rec["functional"] = list_has_value(rec["types"], "http://www.w3.org/2002/07/owl#FunctionalProperty")
if list_has_value(rec["types"], "http://www.w3.org/2002/07/owl#ObjectProperty"):
rec["propType"] = "ObjectProperty"
elif list_has_value(rec["types"], "http://www.w3.org/2002/07/owl#DatatypeProperty"):
rec["propType"] = "DatatypeProperty"
elif list_has_value(rec["types"], "http://www.w3.org/1999/02/22-rdf-syntax-ns#Property"):
rec["propType"] = "Property"
rec["visited"] = True
return rec
else:
return clazz
def walk_restriction(restriction) :
#print("visit restriction " + restriction)
if list_has_value(restrictions, restriction):
rec = restriction_res[restriction]
if rec["visited"]:
return rec
else:
if is_bnode(rec["prop"]):
rec["prop"] = make_walked_node(rec["prop"], walk(rec["prop"]))
expand_list(rec, ["allFromClasses", "someFromClasses"])
rec["visited"] = True
return rec
else:
return restriction
# walk the properties and classes, bringing in dependencies like lists, restrictions, and related classes
for entry in prop_res:
walk_prop(entry)
for entry in class_res:
walk_class(entry)
# for the given prop, if it belongs to expected_clazz, return the prop plus super-props
def get_props(prop, expected_clazz):
if list_has_value(props, prop):
candidate = False
if expected_clazz == None:
candidate = True
else:
# class is domain
for dom in prop_res[prop]["domains"]:
if dom == expected_clazz:
candidate = True
break
# domain is union and includes class
for dom in prop_res[prop]["domainUs"]:
if dom == expected_clazz:
candidate = True
break
if candidate:
# return this prop and props of which the prop is subsOf
return list(set([prop] + prop_res[prop]["subsOf"]))
else:
return []
else:
return []
# recursively walk the class, looking for properties.
def walk_class_for_props(clazz):
#print("visit " + clazz)
# am i a class or a restriction?
if list_has_value(restrictions, clazz):
if not(restriction_res[clazz]["visitedForProps"]):
#print(" restriction visit " + clazz)
restriction_res[clazz]["visitedForProps"] = True
prop_uri = restriction_res[clazz]["prop"]
restriction_res[clazz]["restrictedProps"] = [{
"prop": prop_uri,
"restriction": clazz,
"all" : restriction_res[clazz]["allFromClasses"],
"some": restriction_res[clazz]["someFromClasses"],
"lvals": restriction_res[clazz]["lvals"],
"ivals": restriction_res[clazz]["ivals"] }]
return restriction_res[clazz]
elif list_has_value(classes, clazz):
if not(class_res[clazz]["visitedForProps"]):
#print(" class visit " + clazz)
# if i'm not a bnode, get all props that apply to me
if not(is_bnode(clazz)):
for prop in props:
class_res[clazz]["discoveredProps"] = list(set(class_res[clazz]["discoveredProps"] + get_props(prop, clazz)))
for list_type in ["subsOf", "intersectionEntries", "equivs"]:
for entry in class_res[clazz][list_type]:
can_use = list_has_value(classes, entry) or list_has_value(restrictions, entry)
if list_type == 'equivs' and is_bnode(entry) == False:
can_use = False
if can_use:
# recurse for subsOf, intersectionEntries, equivs (restrictions only)
recurse_result = walk_class_for_props(entry)
class_res[clazz]["discoveredProps"] = list(set( class_res[clazz]["discoveredProps"] + recurse_result["discoveredProps"]))
class_res[clazz]["restrictedProps"] += recurse_result["restrictedProps"]
class_res[clazz]["visitedForProps"] = True
return class_res[clazz]
else:
print(" VERY BAD visit " + clazz)
return None
# for each class determine the properties by walking
for entry in class_res:
if not(is_bnode(entry)):
walk_class_for_props(entry)
# return the model - the classes and properties discovered
return {
"classes": class_res,
"props": prop_res
}
# Print the model
def print_model_summary(model) :
for clazz in model["classes"]:
if is_bnode(clazz) == False:
print("Class " + clazz)
print("\tkeys " + str(model["classes"][clazz]["keyEntries"]))
print("\n")
for r in model["classes"][clazz]["restrictedProps"]:
print("\tRestriction on prop " + r["prop"])
print("\t\tall " + str(r["all"]))
print("\t\tsome " + str(r["some"]))
print("\t\tliteral values " + str(r["lvals"]))
print("\t\tobject values " + str(r["ivals"]))
for prop in model["classes"][clazz]["discoveredProps"]:
print("\tProp " + prop)
if prop in model["props"]:
prop_def = model["props"][prop]
print("\t\ttype " + prop_def["propType"])
print("\t\tfunctional " + str(prop_def["functional"]))
print("\t\tinverses " + str(prop_def["inverses"]))
print("\t\trange " + str(prop_def["ranges"]))
print("\t\trangeUnionOf " + str(prop_def["rangeUs"]))
model = build_model()
print_model_summary(model)
###Output
_____no_output_____
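###Markdown
To spot-check a single entry (a small convenience, assuming the org ontology was loaded as shown earlier so that org:Organization is present), the model dictionary can be inspected directly:
###Code
# Peek at one class of interest in the assembled model
org_class = model["classes"]["http://www.w3.org/ns/org#Organization"]
print(sorted(org_class["discoveredProps"]))
print(org_class["keyEntries"])
###Output
_____no_output_____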
###Markdown
GenerationFinally, given the interface we determined above, let's generate sample Turtle. This acts as our boilerplate for new data.
###Code
counter = {"current": 0}
# Prefixes for generated Turtle
SAMPLE_HEADER = """
@base <http://amazonaws.com/db/neptune/examples/ontology/gensample/> .
@prefix ex: <http://amazonaws.com/db/neptune/examples/ontology/gensample/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
"""
# generate samples instances for clazz based on model
def generate_sample(model, clazz):
# start building Turtle
gen_result = {"ttl": ""}
# Generate sample URI
def sample_uri(clazz):
#clazz is an IRI. Get the last token, which follows either the last / or a #
counter["current"]+= 1
inst_num = counter["current"]
clazz_name = clazz.split("/")[-1].split("#")[-1]
return clazz_name + "-sample-" + str(inst_num)
inst_name = sample_uri(clazz)
class_def = model["classes"][clazz]
props = model["props"]
keys = class_def["keyEntries"]
discovered_props = class_def["discoveredProps"]
restricted_props = class_def["restrictedProps"]
last_idx = 0
# In Turtle, instance has rdf:type that is clazz
gen_result["ttl"] += f"""
#
# Sample for class {clazz}
#
# Instantiate
ex:{inst_name} rdf:type <{clazz}> .
"""
#
# finder helpers
#
def find_restricted(prop):
for entry in restricted_props:
if entry["prop"] == prop:
# could there be more than one entry with prop;
# not sure how; take the first one
return entry
return None
def find_discovered(prop):
if list_has_value(discovered_props, prop):
if prop in props:
return props[prop]
else:
return None
else:
return None
# Based on the model, generate Turtle properties of instance
def generate_props(prop, inst_name, comment):
r = find_restricted(prop)
d = find_discovered(prop)
if r==None:
if d==None:
# Generic case. We have neither a restriction nor a property def.
# Just assign it a string value
gen_result["ttl"] += f"""
# {comment}
ex:{inst_name} <{prop}> "some value" .
# Don't have property definition on hand. Using sample string value.
"""
else:
# It's not a restriction and we have a property def.
# Turtle uses facts about the prop. If-then for object vs datatype
just_one = d["functional"]
sample_obj_type = None
sample_obj_prefix = None
all_ranges = []
for r in d["ranges"] + d["rangeUs"]:
if is_bnode(r) == False:
if sample_obj_type == None:
sample_obj_type = "<" + r + ">"
sample_obj_prefix = r
all_ranges.append(r)
if sample_obj_type == None:
# no range! use a default
sample_obj_type = "owl:Thing"
sample_obj_prefix = "not/sure/Anything"
extra_comment = "This is functional: only one " if d["functional"] else "Multiple values allowed"
if d["propType"] == "ObjectProperty":
uri = sample_uri(sample_obj_prefix)
gen_result["ttl"] += f"""
# {comment} - {extra_comment}
ex:{inst_name} <{prop}> ex:{uri} .
ex:{uri} rdf:type {sample_obj_type} .
# ... and fill in the details of ex:{uri}
# all ranges {all_ranges}
"""
else:
# will keep it simple with non-objects: everything is just a string
# so no other literal types, no value constaints, etc
gen_result["ttl"] += f"""
# {comment} - {extra_comment}
ex:{inst_name} <{prop}> "sample value" .
# actual ranges {all_ranges}
"""
else:
# It's a restriction
functional = False if d==None else d["functional"]
if len(r["lvals"]) > 0:
gen_result["ttl"] += f"""
# {comment} - restricted on value; value is literal
ex:{inst_name} <{r["prop"]}> "{r["lvals"][0]}" .
# allowed values: {r["lvals"]}
"""
elif len(r["ivals"]) > 0:
gen_result["ttl"] += f"""
# {comment} - restricted on value; value is IRI
ex:{inst_name} <{r["prop"]}> <{r["ivals"][0]}> .
# allowed values: {r["ivals"]}
"""
elif len(r["some"]) > 0:
uri = sample_uri(r["some"][0])
gen_result["ttl"] += f"""
# {comment} - restricted: some values from
ex:{inst_name} <{r["prop"]}> <{uri}> .
<{uri}> rdf:type <{r["some"][0]}> .
# values: {r["some"]}
"""
elif len(r["all"]) > 0:
uri = sample_uri(r["all"][0])
gen_result["ttl"] += f"""
# {comment} - restricted: all values from
ex:{inst_name} <{r["prop"]}> <{uri}> .
<{uri}> rdf:type <{r["all"][0]}> .
# values: {r["all"]}
"""
# In Turtle, need one property for each key
for key in keys:
generate_props(key, inst_name, "Add key")
    # In Turtle, add the restricted props; skip keys, which were handled above
for r in restricted_props:
if list_has_value(keys, r) == False:
generate_props(r["prop"], inst_name, "Add a restriction")
# In Turtle, for all other props (non-keys, non-restrictions), add prop to instance.
for d in discovered_props:
if list_has_value(keys, d) == False and find_restricted(d) == None:
generate_props(d, inst_name, "Add prop in domain")
# Return the Turtle
return gen_result["ttl"]
print(SAMPLE_HEADER)
for clazz in model["classes"]:
if is_bnode(clazz) == False:
runnable_ttl = generate_sample(model, clazz)
print(runnable_ttl)
###Output
_____no_output_____
###Markdown
ValidationNow let's validate. We will compare the structure of our example org with the expected interface determined above.
###Code
# validate instances against model
def validate_instances(model):
# pull instances and their triples
INSTANCE_QUERY = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
select * where {
?class rdf:type owl:Class .
?inst rdf:type ?class .
?inst ?prop ?obj .
OPTIONAL { ?obj rdf:type ?objType . } .
BIND (isLiteral(?obj) as ?lit)
} order by ?class ?inst
"""
    # validation ignores the typical naming/annotation stuff
IGNORES = [
"http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
"http://www.w3.org/2000/01/rdf-schema#label",
"http://www.w3.org/2000/01/rdf-schema#comment",
"http://www.w3.org/2004/02/skos/core#prefLabel",
"http://www.w3.org/2004/02/skos/core#altLabel"
]
# run the instance query and transform into hierarchical result
# hierarchy: class - instance - prop
# easier to validate in that form
def run_inst_query():
res = run_query(INSTANCE_QUERY)
hier_result = {}
for rec in res:
clazz = rec["class"]["value"]
inst = rec["inst"]["value"]
prop = rec["prop"]["value"]
obj = rec["obj"]["value"]
obj_type = rec["objType"]["value"] if "objType" in rec else ""
lit = True if rec["lit"]["value"] == "true" else False
if not(clazz in hier_result):
hier_result[clazz] = { "clazz": clazz, "instances": {} }
if not(inst in hier_result[clazz]["instances"]):
hier_result[clazz]["instances"][inst] = { "instance": inst, "props": [] }
hier_result[clazz]["instances"][inst]["props"].append({
"prop": prop,
"object": obj,
"objectType": obj_type,
"literal": lit
})
return hier_result
# print a finding for validation summary
def print_finding(clazz, inst, prop_assignment, finding):
print(f"""
Finding in class: {clazz}
Instance: {inst}.
Prop assignment: {prop_assignment}
Finding: {finding}
""")
    # pull the instances and validate! notice we navigate the hierarchical form
# The logic is clear if you focus on each call to print_finding.
inst_summary = run_inst_query()
for clazz in inst_summary:
if clazz in model["classes"]:
class_spec = model["classes"][clazz]
for inst in inst_summary[clazz]["instances"]:
                # track stuff instance-wide. want to check it has keys, has at most one functional, has at least one restrictSome
tracker = {
"keys": {},
"functionals": {},
"restrictSome": {}
}
for k in class_spec["keyEntries"]:
tracker["keys"][k] = 0
dprops = class_spec["discoveredProps"]
rprops = class_spec["restrictedProps"]
for prop_assignment in inst_summary[clazz]["instances"][inst]["props"]:
prop = prop_assignment["prop"]
obj = prop_assignment["object"]
obj_type = prop_assignment["objectType"]
literal = prop_assignment["literal"]
if list_has_value(IGNORES, prop) == False:
# key usage
if prop in tracker["keys"]:
tracker["keys"][prop] += 1
# check against restriction
checked_as_restriction = False
for r in rprops:
lvals = r["lvals"]
ivals = r["ivals"]
alls = r["all"]
somes = r["some"]
if r["prop"] == prop:
checked_as_restriction = True
if len(lvals) > 0:
if literal == False:
print_finding(clazz, inst, prop_assignment, f"Restriction requires literal value {lvals} but obj is not literal")
elif list_has_value(lvals, obj) == False:
print_finding(clazz, inst, prop_assignment, f"Restriction requires literal value {lvals} but obj not among these")
elif len(ivals) > 0:
if literal:
print_finding(clazz, inst, prop_assignment, f"Restriction requires object value {ivals} but obj is literal")
elif list_has_value(ivals, obj) == False:
print_finding(clazz, inst, prop_assignment, f"Restriction requires object value {ivals} but obj not among these")
elif len(alls) > 0:
if list_has_value(alls, obj_type) == False:
print_finding(clazz, inst, prop_assignment, f"Restriction requires all values from {alls} but obj type is not among these")
elif len(somes) > 0:
# for the someValues, just keep a count; will deal with it below
if not(prop in tracker["restrictSome"]):
tracker["restrictSome"][prop] = {}
for s in somes:
tracker["restrictSome"][prop][s] = 0
if list_has_value(somes, obj_type):
tracker["restrictSome"][prop][obj_type] += 1
# discovered prop match - check
if checked_as_restriction == False and list_has_value(dprops, prop):
prop_def = model["props"][prop]
prop_type= prop_def["propType"]
all_ranges = []
for rg in model["props"][prop]["ranges"] + model["props"][prop]["rangeUs"]:
if is_bnode(rg) == False:
all_ranges.append(rg)
if literal and prop_type == "ObjectProperty":
print_finding(clazz, inst, prop_assignment, f"Prop type is {prop_type} but object is literal")
if literal==False and prop_type == "DatatypeProperty":
print_finding(clazz, inst, prop_assignment, f"Prop type is {prop_type} but object is not a literal")
if len(all_ranges) > 0 and list_has_value(all_ranges, obj_type) == False:
print_finding(clazz, inst, prop_assignment, f"Prop ranges are {all_ranges} but object type is not among these")
if prop_def["functional"]:
# for functional, keep a count and deal with it below
if not(prop in tracker["functionals"]):
tracker["functionals"][prop] = 0
tracker["functionals"][prop] += 1
if checked_as_restriction == False and list_has_value(dprops, prop) ==False:
print_finding(clazz, inst, prop_assignment, f"Unrecognized prop")
# now check tracker
for ko in tracker["keys"]:
num_occ = tracker["keys"][ko]
if num_occ != 1:
print_finding(clazz, inst, None, f"Key property {ko} appears {num_occ} times. Should be once.")
for f in tracker["functionals"]:
num_occ = tracker["functionals"][f]
if num_occ > 1:
print_finding(clazz, inst, None, f"Functional property {f} appears {num_occ} times. Should be once.")
for p in tracker["restrictSome"]:
for s in tracker["restrictSome"][p]:
num_occ = tracker["restrictSome"][p][s]
if num_occ < 1:
print_finding(clazz, inst, None, f"Restriction on property {p} having some values from {s} not met.")
validate_instances(model)
###Output
_____no_output_____
###Markdown
Cleanup
If you messed up and need to reload the ontology or sample data, be careful: the data contains many blank nodes, so the reload is not idempotent. It is better to start from a clean slate before reloading. The script below offers several options: dropping one of the three named graphs loaded above, or deleting all triples. We recommend dropping each of the three named graphs.
###Code
%%sparql
# Delete the org ontology
drop graph <http://www.w3.org/ns/org>
# Delete the examples
#drop graph <http://amazonaws.com/db/neptune/examples/ontology/org>
# Delete the tester ontology
#drop graph <http://amazonaws.com/db/neptune/examples/ontology/tester>
# Delete all triples
#delete {?s ?p ?o} where {
# ?s ?p ?o
#}
###Output
_____no_output_____ |
Data Analysis/01.Data Analysis Process/01.Reading CSV Files.ipynb | ###Markdown
Quiz 1
Use read_csv() to read in cancer_data.csv and use an appropriate column as the index. Then, use .head() on your dataframe to see if you've done this correctly. Hint: First call read_csv() without parameters and then head() to see what the data looks like.
###Code
import pandas as pd
df = pd.read_csv("../00.Data/cancer_data.csv", index_col=["id"])
df.head()
###Output
_____no_output_____
###Markdown
Quiz 2
Use read_csv() to read in powerplant_data.csv with more descriptive column names based on the description of features on this [website](http://archive.ics.uci.edu/ml/datasets/combined+cycle+power+plant). Then, use .head() on your dataframe to see if you've done this correctly. Hint: Like in the previous quiz, first call read_csv() without parameters and then head() to see what the data looks like.
###Code
df = pd.read_csv("../00.Data/powerplant_data.csv")
df.head()
col_names = ["Temperature (T)", "Exhaust Vacuum (V)", "Ambient Pressure (AP)", "Relative Humidity (RH)", "Net hourly electrical energy output (EP)"]
df.columns = col_names
df.head()
###Output
_____no_output_____ |
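###Markdown
A side note — a minimal alternative sketch, not part of the quiz solution: the renaming can also be done in the read itself. Passing names to read_csv() together with header=0 tells pandas to discard the file's own header row and use the supplied names instead.
###Code
# equivalent one-step read: header=0 drops the original header row and
# applies col_names (defined above) as the column names
df = pd.read_csv("../00.Data/powerplant_data.csv", header=0, names=col_names)
df.head()
###Output
_____no_output_____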
08/08_claret.ipynb | ###Markdown
Practical Work 08 - Clustering algorithms
- Author: *Romain Claret*
- Due date: *12.11.2018*
Exercise 1 - Getting the data
a) Load the two given datasets
###Code
# import pickle
# Loading with the stdlib pickle module failed with:
#   'ascii' codec can't decode byte 0xee in position 6: ordinal not in range(128)
# so we use pandas.read_pickle instead, which handles the encoding:
# X1, label1 = pickle.load(open("dataset_1.pkl", "rb"))
import pandas as pd
X1,label1 = pd.read_pickle("dataset_1.pkl")
X2,label2 = pd.read_pickle("dataset_2.pkl")
###Output
_____no_output_____
###Markdown
b) Visualize the data using a different color for each unique label
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def get_plot_colors(label):
plot_color_label = []
for i in label:
if i == 0:
plot_color_label.append("blue")
elif i == 1:
plot_color_label.append("orange")
else:
plot_color_label.append("green")
return plot_color_label
def plot_clusters(X, colors, size, m="o"):
plt.scatter([X[i][0] for i in range(len(X))],
[X[i][1] for i in range(len(X))],
c=colors,
s=size,
marker=m)
plot_clusters(X1, label1, 1)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 2 - The k-means algorithm
a) Initialise the centroids μ1, μ2, ..., μK
###Code
import random
def get_rand_centroids(X, K):
random_centroids = []
xmin = np.min([X[i][0] for i in range(len(X))])
ymin = np.min([X[i][1] for i in range(len(X))])
xmax = np.max([X[i][0] for i in range(len(X))])
ymax = np.max([X[i][1] for i in range(len(X))])
for i in range(K):
random_centroids.append([random.uniform(xmin, xmax),
random.uniform(ymin, ymax)])
return random_centroids
K=10
random_centroids = get_rand_centroids(X2, K)
plot_clusters(X2, get_plot_colors(label2), 1)
plot_clusters(random_centroids, "red", 50, "x")
plt.show()
###Output
_____no_output_____
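###Markdown
As a side experiment — a minimal sketch, not part of the exercise statement — the centroids can instead be drawn from the data points themselves (Forgy initialisation), which guarantees that every centroid starts in a populated region of the space:
###Code
# Forgy initialisation (sketch): pick K distinct data points as starting centroids
def get_sample_centroids(X, K):
    indices = random.sample(range(len(X)), K)
    return [list(X[i]) for i in indices]

sample_centroids = get_sample_centroids(X2, K)
plot_clusters(X2, get_plot_colors(label2), 1)
plot_clusters(sample_centroids, "red", 50, "x")
plt.show()
###Output
_____no_output_____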
###Markdown
b) Find the closest centroid for each point and reevaluate the centroids
###Code
from scipy.spatial import distance
def my_kmean(k_X, k_centroids, k_current_predict):
centroids_updated = 0
for i in range(len(k_X)):
current_cluster = 0
current_dist = distance.euclidean(k_X[i], k_centroids[0])
for k in range(1, len(k_centroids)):
dist = distance.euclidean(k_X[i], k_centroids[k])
if dist < current_dist:
current_dist = dist
current_cluster = k
if k_current_predict[i] != current_cluster:
centroids_updated += 1
k_current_predict[i] = current_cluster
for k in range(len(k_centroids)):
x_move, y_move, count = 0, 0, 0
for i in range(len(k_current_predict)):
if k_current_predict[i] == k:
x_move += k_X[i][0]
y_move += k_X[i][1]
count += 1
        if count != 0:
            # update the centroids passed in as k_centroids (the same list the
            # caller provided), not the global variable of the same name
            k_centroids[k][0] = x_move / count
            k_centroids[k][1] = y_move / count
#print(k_centroids)
return centroids_updated, k_centroids, k_current_predict
###Output
_____no_output_____
###Markdown
c) Return the centroids and the predicted labels.
###Code
def plot_update(X, updates, centro, predict):
plot_clusters(X, get_plot_colors(predict), 1)
plot_clusters(centro, "red", 50, "x")
plt.title("Updates: "+ str(updates))
plt.show()
current_predict = np.zeros(len(X1))
centroids = get_rand_centroids(X1, 3)
updates_hist = []
predict_hist = []
centroids_hist = []
# keep copies: the k-means loop below mutates these objects in place
init_predict = current_predict.copy()
init_centroids = [c[:] for c in centroids]
plot_update(X1, "Initial", init_centroids, init_predict)
while True:
updates, centroids, current_predict = my_kmean(X1,
centroids,
current_predict)
updates_hist.append(updates)
    # append copies: my_kmean mutates current_predict and centroids in place,
    # so appending the bare references would make every snapshot identical
    predict_hist.append(current_predict.copy())
    centroids_hist.append([c[:] for c in centroids])
plot_update(X1, updates, centroids, current_predict)
if updates == 0:
break
#print(centroids_hist[0]==centroids_hist[1])
###Output
_____no_output_____
###Markdown
Exercise 3 - Evaluate your model
- Visualize your convergence criteria over the epochs (one epoch is a complete visit of the training set) using dataset 1.
###Code
plt.plot(np.arange(0,len(updates_hist)), updates_hist)
plt.title("Convergence over Epochs")
plt.xlabel("Epoch")
plt.ylabel("Updates")
plt.show()
###Output
_____no_output_____
###Markdown
- Visualize the output of your k-means on dataset 1. Note: the history lists initially appeared not to append correctly; the cause is that `my_kmean` mutates the centroid and prediction objects in place, so the lists ended up holding repeated references to the same objects. Appending copies (as done above) fixes this, and the per-epoch snapshots can be replayed below.
###Code
plot_update(X1, "Initial", init_centroids, init_predict)
for i in range(len(centroids_hist)):
    plot_update(X1,
                updates_hist[i],
                centroids_hist[i],
                predict_hist[i])
###Output
_____no_output_____
###Markdown
- Do you experience sensitivity to the initial values of the centroids? Is your strategy for initialization working well in most cases?
  - It works in most cases on this dataset; however, the convergence over epochs varies from run to run. (A data-point-based initialisation sketch appears after part 2a above.)
- Document your convergence criteria. Could you think of other convergence criteria?
  - The criterion used here is the number of cluster-assignment updates per epoch, driven by the Euclidean distance; a centroid-displacement criterion is also possible (see the sketch at the end of this notebook).
- Visualize your convergence criteria over time using dataset 2.
###Code
current_predict = np.zeros(len(X2))
centroids = get_rand_centroids(X2, 3)
updates_hist = []
plot_update(X2, "Initial", centroids, current_predict)
while True:
updates, centroids, current_predict = my_kmean(X2,
centroids,
current_predict)
updates_hist.append(updates)
if updates == 0:
break
plot_update(X2, updates, centroids, current_predict)
plt.plot(np.arange(0,len(updates_hist)), updates_hist)
plt.title("Convergence over Epochs")
plt.xlabel("Epoch")
plt.ylabel("Updates")
plt.show()
###Output
_____no_output_____ |
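###Markdown
As a sketch of the alternative convergence criterion mentioned in Exercise 3 (assuming the same `my_kmean` interface as above): stop when the total Euclidean distance moved by the centroids during one epoch falls below a tolerance, instead of waiting for zero assignment updates.
###Code
# alternative convergence criterion (sketch): total centroid displacement per epoch
def total_centroid_shift(old_centroids, new_centroids):
    return sum(distance.euclidean(o, n) for o, n in zip(old_centroids, new_centroids))

current_predict = np.zeros(len(X1))
centroids = get_rand_centroids(X1, 3)
tolerance = 1e-4
while True:
    previous = [c[:] for c in centroids]  # snapshot before the update
    _, centroids, current_predict = my_kmean(X1, centroids, current_predict)
    if total_centroid_shift(previous, centroids) < tolerance:
        break
plot_update(X1, "converged (centroid shift)", centroids, current_predict)
###Output
_____no_output_____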
Phyton_Exercise_1.ipynb | ###Markdown
Matrix and its Operations
###Code
import numpy as np
a = np.array (([-5,0,],[4,1]))
b = np.array (([6,-3],[2,3]))
print("a+b")
print(a+b)
print()
print("b-a")
print(b-a)
print()
print("a-b")
print(a-b)
###Output
a+b
[[ 1 -3]
[ 6 4]]
b-a
[[11 -3]
[-2 2]]
a-b
[[-11 3]
[ 2 -2]]
###Markdown
Matrix and Its Operations
###Code
import numpy as np
a = np.array([[-5,0],[4,1]])
b = np.array([[6,-3],[2,3]])
print(a)
print()
print(b)
print()
print (a+b)
print()
print (b-a)
print()
print (a-b)
print()
###Output
[[-5 0]
[ 4 1]]
[[ 6 -3]
[ 2 3]]
[[ 1 -3]
[ 6 4]]
[[11 -3]
[-2 2]]
[[-11 3]
[ 2 -2]]
|
Contrib_RNN.ipynb | ###Markdown
Read File
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

def convert_datetime(df):
    # combine the Date and Hour columns into a single hourly timestamp
    df['Date_Hour'] = pd.to_datetime(df.Date) + pd.to_timedelta(df.Hour, unit='h')
    return df
df = pd.read_csv("data/ZonalDemands_2003-2017.csv")
df['Date'] = pd.to_datetime(df['Date'])
df = convert_datetime(df)
df.set_index('Date_Hour', inplace=True)
# remove the per-zone columns, keeping only the total Ontario demand
df.drop(df.columns[3:], axis=1, inplace=True)
df.head()
df.drop('Date', axis=1, inplace=True)
df.drop('Hour', axis=1, inplace=True)
df.values  # as_matrix() is deprecated in recent pandas; .values is equivalent
###Output
_____no_output_____
###Markdown
EDA
###Code
df.plot()
# remove the incomplete pre-2004 data (the first 8761 hourly rows)
df = df[8761:]
df.plot()
df.describe()
sns.boxplot(df, palette='deep')
sns.violinplot(df,palette='Dark2')
###Output
_____no_output_____
###Markdown
RNN Preprocessing
###Code
# log-transform to shrink values and help induce stationarity
df_log = pd.Series(df['Total Ontario'].apply(lambda x: np.log(x)))
df_log.head()
ts = np.array(df_log)
num_periods = 24
f_horizon = 1
x_data = ts[:(len(ts)-(len(ts) % num_periods))]
x_batches = x_data.reshape(-1,24,1)
y_data = ts[1:(len(ts)-(len(ts) % num_periods))+f_horizon]
y_batches = y_data.reshape(-1,24,1)
print(len(x_batches))
print(x_batches.shape)
print(x_batches[0:2])
print(y_batches[0:1])
print(y_batches.shape)
def test_data(series, forecast, num_periods):
    # use the series passed in as an argument rather than the global ts
    test_x_setup = series[-(num_periods + forecast):]
    testX = test_x_setup[:num_periods].reshape(-1, num_periods, 1)
    testY = series[-(num_periods):].reshape(-1, num_periods, 1)
    return testX, testY
X_test, Y_test = test_data(ts,f_horizon,num_periods)
print(X_test.shape)
print(X_test)
###Output
(1, 24, 1)
[[[9.71679646]
[9.684585 ]
[9.65771488]
[9.64374485]
[9.63508509]
[9.64212279]
[9.65425667]
[9.67388569]
[9.72112599]
[9.7562049 ]
[9.77343581]
[9.78013277]
[9.78790834]
[9.78143291]
[9.77029896]
[9.76995616]
[9.78526692]
[9.83037908]
[9.89490079]
[9.87734889]
[9.83659961]
[9.81378164]
[9.78537946]
[9.75429125]]]
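###Markdown
A quick sanity check — a small sketch added here, not part of the original walkthrough — confirms the one-step-ahead alignment of the windows: each y window must equal its x window shifted forward by one observation.
###Code
# y is x shifted by one step, so x[1:] must line up with y[:-1] within a window
assert np.allclose(x_batches[0, 1:, 0], y_batches[0, :-1, 0])
print("windows aligned:", x_batches.shape, y_batches.shape)
###Output
_____no_output_____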
###Markdown
Build RNN Model https://mapr.com/blog/deep-learning-tensorflow/
###Code
import tensorflow as tf
import tensorflow.contrib.learn as tflearn
import tensorflow.contrib.layers as tflayers
from tensorflow.contrib.learn.python.learn import learn_runner
import tensorflow.contrib.metrics as metrics
import tensorflow.contrib.rnn as rnn
tf.reset_default_graph()
num_periods = 24
inputs = 1
hidden = 200
output = 1
X = tf.placeholder(tf.float32,[None, num_periods,inputs])
y = tf.placeholder(tf.float32,[None, num_periods,output])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=hidden,activation=tf.nn.relu)
rnn_output, states = tf.nn.dynamic_rnn(basic_cell, X, dtype = tf.float32)
learning_rate = 0.001
stacked_rnn_output = tf.reshape(rnn_output, [-1,hidden])
stacked_outputs = tf.layers.dense(stacked_rnn_output,output)
outputs = tf.reshape(stacked_outputs,[-1,num_periods,output])
loss = tf.reduce_sum(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
Run Model
###Code
epochs = 1000
with tf.Session() as sess:
init.run()
for ep in range(epochs):
sess.run(training_op,feed_dict={X: x_batches, y: y_batches})
if ep % 100 == 0:
mse = loss.eval(feed_dict={X: x_batches, y: y_batches})
print(ep,"\tMSE", mse)
y_pred = sess.run(outputs,feed_dict={X: X_test})
print(y_pred)
# invert the log transform to recover the original demand scale
y_pred_trans = np.exp(y_pred)
Y_test_trans = np.exp(Y_test)
y_pred_trans
plt.title("Forecast vs Actual (HL = 100, Epochs = 1000, MSE = 113)",fontsize=14)
plt.plot(pd.Series(np.ravel(Y_test_trans)), label = "Actual")
plt.plot(pd.Series(np.ravel(y_pred_trans)), label = "Forecast")
plt.legend()
plt.title("Forecast vs Actual (HL = 200, Epochs = 1000, MSE = 70)",fontsize=14)
plt.plot(pd.Series(np.ravel(Y_test_trans)), label = "Actual")
plt.plot(pd.Series(np.ravel(y_pred_trans)), label = "Forecast")
plt.legend()
###Output
_____no_output_____ |
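###Markdown
As a follow-up — a minimal sketch reusing the back-transformed series computed above — the forecast error can also be reported on the original demand scale rather than the log scale used during training:
###Code
# error on the original (exponentiated) demand scale
errors = np.ravel(Y_test_trans) - np.ravel(y_pred_trans)
print("MAE :", np.mean(np.abs(errors)))
print("RMSE:", np.sqrt(np.mean(errors ** 2)))
###Output
_____no_output_____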
conflowgen/visual_validation/inspect_container_dwell_times.ipynb | ###Markdown
Inspect container dwell times
###Code
import os
import pathlib
import ipywidgets as widgets
import pandas as pd
from IPython.display import Markdown
import matplotlib.pyplot as plt
from matplotlib import gridspec
folder_of_this_jupyter_notebook = pathlib.Path.cwd()
export_folder = os.path.join(
folder_of_this_jupyter_notebook,
os.pardir,
"data",
"exports"
)
folders = [
folder
for folder in os.listdir(export_folder)
if os.path.isdir(
os.path.join(
export_folder,
folder
)
)
]
dropdown_field = widgets.Dropdown(
options=list(reversed(folders)), # always show the newest first
description='',
layout={'width': 'max-content'}
)
dropdown_label = widgets.Label(value="Select the exported output: ")
display(widgets.HBox([dropdown_label, dropdown_field]))
path_to_selected_exported_content = os.path.join(
export_folder,
dropdown_field.value
)
print("Working with directory " + path_to_selected_exported_content)
###Output
_____no_output_____
###Markdown
Load containers
###Code
path_to_containers = os.path.join(
path_to_selected_exported_content,
"containers.csv"
)
print(f"Opening {path_to_containers}")
df_containers = pd.read_csv(path_to_containers, index_col="id", dtype={
"delivered_by_truck": "Int64",
"picked_up_by_truck": "Int64",
"delivered_by_large_scheduled_vehicle": "Int64",
"picked_up_by_large_scheduled_vehicle": "Int64"
})
df_containers
df_containers.groupby(by="delivered_by_large_scheduled_vehicle").count()
###Output
_____no_output_____
###Markdown
Load scheduled vehicles
###Code
path_to_deep_sea_vessels = os.path.join(
path_to_selected_exported_content,
"deep_sea_vessels.csv"
)
path_to_feeders = os.path.join(
path_to_selected_exported_content,
"feeders.csv"
)
path_to_barges = os.path.join(
path_to_selected_exported_content,
"barges.csv"
)
path_to_trains = os.path.join(
path_to_selected_exported_content,
"trains.csv"
)
scheduled_vehicle_file_paths = {
"deep_sea_vessels": path_to_deep_sea_vessels,
"feeders": path_to_feeders,
"barges": path_to_barges,
"trains": path_to_trains
}
for name, path in scheduled_vehicle_file_paths.items():
print("Check file exists for vehicle " + name + ".")
assert os.path.isfile(path)
print("All files exist.")
for name, path in list(scheduled_vehicle_file_paths.items()):
print("Check file size for vehicle " + name)
size_in_bytes = os.path.getsize(path)
if size_in_bytes <= 4:
print(" This file is empty, ignoring it in the analysis from now on")
del scheduled_vehicle_file_paths[name]
scheduled_vehicle_dfs = {
name: pd.read_csv(path, index_col=0, parse_dates=["scheduled_arrival"])
for name, path in scheduled_vehicle_file_paths.items()
}
for name, df in scheduled_vehicle_dfs.items():
display(Markdown("#### " + name))
scheduled_vehicle_dfs[name]["vehicle_type"] = name
display(scheduled_vehicle_dfs[name].sort_values(by="scheduled_arrival"))
df_large_scheduled_vehicle = pd.concat(
scheduled_vehicle_dfs.values()
)
df_large_scheduled_vehicle.sort_index(inplace=True)
df_large_scheduled_vehicle.info()
df_large_scheduled_vehicle
###Output
_____no_output_____
###Markdown
Plot arrival pattern.
###Code
plt.figure(figsize=(15, 3))
x, y, z = [], [], []
y_axis = []
y_scaling_factor = 2
for i, (name, df) in enumerate(scheduled_vehicle_dfs.items()):
y_axis.append((i/y_scaling_factor, name))
if len(df) == 0:
continue
arrivals_and_capacity = df[["scheduled_arrival", "moved_capacity"]]
for _, row in arrivals_and_capacity.iterrows():
event = row["scheduled_arrival"]
moved_capacity = row["moved_capacity"]
x.append(event)
y.append(i / y_scaling_factor)
z.append(moved_capacity / 20)
plt.xticks(rotation=45)
plt.yticks(*list(zip(*y_axis)))
plt.scatter(x, y, s=z, color='gray')
plt.ylim([-0.5, 1.5])
plt.show()
vehicle_to_teu_to_deliver = {}
vehicle_to_teu_to_pickup = {}
for i, container in df_containers.iterrows():
teu = container["length"] / 20
assert 1 <= teu <= 2.5
if container["delivered_by"] != "truck":
vehicle = container["delivered_by_large_scheduled_vehicle"]
if vehicle not in vehicle_to_teu_to_deliver.keys():
vehicle_to_teu_to_deliver[vehicle] = 0
vehicle_to_teu_to_deliver[vehicle] += teu
if container["picked_up_by"] != "truck":
vehicle = container["picked_up_by_large_scheduled_vehicle"]
if vehicle not in vehicle_to_teu_to_pickup.keys():
vehicle_to_teu_to_pickup[vehicle] = 0
vehicle_to_teu_to_pickup[vehicle] += teu
vehicle_to_teu_to_deliver, vehicle_to_teu_to_pickup
s_delivery = pd.Series(vehicle_to_teu_to_deliver)
s_pickup = pd.Series(vehicle_to_teu_to_pickup)
df_large_scheduled_vehicle["capacity_delivery"] = s_delivery
df_large_scheduled_vehicle["capacity_pickup"] = s_pickup
df_large_scheduled_vehicle
###Output
_____no_output_____
###Markdown
Let's visualize in red where the transportation capacities were exceeded
###Code
ax = df_large_scheduled_vehicle.plot.scatter(
x="capacity_in_teu",
y="capacity_delivery"
)
df_large_scheduled_vehicle.loc[
df_large_scheduled_vehicle["capacity_in_teu"] < df_large_scheduled_vehicle["capacity_delivery"]
].plot.scatter(
x="capacity_in_teu",
y="capacity_delivery",
ax=ax,
color="r"
)
plt.show()
ax = df_large_scheduled_vehicle.plot.scatter(
x="moved_capacity",
y="capacity_delivery"
)
df_large_scheduled_vehicle.loc[
df_large_scheduled_vehicle["moved_capacity"] < df_large_scheduled_vehicle["capacity_delivery"]
].plot.scatter(
x="moved_capacity",
y="capacity_delivery",
color="r",
ax=ax
)
plt.show()
free_delivery_capacity = df_large_scheduled_vehicle["moved_capacity"] - df_large_scheduled_vehicle["capacity_delivery"]
free_delivery_capacity.plot.hist()
plt.show()
ax = df_large_scheduled_vehicle.plot.scatter(
x="capacity_in_teu",
y="capacity_pickup"
)
plt.show()
df_large_scheduled_vehicle.loc[
df_large_scheduled_vehicle["capacity_in_teu"] < df_large_scheduled_vehicle["capacity_pickup"]
].plot.scatter(
x="capacity_in_teu",
y="capacity_pickup",
ax=ax,
color="r"
)
plt.show()
ax = df_large_scheduled_vehicle.plot.scatter(
x="moved_capacity",
y="capacity_pickup"
)
transportation_buffer = 1.2
df_large_scheduled_vehicle.loc[
df_large_scheduled_vehicle["moved_capacity"] * transportation_buffer < df_large_scheduled_vehicle["capacity_pickup"]
].plot.scatter(
x="moved_capacity",
y="capacity_pickup",
color="r",
ax=ax
)
plt.show()
free_delivery_capacity = df_large_scheduled_vehicle["moved_capacity"] * 1.2 - df_large_scheduled_vehicle["capacity_pickup"]
plt.xlabel("Difference between moved capacity and the capacity of picked up containers")
free_delivery_capacity.plot.hist()
plt.show()
###Output
_____no_output_____
###Markdown
If there was no red dot in any of the graphs above, the following should work smoothly.
###Code
for large_scheduled_vehicle_id in df_large_scheduled_vehicle.index:
delivered_teu = vehicle_to_teu_to_deliver.get(large_scheduled_vehicle_id, 0)
picked_up_teu = vehicle_to_teu_to_pickup.get(large_scheduled_vehicle_id, 0)
capacity_in_teu = df_large_scheduled_vehicle.loc[large_scheduled_vehicle_id, "capacity_in_teu"]
assert delivered_teu <= capacity_in_teu, f"{delivered_teu} is more than {capacity_in_teu} for vehicle "\
f"with id {large_scheduled_vehicle_id}"
assert picked_up_teu <= capacity_in_teu, f"{picked_up_teu} is more than {capacity_in_teu} for vehicle "\
f"with id {large_scheduled_vehicle_id}"
###Output
_____no_output_____
###Markdown
Load trucks
###Code
path_to_trucks = os.path.join(
path_to_selected_exported_content,
"trucks.csv"
)
assert os.path.isfile(path_to_trucks)
df_truck = pd.read_csv(
path_to_trucks, index_col=0,
parse_dates=[
# Pickup
"planned_container_pickup_time_prior_berthing",
"realized_container_pickup_time",
# Delivery
"planned_container_delivery_time_at_window_start",
"realized_container_delivery_time"
])
df_truck
assert len(df_truck[df_truck["picks_up_container"] & pd.isna(df_truck["realized_container_pickup_time"])]) == 0, \
"If a truck picks up a container, it should always have a realized container pickup time"
assert len(df_truck[df_truck["delivers_container"] & pd.isna(df_truck["realized_container_delivery_time"])]) == 0, \
"If a truck deliver a container, it should always have a realized container delivery time"
assert len(df_truck[~(df_truck["delivers_container"] | df_truck["picks_up_container"])]) == 0, \
"There is no truck that neither delivers or picks up a container"
arrivals = pd.DataFrame({"x": x, "y": y})
arrivals.set_index("x").plot(marker=".", linestyle="None")
plt.show()
arrivals = arrivals.set_index("x")
plt.figure(figsize=(15, 5))
container_deliveries_by_truck = df_truck.groupby(
pd.Grouper(key='realized_container_delivery_time', freq='H')
).count().fillna(0)
ax = container_deliveries_by_truck["delivers_container"].plot()
ax.set_title("Number of trucks arriving in each hour that deliver a container")
ax2 = arrivals.plot(color='red', ax=ax, marker=".", linestyle="None", secondary_y=True)
ticks, labels = list(zip(*y_axis))
ax2.set_yticks(ticks)
ax2.set_yticklabels(labels)
plt.show()
fig = plt.figure(figsize=(10, 5))
# set height ratios for subplots
gs = gridspec.GridSpec(2, 1, height_ratios=[2, 1])
# the upper subplot
ax1 = plt.subplot(gs[0])
plt.title("Relationship of vessels and truck arrivals")
ax1.set_ylabel("Number trucks per hour")
ax12 = container_deliveries_by_truck["delivers_container"].plot(ax=ax1, color="dimgray")
ax12.set_xlim([pd.Timestamp("2021-06-15"), pd.Timestamp(pd.Timestamp("2021-08-15"))])
# the lower subplot
ax2 = plt.subplot(gs[1], sharex=ax12)
arrivals.plot(color='gray', ax=ax2, marker=".", linestyle="None", legend=False)
ax2.scatter(x, y, s=z, color='gray')
ticks, labels = list(zip(*y_axis))
ax2.set_yticks(ticks)
ax2.set_yticklabels([l.capitalize().replace("_", " ") for l in labels])
ax2.set_ylim([-0.5, 2])
ax2.set_xlabel("")
plt.show()
plt.figure(figsize=(15, 5))
container_pickups = df_truck.groupby(
pd.Grouper(key='realized_container_pickup_time', freq='H')
).count().fillna(0)
scaling_factor = 7
y_pos_scaled = [y_i * scaling_factor for y_i in y]
ax = container_pickups["delivers_container"].plot()
ax.set_title("Number of trucks arriving in each hour that pick up a container")
ax2 = ax.twinx()
ax.scatter(x, y, color='y', s=70)
ticks, labels = list(zip(*y_axis))
ax2.set_yticks(ticks)
ax2.set_yticklabels(labels)
plt.show()
container_pickups.groupby(container_pickups.index.hour).mean()["picks_up_container"].plot()
plt.title("Container pickups at each hour of the day")
plt.show()
container_deliveries_by_truck.groupby(container_deliveries_by_truck.index.hour).mean()["delivers_container"].plot()
plt.title("Container deliveries at each hour of the day")
plt.show()
###Output
_____no_output_____
###Markdown
This is the probability of a truck showing up in each hour of the week (indexed by hour; one week spans 168 hours).
###Code
ax = container_pickups.groupby(container_pickups.index.weekday).mean()["picks_up_container"].plot.bar()
ax.set_xlabel("")
plt.title("Container pickups at each day of the week")
plt.show()
ax = container_deliveries_by_truck.groupby(container_deliveries_by_truck.index.weekday).mean()["delivers_container"].plot.bar()
ax.set_xlabel("")
plt.title("Container deliveries at each day of the week")
plt.show()
plt.plot([0.0, 0.0, 0.0, 0.0, 0.0, 0.004136582430806258, 0.008498796630565584, 0.007295427196149218,
0.008348375451263539, 0.011131167268351384, 0.01286101083032491, 0.015418170878459687, 0.012936221419975932,
0.015568592057761732, 0.01391395908543923, 0.011732851985559567, 0.013161853188929002, 0.011206377858002407,
0.008498796630565584, 0.007069795427196149, 0.0053399518652226235, 0.003309265944645006, 0.00210589651022864,
0.002331528279181709, 0.0019554753309265946, 0.002030685920577617, 0.0017298435619735259, 0.002331528279181709,
0.0025571600481347776, 0.00631768953068592, 0.009401323706377859, 0.008197954271961492, 0.009852587244283995,
0.012033694344163659, 0.01338748495788207, 0.01233453670276775, 0.014290012033694344, 0.014666064981949459,
0.014741275571600482, 0.012409747292418772, 0.01105595667870036, 0.01000300842358604, 0.007295427196149218,
0.005941636582430806, 0.0053399518652226235, 0.0030836341756919376, 0.0019554753309265946,
0.0024819494584837547, 0.0018050541516245488, 0.0019554753309265946, 0.0013537906137184115,
0.0009777376654632973, 0.002331528279181709, 0.007821901323706379, 0.009100481347773767, 0.00947653429602888,
0.009927797833935019, 0.01263537906137184, 0.011958483754512635, 0.015192539109506619, 0.014741275571600482,
0.01647111913357401, 0.015342960288808664, 0.01233453670276775, 0.011883273164861612, 0.009626955475330927,
0.008348375451263539, 0.006167268351383875, 0.004362214199759326, 0.004061371841155234, 0.002331528279181709,
0.002331528279181709, 0.001128158844765343, 0.0012785800240673888, 0.00105294825511432, 0.0009025270758122744,
0.0018802647412755715, 0.005490373044524669, 0.01022864019253911, 0.011206377858002407, 0.01105595667870036,
0.01210890493381468, 0.015568592057761732, 0.01654632972322503, 0.019780385078219012, 0.018050541516245487,
0.01782490974729242, 0.016095066185318894, 0.014064380264741275, 0.012560168471720819, 0.008047533092659447,
0.006468110709987966, 0.006092057761732852, 0.0042870036101083035, 0.003234055354993983, 0.00157942238267148,
0.002707581227436823, 0.002030685920577617, 0.002331528279181709, 0.00157942238267148, 0.0030084235860409147,
0.00631768953068592, 0.009551744885679904, 0.01210890493381468, 0.011356799037304452, 0.016170276774969915,
0.01647111913357401, 0.01752406738868833, 0.01759927797833935, 0.01752406738868833, 0.015944645006016847,
0.014365222623345367, 0.01210890493381468, 0.008874849578820699, 0.009551744885679904, 0.004512635379061372,
0.004663056558363418, 0.0029332129963898917, 0.0017298435619735259, 0.002331528279181709,
0.0025571600481347776, 0.00210589651022864, 0.0019554753309265946, 0.0009777376654632973,
0.0009777376654632973, 0.001128158844765343, 0.0013537906137184115, 0.001654632972322503, 0.001654632972322503,
0.0013537906137184115, 0.0006768953068592057, 0.00052647412755716, 0.0004512635379061372, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] + ([0] * 24)
)
plt.xlim([0, 168])
plt.show()
df_truck.loc[
df_truck["realized_container_pickup_time"].dt.weekday == 6
]
df_truck.loc[
df_truck["realized_container_delivery_time"].dt.weekday == 6
]
###Output
_____no_output_____
###Markdown
What percentage is that (Sunday deliveries relative to deliveries on all other days of the week)?
###Code
len(df_truck.loc[
df_truck["realized_container_delivery_time"].dt.weekday == 6
]) / len(df_truck.loc[
df_truck["realized_container_delivery_time"].dt.weekday != 6
]) * 100
delivered_and_picked_up_by_large_vessels_df = df_containers.loc[
~pd.isna(df_containers["picked_up_by_large_scheduled_vehicle"])
].join(
df_large_scheduled_vehicle, on="picked_up_by_large_scheduled_vehicle", rsuffix="_picked_up"
).loc[
~pd.isna(df_containers["delivered_by_large_scheduled_vehicle"])
].join(
df_large_scheduled_vehicle, on="delivered_by_large_scheduled_vehicle", rsuffix="_delivered_by"
)
delivered_and_picked_up_by_large_vessels_df
dwell_time = (
delivered_and_picked_up_by_large_vessels_df["scheduled_arrival"]
- delivered_and_picked_up_by_large_vessels_df["scheduled_arrival_delivered_by"]
)
dwell_time
dwell_time.describe()
dwell_time.astype("timedelta64[h]").plot.hist(bins=30, color="gray")
plt.xlabel("Hours between delivery and onward transportation (except trucks)")
plt.ylabel("Number container in July")
plt.show()
###Output
_____no_output_____ |
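###Markdown
As a small follow-up — a sketch reusing the `dwell_time` series computed above — the dwell times can be expressed in days and checked for negative values, which would indicate a container scheduled for pickup before its delivery:
###Code
# convert the timedelta series to fractional days and flag impossible values
dwell_time_days = dwell_time.dt.total_seconds() / (24 * 3600)
negative = dwell_time_days[dwell_time_days < 0]
print(f"{len(negative)} of {len(dwell_time_days)} containers have a negative dwell time")
dwell_time_days.describe()
###Output
_____no_output_____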
ALBA.ipynb | ###Markdown
Analysis of LSST Batch Activity at CC-IN2P3
*Source: [https://github.com/airnandez/alba](https://github.com/airnandez/alba)*
*Author: Fabio Hernandez, CC-IN2P3*
Introduction
The purpose of this notebook is to analyse accounting information emitted by GridEngine about the activity of LSST batch jobs executed at CC-IN2P3.
###Code
import pathlib
import datetime
import sys
import re
import collections
import IPython.display
print_md = IPython.display.Markdown
import pandas as pd
import numpy as np
import bokeh
import bokeh.plotting as bkh
import bokeh.models as bkhmodels
bkh.output_notebook()
###Output
_____no_output_____
###Markdown
Dependencies
This notebook uses the packages listed below. The links point to their documentation:
* bokeh
* numpy
* pandas
* python
###Code
table = f"""
These are the versions of those dependencies you are currently using:
| Component | Version |
| --------: | -------------------- |
| bokeh | {bokeh.__version__} |
| numpy | {np.version.version} |
| pandas | {pd.__version__} |
| python | {sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro} |
"""
print_md(table)
###Output
_____no_output_____
###Markdown
Load the accounting records into a Pandas dataframe
Set the data directory where the JSON accounting files are located. The repository of accounting data at CC-IN2P3 is `/sps/hep/ccin2p3/data/BatchFiles`. In that directory one can find one JSON file per month (e.g. `GE.accounting.2018.11.json`) which contains records of batch jobs of all groups using CC-IN2P3's batch service. Records for jobs relevant for LSST can be found at CC-IN2P3 at `/sps/lsst/groups/accounting`. Those records are copied to the LSST Science Platform under `/path/to/location/at/ncsa`. There is a JSON file per month (e.g. `GE.accounting.2018.11.lsst.json.gz`) which only contains accounting records for jobs submitted by members of the Unix group `lsst`:
###Code
# The locations below are searched for accounting data. The first location found is used.
search_paths = (
# For development, we assume that data files are located in the './data' directory,
# relative to the location of this notebook
pathlib.Path.cwd().joinpath('data'),
# Location at CC-IN2P3
pathlib.Path('/sps/lsst/groups/accounting'),
# Location at the LSST Science Platform at NCSA
pathlib.Path('/lsst/jhome/fabioh/DATA/accounting'),
# Location at NERSC
pathlib.Path('/global/projecta/projectdirs/lsst/global/in2p3/accounting'),
)
data_dir = None
for p in search_paths:
if p.exists():
data_dir = p
break
if not data_dir:
raise ValueError("Could not find the location of the accounting data")
d = data_dir if data_dir != pathlib.Path.cwd().joinpath('data') else './data'
print_md(f'Data directory: `{d}`')
###Output
_____no_output_____
###Markdown
Import all the files available in the data directory:
###Code
# To analyse ALL the accounting files in the directory use
# pattern = 'GE.accounting.*.json*'
# To analyse data of all months for a single year use:
# pattern = 'GE.accounting.2018.*.json*'
# To analyse data of some months of a given year use:
# pattern = 'GE.accounting.2018.1[0-1]*.json*'
patterns = ('GE.accounting.2018.*.json*', 'GE.accounting.2019.*.json*')
df = pd.DataFrame()
for p in sorted(patterns):
for path in sorted(data_dir.glob(p)):
print(f'Loading data file {pathlib.PurePosixPath(path).name}')
df = df.append(pd.read_json(path, lines=True))
rows = df.shape[0]
if rows == 0:
raise Exception("no job records could be loaded")
print_md(f'Loaded **{rows}** job accounting records')
###Output
Loading data file GE.accounting.2018.01.lsst.json.gz
Loading data file GE.accounting.2018.02.lsst.json.gz
Loading data file GE.accounting.2018.03.lsst.json.gz
Loading data file GE.accounting.2018.04.lsst.json.gz
Loading data file GE.accounting.2018.05.lsst.json.gz
Loading data file GE.accounting.2018.06.lsst.json.gz
Loading data file GE.accounting.2018.07.lsst.json.gz
Loading data file GE.accounting.2018.08.lsst.json.gz
Loading data file GE.accounting.2018.09.lsst.json.gz
Loading data file GE.accounting.2018.10.lsst.json.gz
Loading data file GE.accounting.2018.11.lsst.json.gz
Loading data file GE.accounting.2018.12.lsst.json.gz
Loading data file GE.accounting.2019.01.lsst.json.gz
Loading data file GE.accounting.2019.02.lsst.json.gz
Loading data file GE.accounting.2019.03.lsst.json.gz
###Markdown
Dataframe column renaming
In this section we modify the dataframe, if needed, to rename some columns so as to make their contents explicit and intuitive in the remainder of this notebook.
###Code
if False:
df.rename(inplace=True, columns={
# Modify column names as below
'old_name': 'new_name',
# Rename some fields to make explicit it is an extension to the raw data
'x_worker_might_sectohs06sec': 'x_worker_hs06sec_factor',
})
###Output
_____no_output_____
###Markdown
Dataframe contents overview
Each row in the dataframe contains information about a single job. This information is emitted by GridEngine, preprocessed, extended and reformatted in JSON to make it easier to analyse (see [here](https://gitlab.in2p3.fr/Batch-tools/GE-batch-system)). Below is the list of columns in the dataframe, shown in alphabetical order. The columns prefixed by `x_` (e.g. `x_cputime_hs06sec`) are extensions to the data provided by GridEngine.
###Code
num_rows, num_columns = df.shape
columns = sorted(list(df))
print_md(f'Dataframe size: **{num_rows} rows, {num_columns} columns:**\n\n `{", ".join(columns)}`')
###Output
_____no_output_____
###Markdown
The semantics of each column is described in the table below:

| Column | Semantics |
|:-------------------------------|:---------------------------------------------------------------------------------------------|
| **`account`** | account string associated to the job, specified at submission time via `qsub(1)`, see `accounting(5)` |
| **`ar_submission_time`** | advance reservation submission time, see `accounting(5)` |
| **`arid`** | advance reservation identifier, see `accounting(5)` |
| **`category`** | a string specifying the job category, see `accounting(5)` |
| **`cpu`** | number of **scaled CPU seconds (integrated over all the CPU slots allocated for this job)** this job was in executing state. Note that the scaling factor is specific for each worker node and can be retrieved via `qconf -se ` (field `usage_scaling`). The value of the scaling factor is available in column `x_worker_hs06sec_factor`. The number of slots allocated for this job is available in the column `slots` (see below). See `accounting(5)` |
| **`cwd`** | working directory the job ran in, see `accounting(5)` |
| **`department`** | the department which was assigned to the job, see `accounting(5)` |
| **`exit_status`** | exit status of the job script, see `accounting(5)` |
| **`failed`** | indicates the problem which occurred in case a job could not be started, see `accounting(5)` |
| **`granted_pe`** | parallel environment which was selected for that job, see `accounting(5)` |
| **`group`** | effective group id of the job owner when executing the job, see `accounting(5)` |
| **`hostname`** | name of the execution host, see `accounting(5)` |
| **`io`** | amount of data read and written by the job, in GBytes, see `accounting(5)` |
| **`ioops`** | number of io operations, see `accounting(5)` |
| **`iow`** | I/O wait time in seconds, see `accounting(5)` |
| **`job_class`** | if the job has been running in a job class, the name of the job class, see `accounting(5)` |
| **`job_name`** | job name (or `QLOGIN` if this is an interactive job), see `accounting(5)` |
| **`job_number`** | job identifier, see `accounting(5)` |
| **`maxpss_GB`** | maximum proportional set size in GBytes, see `accounting(5)` |
| **`maxrss_GB`** | maximum resident set size in GBytes, see `accounting(5)` |
| **`maxvmem_GB`** | the maximum vmem size in GBytes, see `accounting(5)` |
| **`mem`** | integral memory usage in GBytes CPU seconds, see `accounting(5)` |
| **`slots`** | number of (job) slots which were dispatched to the job by the scheduler, see `accounting(5)` |
| **`owner`** | user name who submitted the job, see `accounting(5)` |
| **`pe_taskid`** | if this identifier is set the task was part of a parallel job, see `accounting(5)` |
| **`priority`** | priority value assigned to the job, see `accounting(5)` |
| **`project`** | the project which was assigned to the job, see `accounting(5)` |
| **`qdel_info`** | if the job has been deleted via `qdel`, username@hostname, else `NONE`, see `accounting(5)` |
| **`queue_name`** | name of the cluster queue in which the job has run, see `qname` in `accounting(5)` |
| **`ru_*`** | standard UNIX rusage structure as described in `getrusage(2)`. Fields are: `ru_idrss`, `ru_inblock`, `ru_ismrss`, `ru_isrss`, `ru_ixrss`, `ru_majflt`, `ru_maxrss_GB`, `ru_minflt`, `ru_msgrcv`, `ru_msgsnd`, `ru_nicsw`, `ru_nsignals`, `ru_nswap`, `ru_nvcsw`, `ru_oublock`, `ru_stime`, `ru_utime`. See `accounting(5)` |
| **`ru_wallclock_sec`** | wallclock execution time of the job, in seconds, see `accounting(5)` |
| **`submit_cmd`** | the command line used for job submission, see `accounting(5)` |
| **`submit_host`** | submit host name, see `accounting(5)` |
| **`task_number`** | array job task index number, see `accounting(5)` |
| **`wallclock`** | wallclock time this job spent in running state. Unit is seconds. See `accounting(5)` |
| | |
| **EXTENDED COLUMN** | **SEMANTICS** |
| **`x_cputime_hs06sec`** | number of normalized seconds this job was in execution state, integrated over all the slots allocated for this job (see `slots`). The unit is HS06.sec |
| **`x_cputime_sec`** | CPU time (in seconds) consumed by this job, integrated over all the slots allocated to the job (see `slots`). **Warning**: this CPU time is not normalized. See also `x_cputime_hs06sec` |
| **`x_limits_s_cpu`** | ??? |
| **`x_limits_s_rss_gb`** | ??? |
| **`x_limits_s_vmem_gb`** | ??? |
| **`x_mem_max_pss_gb`** | maximum proportional set size in GBytes, see `accounting(5)` and `getrusage(2)` |
| **`x_mem_max_rss_gb`** | maximum resident set size in GBytes, see `accounting(5)` and `getrusage(2)` |
| **`x_mem_max_vmem_gb`** | the maximum vmem size in GBytes, see `accounting(5)` and `getrusage(2)` |
| **`x_mem_ru_maxrss_gb`** | ??? |
| **`x_scheduler`** | Identifier of the batch job scheduler (typically 'UGE') |
| **`x_worker_might_sectohs06sec`** | CPU scaling factor for the worker node this job executed on. This factor is used to compute the execution time of this job in normalized units, namely in HS06.sec. |
| **`x_worker_model_type`** | CPU model of the worker node where the job executed, e.g. 'Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz' |
| **`x_worker_physical_virtual`** | is the worker node a 'physical' or 'virtual' machine? |
| **`x_worker_processors_thread_count`** | number of (maybe virtual) CPU cores in this worker node |
| **`x_worker_product_name`** | worker node product name, as assigned by vendor e.g. 'PowerEdge C6220 II' |
###Code
df.head()
###Output
_____no_output_____
###Markdown
Dataframe extension
Extend the dataframe by computing convenient values, and make sure the columns referring to timestamps are interpreted by Pandas as timestamps:
###Code
# Make sure columns 'submission_time', 'start_time' and 'end_time' are timestamps
for col in ('submission_time', 'start_time', 'end_time'):
df[col] = df[col].astype('datetime64[s]')
# Compute a new column which contains the waiting time in queue for each job
# We prefix each new column name with '_' to visually distinguish it from the columns in the
# original data files
df['_waiting_time_sec'] = (df['start_time'] - df['submission_time']).apply(lambda d: d.total_seconds())
# Modify the CPU model to remove '(R)'
df['x_worker_model_type'] = df['x_worker_model_type'].str.replace('(R)', '', regex=False)
###Output
_____no_output_____
###Markdown
Filter dataframe contents
Only consider jobs by owners and groups of interest. At CC-IN2P3, jobs for the organized data processing activities of LSST and DESC are executed as users `lsstprod` and `descprod`, respectively. The accounts for members of LSST/DESC belong to the group `lsst`.
###Code
# Select the users and groups of interest by specifying them in a list. Use an empty list
# if no selection is needed
users = ('descprod',)
groups = ('lsst',)
if users:
df = df[df['owner'].isin(users)]
if groups:
df = df[df['group'].isin(groups)]
def user_filter_description(users, groups):
res = '(all users' if not users else f"(users: {','.join(users)}"
res += ', all groups)' if not groups else f", groups: {','.join(groups)})"
return res
print_md(f'Number of jobs for {user_filter_description(users, groups)}: **{df.shape[0]}**')
###Output
_____no_output_____
###Markdown
Select the time interval of jobs of interest. We only consider jobs submitted within that interval.
###Code
def filter_by_date(dframe, start=None, end=None):
"""Select the job records in the dataframe 'dframe' which submission time is between 'start' and 'end'
"""
if start:
dframe = dframe[dframe['submission_time'] >= start]
if end:
dframe = dframe[dframe['submission_time'] <= end]
return dframe
# Use a specific date such as
# datetime.datetime(2018, 10, 25)
# to specify the start and end dates of interest. Alternatively, use None for processing
# all the available job records.
start_date = None
end_date = None
df = filter_by_date(df, start_date, end_date)
print_md(f'Number of job records in the selected period: **{df.shape[0]}**')
###Output
_____no_output_____
###Markdown
Helper functions and variables
###Code
def format_pct_hours(v):
"""Format v to be displayed as a percentile, in hours. Argument v is expected to be seconds.
"""
return "<1" if v < 3600.0 else f'{int(v/3600.0)}'
def format_pct_min(v):
"""Format v to be displayed as a percentile, in minutes. Argument v is expected to be seconds.
"""
return "<1" if v < 60.0 else f'{int(v/60.0)}'
def style_figure(fig, x_font_style='bold', x_label_standoff=15, y_font_style='bold', y_label_standoff=15):
"""Style some attributes of a Bokeh figure
"""
# Figure size (in screen units)
fig.plot_width = 800
fig.plot_height = 600
# Autohide figure toolbar
fig.toolbar.autohide = True
# Figure background fill color and alpha
fig.background_fill_color = 'whitesmoke'
fig.background_fill_alpha = 0.8
# Axis properties
fig.xaxis.axis_label_text_font_style = x_font_style
fig.xaxis.axis_label_standoff = x_label_standoff
fig.yaxis.axis_label_text_font_style = y_font_style
fig.yaxis.axis_label_standoff = y_label_standoff
# Definition of the batch queues
batch_queues = {
# Interactive queues
'interactive': ('interactive', 'mc_interactive'),
# Queues which accept single-node jobs (either single CPU core or multi-CPU core)
'single-core': ('huge', 'long', 'longlasting'),
'multi-core': ('mc_highmem_huge', 'mc_highmem_long', 'mc_huge', 'mc_long', 'mc_longlasting'),
# Queues which accept multi-node jobs
'multi-node': ('pa_gpu_long', 'pa_long', 'pa_longlasting', 'pa_medium'),
'gpu': ('mc_gpu_interactive', 'mc_gpu_long', 'mc_gpu_longlasting', 'mc_gpu_medium'),
}
###Output
_____no_output_____
###Markdown
---
Batch system service
In this section we analyse the behavior of jobs and the batch system in general terms.
Activity overview
In this sub-section we present the periods of batch activity.
###Code
start_period, end_period = df['submission_time'].min(), df['submission_time'].max()
num_jobs = df.shape[0]
print_md(f'From {start_period.date()} to {end_period.date()} there were {num_jobs} jobs submitted by {user_filter_description(users, groups)}')
# Count submitted and terminated jobs per day
df['_submission_date'] = df['submission_time'].apply(lambda dt: dt.date())
submitted = df.groupby(['_submission_date']).size()
df['_end_date'] = df['end_time'].apply(lambda dt: dt.date())
terminated = df.groupby(['_end_date']).size()
fig = bkh.figure(
title = f'BATCH ACTIVITY OVERVIEW {user_filter_description(users, groups)}',
x_axis_label = 'date',
x_axis_type = 'datetime',
y_axis_label = 'job count'
)
style_figure(fig)
color = 'steelblue'
fig.circle(x=submitted.index, y=submitted.values, fill_color=color, size=8, legend="submitted")
fig.line(x=submitted.index, y=submitted.values, line_color=color, legend="submitted")
color = 'red'
fig.circle(x=terminated.index, y=terminated.values, fill_color=color, size=8, legend="terminated")
fig.line(x=terminated.index, y=terminated.values, line_color=color, legend="terminated")
fig.xaxis.formatter = bkhmodels.formatters.DatetimeTickFormatter(
days=["%Y-%m-%d"], months=["%Y-%m-%d"], years=["%Y-%m-%d"]
)
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('date', '@x{%F}'),
('jobs', '@y{0,000}'),
],
formatters = {
'x': 'datetime',
},
mode = 'mouse',
))
fig.legend.location = 'top_left'
fig.legend.click_policy = 'hide'
bkh.show(fig)
print_md(f"""*The figure above shows the number of jobs submitted and terminated per day.*""")
###Output
_____no_output_____
###Markdown
Execution queues
Show the activity per batch queue. Each queue has its own characteristics in terms of CPU time, memory size, etc. You can see the details of each queue [here](http://cctools.in2p3.fr/mrtguser/info_sge_queue.php).
###Code
queues = df.groupby(['qname']).size().sort_values(ascending=False)
labels = [n for n in queues.index]
values = queues.values  # get_values() was deprecated and later removed from pandas
fig = bkh.figure(
title = f'NUMBER OF JOBS PER EXECUTION QUEUE {user_filter_description(users, groups)}',
x_axis_label = 'queue',
y_axis_label = 'job count',
x_range = labels
)
style_figure(fig)
color, alpha = 'darkorange', 0.7
fig.vbar(x=labels, top=values, width=0.7, alpha=alpha, fill_color=color, line_color=color)
fig.xgrid.grid_line_color = None
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('queue', '@x'),
('jobs', '@top{0,000}'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the number of jobs executed per batch system queue. Only queues
where at least one job was put for execution are shown.*""")
###Output
_____no_output_____
###Markdown
Successful and failed jobs
Compute the percentages of jobs which succeeded and failed. A job is considered failed if its exit status is not zero.
###Code
def format_failed_pct(v):
return '{:.0%}'.format(v) if v > 0.0 else 'n/a'
def format_job_count(count):
return '{:,}'.format(count)
d = {}
queues = df.groupby(['qname']).size().sort_values(ascending=False)
for q in queues.index:
job_count = queues[q]
succeeded = df.loc[(df['qname'] == q) & (df['exit_status'] == 0)].shape[0]
d[q] = {
'job_count': job_count,
'succeeded': succeeded/job_count if job_count > 0 else 0,
'failed': (job_count - succeeded)/job_count if job_count > 0 else 0,
}
table = """
### Percentage of failed jobs (`exit_status != 0`)
| Queue | Job Count | Succeeded | Failed |
| ----: | --------: | --------: | -----: |
"""
for k, v in d.items():
table += f'| **{k}** | {format_job_count(v["job_count"])} | {format_failed_pct(v["succeeded"])} | {format_failed_pct(v["failed"])} |\n'
table += """
*The table above shows the fraction of succeeded and failed jobs per queue. A job is considered failed if its `exit_status` is not zero.*
"""
print_md(table)
###Output
_____no_output_____
###Markdown
Waiting time
Show the distribution of the time the jobs spent waiting in the queue before being put into execution on a compute node.
###Code
fig = bkh.figure(
title = f'DISTRIBUTION OF JOB WAITING TIME IN QUEUE {user_filter_description(users, groups)}',
x_axis_label = 'waiting time (min)',
y_axis_label = 'job count',
)
style_figure(fig)
color, alpha = 'firebrick', 0.7
hist, edges = np.histogram(df['_waiting_time_sec']/60, density=False, bins=30)
fig.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], alpha=alpha, fill_color=color, line_color=color)
fig.xaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('jobs', '@top{0,000}'),
('low', '@left{0.0} min'),
('high', '@right{0.0} min'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the distribution of the time the jobs waited in queue before
being put in execution by the job scheduler.*""")
d = {}
queues = df.groupby(['qname']).size().sort_values(ascending=False)
for q in queues.index:
selection = df.loc[df['qname'] == q]
d[q] = {
'job_count': selection.shape[0],
'pcts': np.percentile(selection['_waiting_time_sec'], [50, 80, 98, 100])
}
table = """
### Waiting time percentiles (*minutes*)
| Queue | Job Count | 50pct | 80pct | 98pct | 100pct |
| ----: | --------: | ----: | ----: | ----: | -----: |
"""
for k, v in d.items():
table += f'| **{k}** | {format_job_count(v["job_count"])} | {format_pct_min(v["pcts"][0])} | {format_pct_min(v["pcts"][1])} | {format_pct_min(v["pcts"][2])} | {format_pct_min(v["pcts"][3])} |\n'
table += """
*The table above shows some percentiles for waiting time for each execution queue (unit is minutes).*
"""
print_md(table)
###Output
_____no_output_____
###Markdown
Execution time
Show the distribution of the wallclock time the jobs spent in execution.
###Code
fig = bkh.figure(
title = f'DISTRIBUTION OF JOB EXECUTION WALLCLOCK TIME {user_filter_description(users, groups)}',
x_axis_label = 'execution wallclock time (hours)',
y_axis_label = 'job count'
)
style_figure(fig)
color, alpha = 'darkseagreen', 0.7
hist, edges = np.histogram(df['wallclock']/3600, density=False, bins=40)
fig.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], alpha=alpha, fill_color=color, line_color=color)
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('jobs', '@top{0,000}'),
('low', '@left{0.0} hours'),
('high', '@right{0.0} hours'),
],
mode = 'mouse',
))
bkh.show(fig)
d = {}
queues = df.groupby(['qname']).size().sort_values(ascending=False)
for q in queues.index:
selection = df.loc[df['qname'] == q]
d[q] = {
'job_count': selection.shape[0],
'pcts': np.percentile(selection['wallclock'], [50, 80, 98, 100])
}
table = """
### Execution time percentiles (*hours*)
| Queue | Job Count | 50pct | 80pct | 98pct | 100pct |
| ----: | --------: | ----: | ----: | ----: | -----: |
"""
for k, v in d.items():
table += f'| **{k}** | {format_job_count(v["job_count"])} | {format_pct_hours(v["pcts"][0])} | {format_pct_hours(v["pcts"][1])} | {format_pct_hours(v["pcts"][2])} | {format_pct_hours(v["pcts"][3])} | \n'
table += """
*The table above shows some percentiles for job execution wallclock time per execution queue (unit is hours)*
"""
print_md(table)
###Output
_____no_output_____
###Markdown
CPU efficiency
In this section we compute the CPU efficiency of the jobs. CPU efficiency is the ratio of CPU time to wallclock time, that is, the fraction of the wallclock time during which the job actually used the CPU.
###Code
# Compute the non-normalized CPU time for each job slot for jobs satisfying the following conditions:
# 1) The exit status of the jobs is zero
# 2) The job executed in a single node batch queue, ie. do not include jobs executed in the 'interactive' queue
# 3) The job spent time in execution at least equivalent to the 5th percentile of the jobs satisfying
# the two previous conditions
queues_to_include = batch_queues['single-core'] + batch_queues['multi-core']
job_set = df.loc[(df['exit_status'] == 0) & (df['qname'].isin(queues_to_include))]
wallclock_threshold = np.percentile(job_set['wallclock'], [5])[0]
job_set = job_set.loc[job_set['wallclock'] >= wallclock_threshold]
# Compute the CPU efficiency
cputime_per_slot = job_set['cpu'] / (job_set['x_worker_might_sectohs06sec'] * job_set['slots'])
cpueff = cputime_per_slot / job_set['wallclock']
hist, edges = np.histogram(cpueff, density=False, bins=40)
fig = bkh.figure(
title = f'DISTRIBUTION OF CPU EFFICIENCY {user_filter_description(users, groups)}',
x_axis_label = 'CPU efficiency (CPU time / wallclock time)',
y_axis_label = 'job count'
)
style_figure(fig)
color, alpha = 'saddlebrown', 0.7
fig.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], alpha=alpha, fill_color=color, line_color=color)
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('jobs', '@top{0,000}'),
('low', '@left{0%}'),
('high', '@right{0%}'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the distribution of the time the CPU efficiency of selected jobs. Note that there are some jobs which CPU efficiency is greater than 1.0: this is likely
due to some measurement error of the actual CPU time used by the job as collected by GridEngine or an under-estimation of the number of CPU cores actually used by the job*""")
###Output
_____no_output_____
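###Markdown
A complementary check — a small sketch using the `cpueff` series computed above — quantifies how many of the selected jobs fall in that anomalous region above 1.0:
###Code
# fraction of jobs whose measured CPU time exceeds slots * wallclock
over_unity = (cpueff > 1.0).mean()
print_md(f'Fraction of selected jobs with CPU efficiency above 1.0: **{over_unity:.1%}**')
###Output
_____no_output_____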
###Markdown
CPU cores per job
Show the number of job slots that the batch system allocated for each job. This number is a proxy for the number of CPU cores the job requested.
###Code
fig = bkh.figure(
title = f'DISTRIBUTION OF JOB SLOTS {user_filter_description(users, groups)}',
x_axis_label = 'number of slots',
y_axis_label = 'job count'
)
style_figure(fig)
color, alpha = 'lightseagreen', 0.7
hist, edges = np.histogram(df['slots'], density=False, bins=10)
fig.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], alpha=alpha, fill_color=color, line_color=color)
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('jobs', '@top{0,000}'),
('low', '@left{0.0} slots'),
('high', '@right{0.0} slots'),
],
mode = 'mouse',
))
bkh.show(fig)
###Output
_____no_output_____
###Markdown
Memory utilization
In this section we plot the distribution of RAM usage by the jobs.
###Code
# Only consider successful jobs executed in single- and multi-core queues
queues_to_include = batch_queues['single-core'] + batch_queues['multi-core']
successful_jobs = df.loc[(df['exit_status'] == 0) & (df['qname'].isin(queues_to_include))]
# Create the figure
fig = bkh.figure(
title = f'DISTRIBUTION OF MEMORY UTILIZATION PER BATCH QUEUE {user_filter_description(users, groups)}',
x_axis_label = 'gigabyte',
y_axis_label = 'job count'
)
style_figure(fig)
# Build a histogram per queue
colors = ('dodgerblue', 'darkmagenta', 'mediumseagreen', 'orange', 'crimson', 'sienna')
queues = successful_jobs.groupby(['qname']).size().sort_values(ascending=False)
for q, color in zip(queues.index, colors):
# Select the jobs in this queue which wallclock time was higher than the 5th percentile,
# to exclude jobs which may have terminated early
job_set = successful_jobs.loc[successful_jobs['qname'] == q]
wallclock_threshold = np.percentile(job_set['wallclock'], [5])[0]
memory = job_set.loc[job_set['wallclock'] >= wallclock_threshold, 'x_mem_max_rss_gb']
hist, edges = np.histogram(memory, density=False, bins=20)
fig.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], alpha= 0.5, fill_color=color, line_color=color, legend=q)
# Finalize the figure
fig.legend.click_policy = 'hide'
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('jobs', '@top{0,000}'),
('low', '@left{0.0} GB'),
('high', '@right{0.0} GB'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the distribution of RAM the jobs actually used in execution (in gigabyte), as collected by GridEngine. Click on the legend to hide / show information.*""")
def format_mem_pct(v):
return '<1' if v < 1.0 else f'{int(v)}'
queues = df.groupby(['qname']).size().sort_values(ascending=False)
d = {}
for q in queues.index:
selection = df[df['qname'] == q]
d[q] = {
'job_count': selection.shape[0],
'pcts': np.percentile(selection['x_mem_max_rss_gb'], [50, 80, 98, 100])
}
table = """
### Memory usage percentiles (gigabytes)
| Queue | Job Count | 50pct | 80pct | 98pct | 100pct |
| ----: | --------: | ----: | ----: | ----: | -----: |
"""
for k, v in d.items():
table += f'| {k} | {format_job_count(v["job_count"])} | {format_mem_pct(v["pcts"][0])} | {format_mem_pct(v["pcts"][1])} | {format_mem_pct(v["pcts"][2])} | {format_mem_pct(v["pcts"][3])} | \n'
table += """
*The table above shows some percentiles of memory consumption **per job** for each execution queue (unit is GB).*
"""
print_md(table)
###Output
_____no_output_____
###Markdown
Estimation of required number of CPU cores
In this section we estimate the number of CPU cores that would be needed to perform the same activity over the considered period. We consider the most powerful CPU available in the farm.
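In symbols, the estimate computed below (where $\epsilon$ denotes the assumed CPU efficiency, set to 0.6 in the code) is:

$$N_{\mathrm{cores}} = \frac{T_{\mathrm{HS06\,s}}}{f_{\max} \cdot \epsilon \cdot t_{\mathrm{elapsed}}}$$

where $T_{\mathrm{HS06\,s}}$ is the total normalized CPU time consumed by the selected jobs, $f_{\max}$ is the HS06 scaling factor of the most powerful CPU model, and $t_{\mathrm{elapsed}}$ is the length of the period in seconds.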
###Code
# Get the value of the maximum CPU time conversion factor
max_might = df['x_worker_might_sectohs06sec'].max()
# Get the model name of a CPU with the max conversion factor (i.e. the most powerful CPU model used for the set of jobs considered)
cpu_model = df.loc[df['x_worker_might_sectohs06sec'] == max_might, 'x_worker_model_type'].unique()[0]
# Get the total amount of normalized CPU time consumed by all the relevant jobs
total_consumed_cputime_hs06sec = df['x_cputime_hs06sec'].sum()
# Compute the number of elapsed (wallclock) seconds in the period
start_period = df['submission_time'].min()
end_period = df['end_time'].max()
elapsed_days = (end_period - start_period).days
elapsed_secs = (end_period - start_period).total_seconds()
# The number of CPU cores (of the most powerful model we have) to perform the same activity
# over the same period of time is computed by dividing the normalized CPU time consumed by
# all the relevant jobs by the scale factor for the most powerful CPU model and dividing
# this by the number of elapsed seconds in the period.
# We assume a CPU efficiency < 1.0 to be more realistic
estimated_cpu_efficiency = 0.6
required_cpu_cores = total_consumed_cputime_hs06sec / (max_might * estimated_cpu_efficiency) / elapsed_secs
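# Worked example (hypothetical numbers, not from this dataset): with 1.0e12
# HS06-seconds consumed, a conversion factor of 10 HS06 per CPU-second, 60%
# efficiency and a 90-day period (7.776e6 s), the estimate would be
# 1.0e12 / (10 * 0.6) / 7.776e6, i.e. roughly 21,400 cores.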
print_md(f'The number of cores of **{cpu_model}** required to deliver the same computing capacity over {elapsed_days} days is **{int(required_cpu_cores)}**')
###Output
_____no_output_____
###Markdown
DESC DC2 Run1.2p jobs In this section we focus on the jobs for processing the DESC DC2 Run1.2p. All the jobs of that exercise were executed by user `descprod` and the last iteration started by 2018-10-01. Select the jobs of interest
###Code
# Select the period of interest
start_period = datetime.datetime(2018, 10, 1)
end_period = datetime.datetime(2018, 12, 31)
# DESC jobs run as user 'descprod'
dc2_jobs = filter_by_date(df, start=start_period, end=end_period)
dc2_jobs = dc2_jobs[dc2_jobs['owner'] == 'descprod']
# Add a label and a period for annotating the plots
dc2_label = 'DESC DC2 Run1.2p'
dc2_period = f'{start_period.strftime("%Y-%m-%d")} to {end_period.strftime("%Y-%m-%d")}'
###Output
_____no_output_____
###Markdown
Label jobs according to their role in the pipelineFirst, we label the jobs according to their purpose, using the submission command, which contains some clues about the type of job. Note that some jobs carry the job type in the `account` field, but this was implemented late in the process, so not all jobs were labeled this way. We add a column named `_job_category` which contains the kind of job.
###Code
dc2_jobs['_job_category'] = ''
patterns = {
'ingest_flat': re.compile(r'.*/ingest_flat/.*'),
'ingest_bias': re.compile(r'.*/ingest_bias/.*'),
'ingest_dark': re.compile(r'.*/ingest_dark/.*'),
'ingestRefCat': re.compile(r'.*/ingestRefCat/.*'),
'ingestData': re.compile(r'.*/ingestData/.*'),
'setup_ingest': re.compile(r'.*/setup_ingest/.*'),
'run_ingestData': re.compile(r'.*/run_ingestData/.*'),
'setup_calib': re.compile(r'.*/setup_calib/.*'),
'run_bias': re.compile(r'.*/run_bias/.*'),
'run_flat': re.compile(r'.*/run_flat/.*'),
'run_dark': re.compile(r'.*/run_dark/.*'),
'setup_bias': re.compile(r'.*/setup_bias/.*'),
'setup_flat': re.compile(r'.*/setup_flat/.*'),
'setup_dark': re.compile(r'.*/setup_dark/.*'),
'finish_calib': re.compile(r'.*/finish_calib/.*'),
'setup_SingleFrameDriver': re.compile(r'.*/setup_SingleFrameDriver/.*'),
'run_SingleFrameDriver': re.compile(r'.*/run_SingleFrameDriver/.*'),
'setup_coadd': re.compile(r'.*/setup_coadd/.*'),
'run_coaddDriver': re.compile(r'.*/run_coaddDriver/.*'),
'setup_multiBand': re.compile(r'.*/setup_multiBand/.*'),
'run_multiBandDriver': re.compile(r'.*/run_multiBandDriver/.*'),
'makeSkyMap': re.compile(r'.*/makeSkyMap/.*'),
}
def get_job_type(submit_cmd):
for cat, rex in patterns.items():
if rex.search(submit_cmd):
return cat
return ''
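# Usage example (hypothetical path): get_job_type('/some/dir/run_bias/submit.sh') returns 'run_bias'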
dc2_jobs['_job_category'] = dc2_jobs['submit_cmd'].apply(lambda cmd: get_job_type(cmd))
unlabeled = dc2_jobs[dc2_jobs['_job_category'] == ''].shape[0]
print_md(f'There are {unlabeled} unlabeled jobs, out of {dc2_jobs.shape[0]} DC2 jobs')
###Output
_____no_output_____
###Markdown
Number of jobs per category
###Code
job_categories = dc2_jobs.groupby('_job_category').size().sort_values(ascending=False)
labels = list(job_categories.index)
values = job_categories.values
fig = bkh.figure(
title = f'{dc2_label} — NUMBER OF JOBS PER CATEGORY [{dc2_period}]',
x_axis_label = 'job category',
y_axis_label = 'job count',
y_axis_type = 'log',
x_range = labels
)
style_figure(fig)
color, alpha = 'orange', 0.8
fig.vbar(x=labels, top=values, bottom=0.1, width=0.6, alpha=alpha, fill_color=color, line_color=color)
fig.xgrid.grid_line_color = None
fig.xaxis.major_label_orientation = 0.9
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('job type', '@x'),
('jobs', '@top{0,000}'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the number of DC2 jobs per job category.
Hover the mouse over each vertical bar to get the exact number of jobs*""")
###Output
_____no_output_____
###Markdown
CPU utilization per job category
###Code
def get_most_powerful_CPU(df):
"""Returns a tuple with the conversion factor of the most powerful CPU and its CPU model
"""
# Get the value of the maximum CPU time conversion factor
max_might = df['x_worker_might_sectohs06sec'].max()
# Get the model name of the CPU with the max conversion factor
cpu_model = df.loc[df['x_worker_might_sectohs06sec'] == max_might, 'x_worker_model_type'].unique()[0]
return max_might, cpu_model
# Get the value of the maximum CPU time conversion factor and the CPU model
(max_might, cpu_model) = get_most_powerful_CPU(df)
# Get the aggregated CPU time per job category and normalize it to the most powerful CPU used
total_cpu = dc2_jobs['x_cputime_hs06sec'].sum() / (max_might * 3600)
cpu_time_per_category = dc2_jobs.groupby('_job_category')['x_cputime_hs06sec'].sum() / (max_might * 3600)
cpu_time_per_category = cpu_time_per_category.sort_values(ascending=False)
cpu_time_per_category = cpu_time_per_category[cpu_time_per_category >= 1.0]
data = bkhmodels.ColumnDataSource({
'categories': cpu_time_per_category.index.values,
'cpu_time': cpu_time_per_category.values,
'percent_total_cpu_time': [v / total_cpu for v in cpu_time_per_category.values],
})
fig = bkh.figure(
title = f'{dc2_label} — DISTRIBUTION OF CPU UTILIZATION [{dc2_period}]',
x_axis_label = 'job category',
y_axis_label = f'Hours on {cpu_model}',
y_axis_type = 'log',
x_range = labels
)
style_figure(fig)
color, alpha = 'dodgerblue', 0.8
fig.vbar(x='categories', top='cpu_time', source=data, width=0.6, bottom=1, alpha=alpha, fill_color=color, line_color=color)
fig.xgrid.grid_line_color = None
fig.xaxis.major_label_orientation = 0.9
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,0")
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('job type', '@categories'),
('CPU time', '@cpu_time{0,0.000}'),
('% of total CPU time', '@percent_total_cpu_time{%0.0}'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the aggregated CPU time used by DC2 jobs per job category.
Only the job categories for which the CPU consumption is at least 1 hour are shown.
Hover the mouse over each vertical bar to get the CPU hours consumed by each category*""")
###Output
_____no_output_____
###Markdown
CPU and memory utilization per job category
###Code
# Get the value of the maximum CPU time conversion factor and the CPU model
(max_might, cpu_model) = get_most_powerful_CPU(df)
# Only consider jobs with exit_status < 128 belonging to the selected categories. An exit status >= 128 means that the job
# was interrupted by GridEngine, perhaps because it has exceeded its CPU or memory allocation
categories = ('run_multiBandDriver', 'run_SingleFrameDriver', 'run_coaddDriver')
job_set = dc2_jobs.loc[(dc2_jobs['exit_status'] < 128) & (dc2_jobs['_job_category'].isin(categories))]
max_cpu_time = job_set['x_cputime_hs06sec'].max()/(max_might * 3600)
fig = bkh.figure(
title = f'{dc2_label} — CPU & RAM UTILIZATION PER JOB CATEGORY [{dc2_period}]',
x_axis_label = 'RAM (gigabyte)',
y_axis_label = f'Hours on {cpu_model}',
)
style_figure(fig)
colors = ('dodgerblue', 'darkmagenta', 'mediumseagreen', 'orange', 'crimson', 'sienna')
for cat, color in zip(categories, colors):
jobs_in_category = job_set.loc[job_set['_job_category'] == cat]
if jobs_in_category.shape[0] > 0:
data = bkhmodels.ColumnDataSource({
'memory': jobs_in_category['x_mem_max_rss_gb'],
'cpu': jobs_in_category['x_cputime_hs06sec']/(max_might * 3600),
'jobid': jobs_in_category['job_number'],
'queue': jobs_in_category['qname'],
'slots': jobs_in_category['slots'],
})
fig.scatter(x='memory', y='cpu', source=data, fill_color=color, fill_alpha=0.5, line_color=color, legend=cat)
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format="0,000")
fig.y_range.end = max_cpu_time*1.3 # make sure the legend does not hide data points
fig.legend.location = "top_left"
fig.legend.click_policy = 'hide'
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('JobID', '@jobid'),
('CPU', '@cpu{0,0.00}'),
('RAM', '@memory{0.0}'),
('Queue', '@queue'),
('Slots', '@slots'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the normalized CPU time used vs RAM used by DC2 jobs per job category.
Each dot represents one job. The CPU time is normalized to the concrete given CPU model and is aggregated over all the CPU slots requested by the job at submission time. Click on the legend to hide/show any category.
Hover the mouse over each dot to retrieve some details about that job.*""")
###Output
1066.7641987518355
###Markdown
Failed jobs per category
###Code
# Get the value of the maximum CPU time conversion factor and the CPU model
(max_might, cpu_model) = get_most_powerful_CPU(df)
# Only consider jobs with exit_status > 0
categories = ('run_multiBandDriver', 'run_SingleFrameDriver', 'run_coaddDriver')
failed_jobs = dc2_jobs.loc[(dc2_jobs['exit_status'] > 0) & (dc2_jobs['_job_category'].isin(categories))]
max_cpu_time = failed_jobs['x_cputime_hs06sec'].max()/(max_might * 3600)
fig = bkh.figure(
title = f'{dc2_label} — FAILED JOBS PER JOB CATEGORY [{dc2_period}]',
x_axis_label = 'RAM (gigabyte)',
y_axis_label = f'Hours on {cpu_model}',
)
style_figure(fig)
colors = ('dodgerblue', 'darkmagenta', 'mediumseagreen', 'orange', 'crimson', 'sienna')
for cat, color in zip(categories, colors):
jobs_in_category = failed_jobs.loc[failed_jobs['_job_category'] == cat]
if jobs_in_category.shape[0] > 0:
data = bkhmodels.ColumnDataSource({
'memory': jobs_in_category['x_mem_max_rss_gb'],
'cpu': jobs_in_category['x_cputime_hs06sec']/(max_might * 3600),
'jobid': jobs_in_category['job_number'],
'queue': jobs_in_category['qname'],
'slots': jobs_in_category['slots'],
'status': jobs_in_category['exit_status'],
})
fig.scatter(x='memory', y='cpu', source=data, fill_color=color, fill_alpha=0.5, line_color=color, legend=cat)
fig.yaxis.formatter = bkhmodels.formatters.NumeralTickFormatter(format='0,000')
fig.legend.location = 'top_left'
fig.legend.click_policy = 'hide'
fig.y_range.end = max_cpu_time*1.3 # make sure the legend does not hide data points
fig.add_tools(bkhmodels.HoverTool(
tooltips = [
('JobID', '@jobid'),
('CPU', '@cpu{0,0.00}'),
('RAM', '@memory{0.0}'),
('Queue', '@queue'),
('Slots', '@slots'),
('Status', '@status'),
],
mode = 'mouse',
))
bkh.show(fig)
print_md(f"""*The figure above shows the normalized CPU time used vs RAM used by a total of **{failed_jobs.shape[0]} failed DC2 jobs**, presented per job category.
Each dot represents one failed job, that is, one which exit status is not zero. The CPU time is normalized to the concrete given CPU model and is aggregated over all the CPU slots requested by the job at submission time. Click on the legend to hide/show any category.
Hover the mouse over each dot to retrieve some details about that job, including its exit status.*""")
###Output
_____no_output_____ |
jupyterhub/notebooks/install_dependencies.ipynb | ###Markdown
Starting Jupyter- To run Jupyter in Kubernetes, run `deploy.sh`.- To run Jupyter directly in Docker, run `start_jupyter_docker.sh`. How to use this Notebook1. Click *Kernel* -> *Restart Kernel and Run All Cells*. Install dependenciesThis only needs to be done once after you have created a new Jupyter container. Install OpenCV and GRPC
###Code
!conda install -y opencv
!pip install grpcio
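# Optional sanity check (sketch; assumes the two installs above succeeded):
# import cv2, grpc
# print(cv2.__version__, grpc.__version__)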
###Output
_____no_output_____
###Markdown
Install Pravega GRPC Gateway Client
###Code
!pip uninstall -y pravega-grpc-gateway-client ; \
rm -rf /tmp/pravega-grpc-gateway ; \
git clone https://github.com/pravega/pravega-grpc-gateway /tmp/pravega-grpc-gateway && \
cd /tmp/pravega-grpc-gateway && \
git checkout master && \
pip install pravega-grpc-gateway/src/main/python
###Output
_____no_output_____ |
Hubspot/Hubspot_update_followers_from_linkedin.ipynb | ###Markdown
Hubspot - Update followers from linkedin **Tags:** hubspot crm sales contact naas_drivers linkedin network scheduler naas Input Import library
###Code
from naas_drivers import hubspot, linkedin
import naas
import pandas as pd
###Output
_____no_output_____
###Markdown
Enter Hubspot api key
###Code
auth_token = "YOUR_HUBSPOT_API_KEY"
###Output
_____no_output_____
###Markdown
Get your cookiesHow to get your cookies?
###Code
LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2
JSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585
###Output
_____no_output_____
###Markdown
Connect to Hubspot
###Code
hs = hubspot.connect(auth_token)
###Output
_____no_output_____
###Markdown
Schedule your notebook everyday
###Code
naas.scheduler.add(cron="15 6 * * *")
###Output
_____no_output_____
###Markdown
Get all contacts in Hubspot
###Code
properties_list = [
"hs_object_id",
"firstname",
"lastname",
"linkedinbio",
"linkedinconnections",
]
hubspot_contacts = hs.contacts.get_all(properties_list).fillna("Not Defined")
hubspot_contacts
###Output
_____no_output_____
###Markdown
Model Filter to get linkedinconnections = "Not Defined" and "linkedinbio" = defined
###Code
df_to_update = hubspot_contacts.copy()
# Filter on "Not defined"
df_to_update = df_to_update[(df_to_update.linkedinbio != "Not Defined") &
(df_to_update.linkedinconnections == "Not Defined")]
# Limit to last 50 contacts
df_to_update = df_to_update.sort_values(by="createdate", ascending=False)[:50].reset_index(drop=True)
df_to_update
###Output
_____no_output_____
###Markdown
Get followers from Linkedin
###Code
for _, row in df_to_update.iterrows():
linkedinbio = row.linkedinbio
# Get followers
df = linkedin.connect(LI_AT, JSESSIONID).profile.get_network(linkedinbio)
linkedinconnections = df.loc[0, "FOLLOWERS_COUNT"]
# Store the followers count on the corresponding row
df_to_update.loc[_, "linkedinconnections"] = linkedinconnections
df_to_update
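# Note (sketch, not part of the original notebook): linkedin.connect() is called
# once per row above; if the returned driver object is reusable, connecting once
# before the loop would avoid re-authenticating on every iteration, e.g.:
# lk = linkedin.connect(LI_AT, JSESSIONID)
# df = lk.profile.get_network(linkedinbio)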
###Output
_____no_output_____
###Markdown
Output Update followers in Hubspot
###Code
for _, row in df_to_update.iterrows():
# Init data
data = {}
# Get data
hs_object_id = row.hs_object_id
linkedinconnections = row.linkedinconnections
# Update LinkedIn followers count
if linkedinconnections is not None:
data = {"properties": {"linkedinconnections": linkedinconnections}}
hs.contacts.patch(hs_object_id, data)
###Output
_____no_output_____ |
TOPAS_Sequential_refinements_from_Jupyter.ipynb | ###Markdown
Run Topas from Jupyter* Version 1.7* Last update: 2017/08/17* Created by Anders Bank Blichfeld and Susanne Linn Skjærvø. Modified by Ola Gjønnes Grendal (version 1.7) for doing backwards sequential refinements.--------------------------------------------------------------------------------------------------------------------------------1. Save this notebook anywhere2. Make a folder in this directory with the datafile extension as a name (i.e. xye, dat, raw etc.) and put all datafiles in it. 3. Put the .inp file in the directory above4. Replace the 'directory+filename' in the .inp file with "IN_FILE.xye". Add "DATASET" wherever you want only the filename to be rendered (i.e. same as for batch refinements with 'topas_batch_initial.bat'). 5. Make sure all directories and files are listed correctly below. 6. Run the script.- To be able to generate plots of the refined patterns, a macro called "macro Out_X_Yobs_Ycalc_Difference(file)" must be added to 'C:/Topas5/local.inc', and then added as one of the output parameters in the .inp file. Ask Susanne or Ola for details.- The current script should work for all files with name style: as_long_as_you_want_12345.xye- Note that Python 2.x and 3.x differ in how the Topas output is rendered: version 2 outputs ASCII strings while version 3 gives you bytes. The current script should work for both, but if you are having problems, write 'line' instead of 'line.decode('ascii')'.- DIRECTORY MUST BE WITHOUT SPACES
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import time as t
import pylab as pl
from IPython import display
import os, subprocess, re, shutil
presidents = ["Washington", "Adams", "Jefferson", "Madison", "Monroe", "Adams", "Jackson"]
# Start from the second entry so the index never runs past the end of the list
for num, name in enumerate(presidents[1:], start=1):
print(name)
###Output
Adams
Jefferson
Madison
Monroe
Adams
Jackson
###Markdown
Directories and files User defined
###Code
Inp_file='Test_comp_batch.inp' # name of input file
ext = 'xye' # datafile extension
directory1='C:/Documents/PhD/Resultater/In-situ/5_SNBL_feb_2018/Test_backwards_refinement/' # working directory
#directory1=os.getcwd()+'' # if you want the directory of the notebook to define working space (does not work for UNC paths)
directory3=directory1+'INP_OUT/' # folder for output files
out_file='Backward_refinement.txt' #Set name of output file with all refined values
directory2=directory1+'%s/'%(ext) # folder of datafiles
if not os.path.exists(os.path.dirname(directory3)):
os.makedirs(os.path.dirname(directory3)) # make directory for output
###Output
_____no_output_____
###Markdown
Process parameters User defined
###Code
plotfig = 0 # 1: graphical output (makes process slower), 0: text output
printall = 0 # 1: print full refinement output, 0: print only most essential
pause = 0 # 1: if you want extra time to look at output of each refinement (problems with stopping loop: click 'restart & clear output' in kernel dropdown menu)
start = 0 # Define the filenumbers you want to refine here
end = 913
keywords=['DATASET', 'IN_FILE'] # keywords you want to replace in your .inp file, make sure pattern_str and replace_str has equally many arguments
all_files = directory2
all_data=[]
for filename in os.listdir(all_files):
if filename.endswith(".%s" % ext):
all_data += [all_files + filename]
all_data.sort(reverse=True) #Set False for starting with first file, True for starting with last file
print("Number of datafiles files found: %s"%len(all_data))
all_data=all_data[start:end]
print("Refining this many files: %s"%len(all_data))
###Output
Number of datafiles files found: 913
Refining this many files: 913
###Markdown
Run Topas
###Code
counter=0
no, Rwp, time = [], [], [] # metrics collected below when plotfig or printall is 1
for count, files in enumerate(all_data):
if counter != 0:
Base1=os.path.splitext(os.path.basename(files))[0] # filename of the current datafile, without the extension
Base2=os.path.splitext(os.path.basename(all_data[count-1]))[0] # filename of the previous datafile
temp_inp=directory3+Base1+os.path.splitext(directory1+Inp_file)[1] # temporary variable making individual inputfiles for every datafile
shutil.copy(directory3+Base2+'.out',temp_inp) # seeds this refinement with the previous refinement's .out file, renamed after the current datafile
print(' '+Base1+'.inp')
pattern_str=[Base2]
replace_str=[Base1] # this is what you want to replace the keywords with (make sure patter_str and replace_str has the same amounts of arguments)
for i,word in enumerate(pattern_str): # this loop replaces all the keywords you have set
with open(temp_inp) as f:
s = f.read()
with open(temp_inp,'w') as f:
s = s.replace(pattern_str[i],replace_str[i])
f.write(s)
cmd = 'C:/TOPAS5/tc %s' %(temp_inp) # setting inputparameter for command prompt
process = subprocess.Popen(cmd,stdout=subprocess.PIPE) # launching Topas from the command prompt
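# The path in 'cmd' must not contain spaces because it is passed as a single
# string (hence the warning above); a sketch that tolerates spaces would pass
# an argument list instead:
# process = subprocess.Popen(['C:/TOPAS5/tc', temp_inp], stdout=subprocess.PIPE)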
line_save=[]
for line in process.stdout: # retrieves certain parts of the refinement output
if re.search(b'seconds',line):
if plotfig != 1 and printall !=1:
print(' '+' '+line_save.decode('ascii'))
else:
no.append(int(os.path.splitext(os.path.basename(files))[0][35:])) #First bracket: 0=filename, 1=fileextension. Second bracket: first number removes start of filename, second number removes end of filename
Rwp.append(float(line_save.decode('ascii')[24:33]))
time.append(float(line_save.decode('ascii')[12:17]))
line_save=line
if counter == 0:
Base=os.path.splitext(os.path.basename(files))[0] # filenames of the datafiles without the extension
temp_inp=directory3+Base+os.path.splitext(directory1+Inp_file)[1] # temporary variable making individual inputfiles for every datafile
shutil.copy(directory1+Inp_file,temp_inp) # copies the .inpfile in directory1 to directory2 and gives it the same name as the datafile
print(' '+Base+'.inp')
pattern_str=keywords
replace_str=[Base, directory2+Base] # this is what you want to replace the keywords with (make sure patter_str and replace_str has the same amounts of arguments)
for i,word in enumerate(pattern_str): # this loop replaces all the keywords you have set
with open(temp_inp) as f:
s = f.read()
with open(temp_inp,'w') as f:
s = s.replace(pattern_str[i],replace_str[i])
f.write(s)
cmd = 'C:/TOPAS5/tc %s' %(temp_inp) # setting inputparameter for command prompt
process = subprocess.Popen(cmd,stdout=subprocess.PIPE) # launching Topas from the command prompt
line_save=[]
for line in process.stdout: # retrieves certain parts of the refinement output
if re.search(b'seconds',line):
if plotfig != 1 and printall !=1:
print(' '+' '+line_save.decode('ascii'))
else:
no.append(int(os.path.splitext(os.path.basename(files))[0][35:])) #First bracket: 0=filename, 1=fileextension. Second bracket: first number removes start of filename, second number removes end of filename
Rwp.append(float(line_save.decode('ascii')[24:33]))
time.append(float(line_save.decode('ascii')[12:17]))
line_save=line
counter=counter+1
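# Optional (sketch, not part of the original script): persist the collected
# metrics to 'out_file', which is defined above but never written. Assumes
# 'no', 'Rwp' and 'time' were filled, i.e. plotfig or printall is 1.
# with open(directory1 + out_file, 'w') as f:
# f.write('file_no\tRwp\ttime_s\n')
# for n, r, s in zip(no, Rwp, time):
# f.write('%d\t%.3f\t%.2f\n' % (n, r, s))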
from IPython.display import Audio, display
def allDone():
display(Audio('C:/Documents/PhD/Diverse/Alesis-Sanctuary-QCard-Loose-Bell-C5.wav', autoplay=True))
allDone()
print('FINISHED!')
###Output
SBN_40_T225_P200_Nbaged_heat_0001p_00912.inp
4 Time 0.08 Rwp 2.160 -0.000 MC 0.45 1
SBN_40_T225_P200_Nbaged_heat_0001p_00911.inp
2 Time 0.07 Rwp 2.163 0.000 MC 100.00 4
SBN_40_T225_P200_Nbaged_heat_0001p_00910.inp
2 Time 0.07 Rwp 2.162 0.000 MC 100.00 4
SBN_40_T225_P200_Nbaged_heat_0001p_00909.inp
2 Time 0.09 Rwp 2.164 0.000 MC 100.00 4
SBN_40_T225_P200_Nbaged_heat_0001p_00908.inp
3 Time 0.09 Rwp 2.165 0.000 MC 100.00 4
SBN_40_T225_P200_Nbaged_heat_0001p_00907.inp
2 Time 0.09 Rwp 2.163 0.000 MC 100.00 4
SBN_40_T225_P200_Nbaged_heat_0001p_00906.inp
3 Time 0.10 Rwp 2.161 0.000 MC 100.00 4
SBN_40_T225_P200_Nbaged_heat_0001p_00905.inp
3 Time 0.09 Rwp 2.158 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00904.inp
3 Time 0.07 Rwp 2.160 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00903.inp
3 Time 0.09 Rwp 2.161 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00902.inp
3 Time 0.10 Rwp 2.157 0.000 MC 88.11 3
SBN_40_T225_P200_Nbaged_heat_0001p_00901.inp
2 Time 0.08 Rwp 2.164 0.000 MC 100.00 4
[... the two-line entries continue in the same format for the remaining 384 datafiles (00900 down to 00517), with Rwp decreasing gradually from about 2.16 to 1.85 ...]
SBN_40_T225_P200_Nbaged_heat_0001p_00516.inp
3 Time 0.10 Rwp 1.841 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00515.inp
3 Time 0.10 Rwp 1.839 -0.001 MC 0.35 2
SBN_40_T225_P200_Nbaged_heat_0001p_00514.inp
5 Time 0.09 Rwp 1.850 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00513.inp
3 Time 0.09 Rwp 1.837 -0.001 MC 0.50 2
SBN_40_T225_P200_Nbaged_heat_0001p_00512.inp
3 Time 0.10 Rwp 1.836 -0.000 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00511.inp
4 Time 0.12 Rwp 1.844 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00510.inp
3 Time 0.08 Rwp 1.833 -0.000 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00509.inp
3 Time 0.10 Rwp 1.833 -0.000 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00508.inp
3 Time 0.10 Rwp 1.820 -0.000 MC 0.51 2
SBN_40_T225_P200_Nbaged_heat_0001p_00507.inp
3 Time 0.09 Rwp 1.821 -0.000 MC 0.51 2
SBN_40_T225_P200_Nbaged_heat_0001p_00506.inp
3 Time 0.11 Rwp 1.818 -0.000 MC 0.88 2
SBN_40_T225_P200_Nbaged_heat_0001p_00505.inp
3 Time 0.08 Rwp 1.818 -0.001 MC 0.53 2
SBN_40_T225_P200_Nbaged_heat_0001p_00504.inp
3 Time 0.10 Rwp 1.811 -0.001 MC 0.52 2
SBN_40_T225_P200_Nbaged_heat_0001p_00503.inp
3 Time 0.08 Rwp 1.810 -0.001 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00502.inp
3 Time 0.10 Rwp 1.802 -0.000 MC 0.66 2
SBN_40_T225_P200_Nbaged_heat_0001p_00501.inp
3 Time 0.08 Rwp 1.805 -0.001 MC 0.51 2
SBN_40_T225_P200_Nbaged_heat_0001p_00500.inp
3 Time 0.09 Rwp 1.813 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00499.inp
3 Time 0.08 Rwp 1.802 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00498.inp
3 Time 0.10 Rwp 1.800 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00497.inp
3 Time 0.11 Rwp 1.796 -0.000 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00496.inp
3 Time 0.09 Rwp 1.788 -0.000 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00495.inp
3 Time 0.09 Rwp 1.794 -0.000 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00494.inp
3 Time 0.12 Rwp 1.785 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00493.inp
3 Time 0.10 Rwp 1.782 -0.000 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00492.inp
3 Time 0.08 Rwp 1.781 -0.001 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00491.inp
5 Time 0.09 Rwp 1.782 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00490.inp
4 Time 0.10 Rwp 1.768 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00489.inp
3 Time 0.16 Rwp 1.764 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00488.inp
3 Time 0.10 Rwp 1.761 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00487.inp
5 Time 0.12 Rwp 1.761 -0.001 MC 0.41 2
SBN_40_T225_P200_Nbaged_heat_0001p_00486.inp
4 Time 0.18 Rwp 1.759 -0.000 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00485.inp
3 Time 0.10 Rwp 1.754 -0.000 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00484.inp
3 Time 0.09 Rwp 1.747 -0.001 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00483.inp
3 Time 0.09 Rwp 1.748 -0.000 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00482.inp
4 Time 0.10 Rwp 1.747 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00481.inp
3 Time 0.09 Rwp 1.744 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00480.inp
4 Time 0.09 Rwp 1.739 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00479.inp
4 Time 0.08 Rwp 1.741 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00478.inp
3 Time 0.10 Rwp 1.745 -0.001 MC 0.52 2
SBN_40_T225_P200_Nbaged_heat_0001p_00477.inp
3 Time 0.10 Rwp 1.736 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00476.inp
4 Time 0.08 Rwp 1.733 -0.001 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00475.inp
4 Time 0.11 Rwp 1.735 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00474.inp
3 Time 0.09 Rwp 1.721 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00473.inp
4 Time 0.10 Rwp 1.723 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00472.inp
3 Time 0.08 Rwp 1.715 -0.000 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00471.inp
4 Time 0.08 Rwp 1.714 -0.000 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00470.inp
3 Time 0.10 Rwp 1.703 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00469.inp
3 Time 0.08 Rwp 1.711 -0.001 MC 0.50 2
SBN_40_T225_P200_Nbaged_heat_0001p_00468.inp
4 Time 0.11 Rwp 1.707 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00467.inp
4 Time 0.11 Rwp 1.697 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00466.inp
3 Time 0.10 Rwp 1.701 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00465.inp
4 Time 0.10 Rwp 1.717 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00464.inp
4 Time 0.10 Rwp 1.727 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00463.inp
3 Time 0.08 Rwp 1.736 -0.001 MC 0.35 2
SBN_40_T225_P200_Nbaged_heat_0001p_00462.inp
4 Time 0.10 Rwp 1.724 -0.001 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00461.inp
3 Time 0.12 Rwp 1.701 -0.001 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00460.inp
5 Time 0.15 Rwp 1.696 -0.000 MC 0.40 2
SBN_40_T225_P200_Nbaged_heat_0001p_00459.inp
4 Time 0.11 Rwp 1.683 -0.000 MC 0.60 2
SBN_40_T225_P200_Nbaged_heat_0001p_00458.inp
3 Time 0.11 Rwp 1.671 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00457.inp
3 Time 0.14 Rwp 1.662 -0.001 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00456.inp
3 Time 0.14 Rwp 1.663 -0.001 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00455.inp
3 Time 0.12 Rwp 1.654 -0.001 MC 0.64 2
SBN_40_T225_P200_Nbaged_heat_0001p_00454.inp
3 Time 0.08 Rwp 1.649 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00453.inp
4 Time 0.12 Rwp 1.642 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00452.inp
3 Time 0.11 Rwp 1.645 -0.001 MC 0.52 2
SBN_40_T225_P200_Nbaged_heat_0001p_00451.inp
3 Time 0.12 Rwp 1.634 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00450.inp
3 Time 0.13 Rwp 1.635 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00449.inp
4 Time 0.11 Rwp 1.629 -0.001 MC 0.35 2
SBN_40_T225_P200_Nbaged_heat_0001p_00448.inp
4 Time 0.10 Rwp 1.623 -0.000 MC 1.19 3
SBN_40_T225_P200_Nbaged_heat_0001p_00447.inp
5 Time 0.12 Rwp 1.636 -0.001 MC 0.40 2
SBN_40_T225_P200_Nbaged_heat_0001p_00446.inp
4 Time 0.11 Rwp 1.607 -0.000 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00445.inp
3 Time 0.10 Rwp 1.612 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00444.inp
3 Time 0.10 Rwp 1.605 -0.001 MC 0.52 2
SBN_40_T225_P200_Nbaged_heat_0001p_00443.inp
5 Time 0.12 Rwp 1.615 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00442.inp
4 Time 0.14 Rwp 1.600 -0.000 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00441.inp
4 Time 0.10 Rwp 1.590 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00440.inp
4 Time 0.10 Rwp 1.589 -0.000 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00439.inp
3 Time 0.10 Rwp 1.584 -0.001 MC 0.55 2
SBN_40_T225_P200_Nbaged_heat_0001p_00438.inp
3 Time 0.19 Rwp 1.572 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00437.inp
3 Time 0.08 Rwp 1.579 -0.001 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00436.inp
3 Time 0.10 Rwp 1.577 -0.001 MC 0.52 2
SBN_40_T225_P200_Nbaged_heat_0001p_00435.inp
4 Time 0.12 Rwp 1.562 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00434.inp
6 Time 0.13 Rwp 1.563 -0.000 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00433.inp
4 Time 0.11 Rwp 1.554 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00432.inp
4 Time 0.11 Rwp 1.553 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00431.inp
4 Time 0.09 Rwp 1.561 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00430.inp
3 Time 0.09 Rwp 1.539 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00429.inp
4 Time 0.09 Rwp 1.531 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00428.inp
4 Time 0.09 Rwp 1.532 -0.000 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00427.inp
5 Time 0.11 Rwp 1.533 -0.001 MC 0.40 2
SBN_40_T225_P200_Nbaged_heat_0001p_00426.inp
3 Time 0.10 Rwp 1.537 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00425.inp
4 Time 0.11 Rwp 1.525 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00424.inp
4 Time 0.11 Rwp 1.516 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00423.inp
5 Time 0.10 Rwp 1.513 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00422.inp
3 Time 0.08 Rwp 1.509 -0.001 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00421.inp
3 Time 0.08 Rwp 1.505 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00420.inp
4 Time 0.09 Rwp 1.497 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00419.inp
4 Time 0.10 Rwp 1.489 -0.001 MC 0.37 2
SBN_40_T225_P200_Nbaged_heat_0001p_00418.inp
3 Time 0.12 Rwp 1.485 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00417.inp
4 Time 0.12 Rwp 1.482 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00416.inp
3 Time 0.09 Rwp 1.476 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00415.inp
4 Time 0.13 Rwp 1.474 -0.000 MC 0.35 2
SBN_40_T225_P200_Nbaged_heat_0001p_00414.inp
3 Time 0.10 Rwp 1.464 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00413.inp
4 Time 0.10 Rwp 1.454 -0.001 MC 0.41 2
SBN_40_T225_P200_Nbaged_heat_0001p_00412.inp
3 Time 0.17 Rwp 1.458 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00411.inp
4 Time 0.15 Rwp 1.447 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00410.inp
3 Time 0.15 Rwp 1.440 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00409.inp
3 Time 0.11 Rwp 1.446 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00408.inp
4 Time 0.17 Rwp 1.437 -0.000 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00407.inp
3 Time 0.28 Rwp 1.431 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00406.inp
4 Time 0.33 Rwp 1.437 -0.000 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00405.inp
5 Time 0.14 Rwp 1.421 -0.001 MC 0.41 2
SBN_40_T225_P200_Nbaged_heat_0001p_00404.inp
4 Time 0.14 Rwp 1.412 -0.001 MC 0.37 2
SBN_40_T225_P200_Nbaged_heat_0001p_00403.inp
3 Time 0.12 Rwp 1.413 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00402.inp
4 Time 0.11 Rwp 1.405 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00401.inp
4 Time 0.22 Rwp 1.401 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00400.inp
3 Time 0.09 Rwp 1.390 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00399.inp
3 Time 0.08 Rwp 1.387 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00398.inp
4 Time 0.12 Rwp 1.391 -0.001 MC 0.36 2
SBN_40_T225_P200_Nbaged_heat_0001p_00397.inp
4 Time 0.09 Rwp 1.388 -0.001 MC 0.41 2
SBN_40_T225_P200_Nbaged_heat_0001p_00396.inp
4 Time 0.10 Rwp 1.388 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00395.inp
5 Time 0.10 Rwp 1.391 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00394.inp
6 Time 0.10 Rwp 1.368 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00393.inp
4 Time 0.10 Rwp 1.361 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00392.inp
5 Time 0.21 Rwp 1.360 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00391.inp
4 Time 0.10 Rwp 1.347 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00390.inp
4 Time 0.11 Rwp 1.336 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00389.inp
4 Time 0.13 Rwp 1.328 -0.000 MC 0.57 2
SBN_40_T225_P200_Nbaged_heat_0001p_00388.inp
3 Time 0.12 Rwp 1.330 -0.001 MC 0.38 2
SBN_40_T225_P200_Nbaged_heat_0001p_00387.inp
3 Time 0.10 Rwp 1.323 -0.001 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00386.inp
4 Time 0.14 Rwp 1.319 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00385.inp
5 Time 0.13 Rwp 1.335 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00384.inp
4 Time 0.10 Rwp 1.316 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00383.inp
4 Time 0.18 Rwp 1.317 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00382.inp
5 Time 0.13 Rwp 1.313 -0.001 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00381.inp
4 Time 0.10 Rwp 1.308 -0.001 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00380.inp
3 Time 0.11 Rwp 1.305 -0.001 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00379.inp
4 Time 0.10 Rwp 1.296 -0.000 MC 0.41 2
SBN_40_T225_P200_Nbaged_heat_0001p_00378.inp
3 Time 0.08 Rwp 1.295 -0.001 MC 0.50 2
SBN_40_T225_P200_Nbaged_heat_0001p_00377.inp
3 Time 0.08 Rwp 1.289 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00376.inp
4 Time 0.11 Rwp 1.290 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00375.inp
3 Time 0.08 Rwp 1.286 -0.001 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00374.inp
4 Time 0.11 Rwp 1.287 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00373.inp
4 Time 0.09 Rwp 1.271 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00372.inp
6 Time 0.12 Rwp 1.273 -0.001 MC 0.37 2
SBN_40_T225_P200_Nbaged_heat_0001p_00371.inp
4 Time 0.09 Rwp 1.252 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00370.inp
4 Time 0.09 Rwp 1.256 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00369.inp
4 Time 0.12 Rwp 1.253 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00368.inp
4 Time 0.09 Rwp 1.245 -0.001 MC 0.41 2
SBN_40_T225_P200_Nbaged_heat_0001p_00367.inp
5 Time 0.10 Rwp 1.235 -0.001 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00366.inp
4 Time 0.09 Rwp 1.230 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00365.inp
5 Time 0.10 Rwp 1.223 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00364.inp
6 Time 0.13 Rwp 1.212 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00363.inp
5 Time 0.11 Rwp 1.204 -0.000 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00362.inp
4 Time 0.10 Rwp 1.198 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00361.inp
4 Time 0.10 Rwp 1.200 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00360.inp
4 Time 0.11 Rwp 1.185 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00359.inp
4 Time 0.10 Rwp 1.192 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00358.inp
4 Time 0.11 Rwp 1.180 -0.001 MC 0.41 2
SBN_40_T225_P200_Nbaged_heat_0001p_00357.inp
3 Time 0.09 Rwp 1.177 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00356.inp
4 Time 0.09 Rwp 1.169 -0.000 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00355.inp
4 Time 0.09 Rwp 1.164 -0.000 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00354.inp
4 Time 0.09 Rwp 1.167 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00353.inp
5 Time 0.11 Rwp 1.162 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00352.inp
5 Time 0.10 Rwp 1.155 -0.001 MC 0.35 2
SBN_40_T225_P200_Nbaged_heat_0001p_00351.inp
4 Time 0.09 Rwp 1.149 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00350.inp
4 Time 0.08 Rwp 1.145 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00349.inp
4 Time 0.11 Rwp 1.141 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00348.inp
4 Time 0.11 Rwp 1.140 -0.001 MC 0.44 2
SBN_40_T225_P200_Nbaged_heat_0001p_00347.inp
5 Time 0.10 Rwp 1.143 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00346.inp
4 Time 0.10 Rwp 1.130 -0.000 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00345.inp
3 Time 0.10 Rwp 1.128 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00344.inp
4 Time 0.11 Rwp 1.127 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00343.inp
4 Time 0.11 Rwp 1.117 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00342.inp
4 Time 0.10 Rwp 1.119 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00341.inp
4 Time 0.11 Rwp 1.122 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00340.inp
4 Time 0.09 Rwp 1.109 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00339.inp
3 Time 0.09 Rwp 1.098 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00338.inp
3 Time 0.10 Rwp 1.093 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00337.inp
4 Time 0.10 Rwp 1.083 -0.001 MC 0.45 2
SBN_40_T225_P200_Nbaged_heat_0001p_00336.inp
3 Time 0.10 Rwp 1.078 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00335.inp
4 Time 0.09 Rwp 1.065 -0.000 MC 0.42 2
SBN_40_T225_P200_Nbaged_heat_0001p_00334.inp
3 Time 0.08 Rwp 1.067 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00333.inp
3 Time 0.09 Rwp 1.065 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00332.inp
5 Time 0.11 Rwp 1.064 -0.001 MC 0.39 2
SBN_40_T225_P200_Nbaged_heat_0001p_00331.inp
4 Time 0.10 Rwp 1.069 -0.001 MC 0.43 2
SBN_40_T225_P200_Nbaged_heat_0001p_00330.inp
3 Time 0.09 Rwp 1.056 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00329.inp
4 Time 0.10 Rwp 1.057 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00328.inp
3 Time 0.08 Rwp 1.046 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00327.inp
3 Time 0.10 Rwp 1.054 -0.001 MC 0.48 2
SBN_40_T225_P200_Nbaged_heat_0001p_00326.inp
3 Time 0.10 Rwp 1.042 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00325.inp
4 Time 0.14 Rwp 1.044 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00324.inp
4 Time 0.10 Rwp 1.039 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00323.inp
4 Time 0.09 Rwp 1.031 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00322.inp
3 Time 0.10 Rwp 1.025 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00321.inp
3 Time 0.09 Rwp 1.025 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00320.inp
4 Time 0.09 Rwp 1.022 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00319.inp
3 Time 0.08 Rwp 1.011 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00318.inp
3 Time 0.10 Rwp 1.016 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00317.inp
4 Time 0.09 Rwp 1.016 -0.001 MC 0.35 2
SBN_40_T225_P200_Nbaged_heat_0001p_00316.inp
4 Time 0.09 Rwp 1.013 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00315.inp
3 Time 0.08 Rwp 1.003 -0.001 MC 0.26 2
SBN_40_T225_P200_Nbaged_heat_0001p_00314.inp
3 Time 0.09 Rwp 0.998 -0.001 MC 0.35 2
SBN_40_T225_P200_Nbaged_heat_0001p_00313.inp
3 Time 0.10 Rwp 0.993 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00312.inp
5 Time 0.11 Rwp 1.030 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00311.inp
3 Time 0.10 Rwp 1.081 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00310.inp
3 Time 0.09 Rwp 0.988 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00309.inp
3 Time 0.10 Rwp 0.970 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00308.inp
3 Time 0.09 Rwp 0.974 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00307.inp
3 Time 0.09 Rwp 0.976 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00306.inp
3 Time 0.10 Rwp 0.970 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00305.inp
3 Time 0.09 Rwp 0.972 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00304.inp
3 Time 0.08 Rwp 0.960 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00303.inp
3 Time 0.09 Rwp 0.970 -0.001 MC 0.46 2
SBN_40_T225_P200_Nbaged_heat_0001p_00302.inp
4 Time 0.11 Rwp 0.964 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00301.inp
4 Time 0.09 Rwp 0.960 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00300.inp
4 Time 0.10 Rwp 0.945 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00299.inp
4 Time 0.11 Rwp 0.944 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00298.inp
4 Time 0.09 Rwp 0.951 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00297.inp
3 Time 0.09 Rwp 0.950 -0.001 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00296.inp
4 Time 0.09 Rwp 0.944 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00295.inp
3 Time 0.09 Rwp 0.939 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00294.inp
3 Time 0.08 Rwp 0.930 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00293.inp
3 Time 0.08 Rwp 0.931 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00292.inp
3 Time 0.09 Rwp 0.927 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00291.inp
4 Time 0.09 Rwp 0.937 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00290.inp
6 Time 0.11 Rwp 0.936 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00289.inp
3 Time 0.09 Rwp 0.922 -0.001 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00288.inp
3 Time 0.11 Rwp 0.926 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00287.inp
3 Time 0.09 Rwp 0.928 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00286.inp
3 Time 0.09 Rwp 0.918 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00285.inp
3 Time 0.08 Rwp 0.915 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00284.inp
3 Time 0.08 Rwp 0.912 -0.000 MC 0.47 2
SBN_40_T225_P200_Nbaged_heat_0001p_00283.inp
5 Time 0.11 Rwp 0.920 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00282.inp
4 Time 0.09 Rwp 0.924 -0.001 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00281.inp
3 Time 0.09 Rwp 0.919 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00280.inp
3 Time 0.08 Rwp 0.914 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00279.inp
3 Time 0.09 Rwp 0.901 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00278.inp
3 Time 0.10 Rwp 0.895 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00277.inp
3 Time 0.10 Rwp 0.914 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00276.inp
3 Time 0.09 Rwp 0.900 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00275.inp
3 Time 0.13 Rwp 0.898 -0.000 MC 0.49 2
SBN_40_T225_P200_Nbaged_heat_0001p_00274.inp
3 Time 0.10 Rwp 0.918 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00273.inp
3 Time 0.10 Rwp 0.911 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00272.inp
3 Time 0.09 Rwp 0.887 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00271.inp
3 Time 0.09 Rwp 0.881 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00270.inp
3 Time 0.09 Rwp 0.885 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00269.inp
3 Time 0.10 Rwp 0.875 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00268.inp
3 Time 0.11 Rwp 0.880 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00267.inp
3 Time 0.09 Rwp 0.883 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00266.inp
3 Time 0.09 Rwp 0.875 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00265.inp
3 Time 0.10 Rwp 0.884 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00264.inp
3 Time 0.10 Rwp 0.880 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00263.inp
3 Time 0.09 Rwp 0.879 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00262.inp
3 Time 0.09 Rwp 0.874 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00261.inp
3 Time 0.08 Rwp 0.872 -0.000 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00260.inp
3 Time 0.09 Rwp 0.871 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00259.inp
3 Time 0.08 Rwp 0.872 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00258.inp
3 Time 0.10 Rwp 0.875 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00257.inp
3 Time 0.09 Rwp 0.874 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00256.inp
3 Time 0.10 Rwp 0.860 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00255.inp
3 Time 0.08 Rwp 0.858 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00254.inp
3 Time 0.10 Rwp 0.863 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00253.inp
6 Time 0.11 Rwp 0.848 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00252.inp
3 Time 0.09 Rwp 0.852 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00251.inp
3 Time 0.10 Rwp 0.850 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00250.inp
3 Time 0.09 Rwp 0.839 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00249.inp
3 Time 0.08 Rwp 0.840 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00248.inp
3 Time 0.09 Rwp 0.841 -0.001 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00247.inp
3 Time 0.10 Rwp 0.846 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00246.inp
3 Time 0.12 Rwp 0.836 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00245.inp
3 Time 0.09 Rwp 0.832 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00244.inp
3 Time 0.09 Rwp 0.834 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00243.inp
3 Time 0.08 Rwp 0.834 -0.000 MC 0.33 2
SBN_40_T225_P200_Nbaged_heat_0001p_00242.inp
3 Time 0.08 Rwp 0.839 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00241.inp
3 Time 0.11 Rwp 0.839 -0.000 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00240.inp
3 Time 0.09 Rwp 0.836 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00239.inp
3 Time 0.09 Rwp 0.830 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00238.inp
3 Time 0.10 Rwp 0.826 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00237.inp
5 Time 0.11 Rwp 0.828 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00236.inp
3 Time 0.09 Rwp 0.828 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00235.inp
3 Time 0.10 Rwp 0.825 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00234.inp
3 Time 0.10 Rwp 0.824 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00233.inp
3 Time 0.10 Rwp 0.818 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00232.inp
3 Time 0.09 Rwp 0.825 -0.000 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00231.inp
3 Time 0.10 Rwp 0.820 -0.000 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00230.inp
3 Time 0.10 Rwp 0.818 -0.000 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00229.inp
3 Time 0.11 Rwp 0.826 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00228.inp
4 Time 0.11 Rwp 0.821 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00227.inp
4 Time 0.10 Rwp 0.819 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00226.inp
3 Time 0.08 Rwp 0.826 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00225.inp
3 Time 0.09 Rwp 0.811 -0.001 MC 0.08 1
SBN_40_T225_P200_Nbaged_heat_0001p_00224.inp
4 Time 0.11 Rwp 0.817 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00223.inp
4 Time 0.11 Rwp 0.819 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00222.inp
3 Time 0.09 Rwp 0.808 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00221.inp
3 Time 0.10 Rwp 0.811 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00220.inp
3 Time 0.09 Rwp 0.809 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00219.inp
4 Time 0.11 Rwp 0.800 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00218.inp
3 Time 0.11 Rwp 0.800 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00217.inp
3 Time 0.10 Rwp 0.809 -0.001 MC 0.19 2
SBN_40_T225_P200_Nbaged_heat_0001p_00216.inp
3 Time 0.10 Rwp 0.810 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00215.inp
3 Time 0.09 Rwp 0.800 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00214.inp
3 Time 0.09 Rwp 0.816 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00213.inp
3 Time 0.09 Rwp 0.817 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00212.inp
3 Time 0.08 Rwp 0.817 -0.000 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00211.inp
3 Time 0.11 Rwp 0.810 -0.000 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00210.inp
3 Time 0.10 Rwp 0.816 -0.000 MC 0.19 2
SBN_40_T225_P200_Nbaged_heat_0001p_00209.inp
3 Time 0.08 Rwp 0.811 -0.000 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00208.inp
3 Time 0.08 Rwp 0.815 -0.000 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00207.inp
3 Time 0.09 Rwp 0.808 -0.000 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00206.inp
3 Time 0.09 Rwp 0.806 -0.000 MC 0.14 2
SBN_40_T225_P200_Nbaged_heat_0001p_00205.inp
3 Time 0.08 Rwp 0.803 -0.000 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00204.inp
3 Time 0.09 Rwp 0.801 -0.000 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00203.inp
3 Time 0.11 Rwp 0.808 -0.000 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00202.inp
3 Time 0.09 Rwp 0.812 -0.000 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00201.inp
3 Time 0.09 Rwp 0.808 -0.000 MC 0.14 2
SBN_40_T225_P200_Nbaged_heat_0001p_00200.inp
3 Time 0.10 Rwp 0.805 -0.000 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00199.inp
3 Time 0.10 Rwp 0.801 -0.000 MC 0.19 2
SBN_40_T225_P200_Nbaged_heat_0001p_00198.inp
3 Time 0.09 Rwp 0.800 -0.000 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00197.inp
3 Time 0.11 Rwp 0.799 -0.000 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00196.inp
3 Time 0.10 Rwp 0.795 -0.000 MC 0.04 1
SBN_40_T225_P200_Nbaged_heat_0001p_00195.inp
3 Time 0.11 Rwp 0.800 -0.000 MC 0.19 2
SBN_40_T225_P200_Nbaged_heat_0001p_00194.inp
3 Time 0.11 Rwp 0.802 -0.000 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00193.inp
3 Time 0.10 Rwp 0.800 -0.000 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00192.inp
3 Time 0.12 Rwp 0.798 -0.000 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00191.inp
3 Time 0.11 Rwp 0.794 -0.000 MC 0.10 1
SBN_40_T225_P200_Nbaged_heat_0001p_00190.inp
3 Time 0.11 Rwp 0.800 -0.001 MC 0.15 2
SBN_40_T225_P200_Nbaged_heat_0001p_00189.inp
3 Time 0.10 Rwp 0.800 -0.000 MC 0.24 2
SBN_40_T225_P200_Nbaged_heat_0001p_00188.inp
3 Time 0.12 Rwp 0.792 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00187.inp
3 Time 0.13 Rwp 0.797 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00186.inp
3 Time 0.13 Rwp 0.802 -0.000 MC 0.26 2
SBN_40_T225_P200_Nbaged_heat_0001p_00185.inp
3 Time 0.11 Rwp 0.799 -0.000 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00184.inp
3 Time 0.10 Rwp 0.798 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00183.inp
18 Time 1.11 Rwp 0.684 -0.001 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00182.inp
4 Time 0.51 Rwp 0.667 -0.001 MC 0.24 2
SBN_40_T225_P200_Nbaged_heat_0001p_00181.inp
4 Time 0.42 Rwp 0.671 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00180.inp
4 Time 0.47 Rwp 0.657 -0.001 MC 0.20 2
SBN_40_T225_P200_Nbaged_heat_0001p_00179.inp
4 Time 0.40 Rwp 0.646 -0.001 MC 0.24 2
SBN_40_T225_P200_Nbaged_heat_0001p_00178.inp
3 Time 0.38 Rwp 0.647 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00177.inp
4 Time 0.51 Rwp 0.651 -0.000 MC 0.24 2
SBN_40_T225_P200_Nbaged_heat_0001p_00176.inp
3 Time 0.37 Rwp 0.637 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00175.inp
3 Time 0.32 Rwp 0.635 -0.000 MC 0.24 2
SBN_40_T225_P200_Nbaged_heat_0001p_00174.inp
3 Time 0.38 Rwp 0.638 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00173.inp
3 Time 0.36 Rwp 0.631 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00172.inp
3 Time 0.41 Rwp 0.628 -0.000 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00171.inp
3 Time 0.32 Rwp 0.626 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00170.inp
4 Time 0.44 Rwp 0.631 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00169.inp
4 Time 0.50 Rwp 0.626 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00168.inp
3 Time 0.31 Rwp 0.632 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00167.inp
3 Time 0.32 Rwp 0.622 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00166.inp
3 Time 0.29 Rwp 0.622 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00165.inp
3 Time 0.36 Rwp 0.616 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00164.inp
3 Time 0.32 Rwp 0.608 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00163.inp
3 Time 0.36 Rwp 0.615 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00162.inp
3 Time 0.34 Rwp 0.607 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00161.inp
3 Time 0.41 Rwp 0.598 -0.000 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00160.inp
3 Time 0.43 Rwp 0.599 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00159.inp
3 Time 0.37 Rwp 0.598 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00158.inp
3 Time 0.40 Rwp 0.594 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00157.inp
3 Time 0.38 Rwp 0.597 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00156.inp
3 Time 0.36 Rwp 0.595 -0.000 MC 0.13 1
SBN_40_T225_P200_Nbaged_heat_0001p_00155.inp
3 Time 0.31 Rwp 0.591 -0.000 MC 0.07 1
SBN_40_T225_P200_Nbaged_heat_0001p_00154.inp
3 Time 0.34 Rwp 0.595 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00153.inp
4 Time 0.42 Rwp 0.598 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00152.inp
3 Time 0.29 Rwp 0.601 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00151.inp
4 Time 0.41 Rwp 0.592 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00150.inp
3 Time 0.34 Rwp 0.596 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00149.inp
4 Time 0.42 Rwp 0.589 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00148.inp
3 Time 0.45 Rwp 0.592 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00147.inp
3 Time 0.37 Rwp 0.587 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00146.inp
3 Time 0.33 Rwp 0.587 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00145.inp
4 Time 0.42 Rwp 0.592 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00144.inp
3 Time 0.39 Rwp 0.585 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00143.inp
3 Time 0.39 Rwp 0.593 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00142.inp
4 Time 0.41 Rwp 0.588 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00141.inp
3 Time 0.34 Rwp 0.583 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00140.inp
3 Time 0.32 Rwp 0.584 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00139.inp
3 Time 0.41 Rwp 0.578 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00138.inp
3 Time 0.34 Rwp 0.580 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00137.inp
3 Time 0.43 Rwp 0.578 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00136.inp
3 Time 0.31 Rwp 0.586 -0.001 MC 0.11 1
SBN_40_T225_P200_Nbaged_heat_0001p_00135.inp
3 Time 0.32 Rwp 0.585 -0.001 MC 0.08 1
SBN_40_T225_P200_Nbaged_heat_0001p_00134.inp
3 Time 0.32 Rwp 0.580 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00133.inp
3 Time 0.31 Rwp 0.584 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00132.inp
3 Time 0.32 Rwp 0.584 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00131.inp
3 Time 0.35 Rwp 0.577 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00130.inp
4 Time 0.42 Rwp 0.579 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00129.inp
3 Time 0.32 Rwp 0.595 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00128.inp
3 Time 0.34 Rwp 0.583 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00127.inp
4 Time 0.41 Rwp 0.580 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00126.inp
3 Time 0.37 Rwp 0.577 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00125.inp
3 Time 0.37 Rwp 0.582 -0.000 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00124.inp
3 Time 0.33 Rwp 0.579 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00123.inp
4 Time 0.48 Rwp 0.580 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00122.inp
4 Time 0.36 Rwp 0.583 -0.001 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00121.inp
3 Time 0.34 Rwp 0.575 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00120.inp
3 Time 0.35 Rwp 0.581 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00119.inp
3 Time 0.35 Rwp 0.579 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00118.inp
4 Time 0.41 Rwp 0.577 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00117.inp
4 Time 0.41 Rwp 0.577 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00116.inp
3 Time 0.38 Rwp 0.577 -0.001 MC 0.26 2
SBN_40_T225_P200_Nbaged_heat_0001p_00115.inp
3 Time 0.34 Rwp 0.571 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00114.inp
3 Time 0.33 Rwp 0.578 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00113.inp
3 Time 0.37 Rwp 0.571 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00112.inp
3 Time 0.38 Rwp 0.570 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00111.inp
3 Time 0.31 Rwp 0.572 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00110.inp
3 Time 0.34 Rwp 0.578 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00109.inp
3 Time 0.36 Rwp 0.579 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00108.inp
3 Time 0.35 Rwp 0.574 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00107.inp
3 Time 0.36 Rwp 0.566 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00106.inp
4 Time 0.40 Rwp 0.577 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00105.inp
4 Time 0.45 Rwp 0.580 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00104.inp
3 Time 0.36 Rwp 0.579 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00103.inp
3 Time 0.32 Rwp 0.584 -0.001 MC 0.17 2
SBN_40_T225_P200_Nbaged_heat_0001p_00102.inp
4 Time 0.42 Rwp 0.577 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00101.inp
4 Time 0.42 Rwp 0.582 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00100.inp
3 Time 0.42 Rwp 0.576 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00099.inp
3 Time 0.38 Rwp 0.569 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00098.inp
4 Time 0.47 Rwp 0.574 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00097.inp
3 Time 0.38 Rwp 0.576 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00096.inp
3 Time 0.38 Rwp 0.578 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00095.inp
3 Time 0.36 Rwp 0.578 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00094.inp
3 Time 0.37 Rwp 0.572 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00093.inp
3 Time 0.38 Rwp 0.574 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00092.inp
3 Time 0.41 Rwp 0.575 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00091.inp
3 Time 0.38 Rwp 0.570 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00090.inp
3 Time 0.34 Rwp 0.571 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00089.inp
3 Time 0.32 Rwp 0.576 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00088.inp
4 Time 0.41 Rwp 0.572 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00087.inp
3 Time 0.29 Rwp 0.580 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00086.inp
3 Time 0.37 Rwp 0.583 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00085.inp
3 Time 0.35 Rwp 0.575 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00084.inp
3 Time 0.29 Rwp 0.572 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00083.inp
4 Time 0.41 Rwp 0.575 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00082.inp
3 Time 0.39 Rwp 0.569 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00081.inp
4 Time 0.42 Rwp 0.573 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00080.inp
3 Time 0.37 Rwp 0.570 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00079.inp
4 Time 0.42 Rwp 0.568 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00078.inp
3 Time 0.35 Rwp 0.575 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00077.inp
3 Time 0.39 Rwp 0.566 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00076.inp
4 Time 0.41 Rwp 0.571 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00075.inp
3 Time 0.36 Rwp 0.570 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00074.inp
3 Time 0.29 Rwp 0.573 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00073.inp
3 Time 0.34 Rwp 0.568 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00072.inp
3 Time 0.32 Rwp 0.572 -0.001 MC 0.18 2
SBN_40_T225_P200_Nbaged_heat_0001p_00071.inp
3 Time 0.39 Rwp 0.577 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00070.inp
3 Time 0.35 Rwp 0.568 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00069.inp
3 Time 0.30 Rwp 0.573 -0.001 MC 0.04 1
SBN_40_T225_P200_Nbaged_heat_0001p_00068.inp
3 Time 0.38 Rwp 0.570 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00067.inp
3 Time 0.38 Rwp 0.571 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00066.inp
3 Time 0.36 Rwp 0.563 -0.000 MC 0.26 2
SBN_40_T225_P200_Nbaged_heat_0001p_00065.inp
3 Time 0.29 Rwp 0.567 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00064.inp
4 Time 0.42 Rwp 0.564 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00063.inp
3 Time 0.34 Rwp 0.562 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00062.inp
3 Time 0.37 Rwp 0.561 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00061.inp
3 Time 0.37 Rwp 0.563 -0.000 MC 0.32 2
SBN_40_T225_P200_Nbaged_heat_0001p_00060.inp
3 Time 0.33 Rwp 0.571 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00059.inp
3 Time 0.38 Rwp 0.572 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00058.inp
3 Time 0.37 Rwp 0.572 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00057.inp
3 Time 0.37 Rwp 0.567 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00056.inp
3 Time 0.34 Rwp 0.572 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00055.inp
3 Time 0.38 Rwp 0.572 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00054.inp
3 Time 0.33 Rwp 0.570 -0.000 MC 0.01 1
SBN_40_T225_P200_Nbaged_heat_0001p_00053.inp
3 Time 0.35 Rwp 0.566 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00052.inp
3 Time 0.34 Rwp 0.569 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00051.inp
3 Time 0.29 Rwp 0.572 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00050.inp
3 Time 0.33 Rwp 0.562 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00049.inp
3 Time 0.37 Rwp 0.559 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00048.inp
3 Time 0.34 Rwp 0.569 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00047.inp
3 Time 0.37 Rwp 0.563 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00046.inp
3 Time 0.30 Rwp 0.574 -0.000 MC 0.07 1
SBN_40_T225_P200_Nbaged_heat_0001p_00045.inp
4 Time 0.41 Rwp 0.565 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00044.inp
3 Time 0.40 Rwp 0.567 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00043.inp
3 Time 0.28 Rwp 0.569 -0.001 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00042.inp
4 Time 0.42 Rwp 0.570 -0.001 MC 0.18 2
SBN_40_T225_P200_Nbaged_heat_0001p_00041.inp
3 Time 0.37 Rwp 0.561 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00040.inp
3 Time 0.42 Rwp 0.563 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00039.inp
4 Time 0.41 Rwp 0.572 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00038.inp
3 Time 0.38 Rwp 0.565 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00037.inp
3 Time 0.58 Rwp 0.558 -0.000 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00036.inp
3 Time 0.34 Rwp 0.564 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00035.inp
3 Time 0.38 Rwp 0.561 -0.000 MC 0.30 2
SBN_40_T225_P200_Nbaged_heat_0001p_00034.inp
3 Time 0.35 Rwp 0.560 -0.001 MC 0.23 2
SBN_40_T225_P200_Nbaged_heat_0001p_00033.inp
3 Time 0.39 Rwp 0.562 -0.000 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00032.inp
4 Time 0.41 Rwp 0.559 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00031.inp
4 Time 0.41 Rwp 0.559 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00030.inp
3 Time 0.37 Rwp 0.559 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00029.inp
4 Time 0.47 Rwp 0.562 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00028.inp
3 Time 0.39 Rwp 0.566 -0.000 MC 0.06 1
SBN_40_T225_P200_Nbaged_heat_0001p_00027.inp
3 Time 0.40 Rwp 0.566 -0.001 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00026.inp
3 Time 0.35 Rwp 0.577 -0.001 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00025.inp
3 Time 0.34 Rwp 0.559 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00024.inp
3 Time 0.37 Rwp 0.557 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00023.inp
3 Time 0.44 Rwp 0.554 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00022.inp
3 Time 0.33 Rwp 0.553 -0.001 MC 0.16 2
SBN_40_T225_P200_Nbaged_heat_0001p_00021.inp
3 Time 0.31 Rwp 0.557 -0.000 MC 0.01 1
SBN_40_T225_P200_Nbaged_heat_0001p_00020.inp
3 Time 0.31 Rwp 0.562 -0.000 MC 0.01 1
SBN_40_T225_P200_Nbaged_heat_0001p_00019.inp
3 Time 0.37 Rwp 0.561 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00018.inp
3 Time 0.38 Rwp 0.556 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00017.inp
3 Time 0.31 Rwp 0.561 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00016.inp
3 Time 0.43 Rwp 0.547 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00015.inp
3 Time 0.38 Rwp 0.545 -0.001 MC 0.21 2
SBN_40_T225_P200_Nbaged_heat_0001p_00014.inp
3 Time 0.42 Rwp 0.556 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00013.inp
3 Time 0.35 Rwp 0.550 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00012.inp
3 Time 0.42 Rwp 0.548 -0.001 MC 0.22 2
SBN_40_T225_P200_Nbaged_heat_0001p_00011.inp
3 Time 0.42 Rwp 0.542 -0.000 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00010.inp
4 Time 0.43 Rwp 0.548 -0.000 MC 0.01 1
SBN_40_T225_P200_Nbaged_heat_0001p_00009.inp
3 Time 0.31 Rwp 0.546 -0.001 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00008.inp
5 Time 0.46 Rwp 0.541 -0.001 MC 0.00 1
SBN_40_T225_P200_Nbaged_heat_0001p_00007.inp
4 Time 0.50 Rwp 0.536 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00006.inp
4 Time 0.46 Rwp 0.544 -0.001 MC 0.29 2
SBN_40_T225_P200_Nbaged_heat_0001p_00005.inp
8 Time 0.83 Rwp 0.689 -0.001 MC 0.34 2
SBN_40_T225_P200_Nbaged_heat_0001p_00004.inp
8 Time 0.70 Rwp 1.079 -0.001 MC 0.31 2
SBN_40_T225_P200_Nbaged_heat_0001p_00003.inp
8 Time 0.62 Rwp 0.793 -0.000 MC 0.03 1
SBN_40_T225_P200_Nbaged_heat_0001p_00002.inp
7 Time 0.79 Rwp 0.760 -0.001 MC 0.27 2
SBN_40_T225_P200_Nbaged_heat_0001p_00001.inp
3 Time 0.37 Rwp 0.755 -0.001 MC 0.28 2
SBN_40_T225_P200_Nbaged_heat_0001p_00000.inp
3 Time 0.41 Rwp 0.752 -0.001 MC 0.31 2
|
Generate_text_with_RuGPTs_HF_torch_1_4_0.ipynb | ###Markdown
Generate text with RuGPTs in huggingface
How to generate text with pretrained RuGPTs models with huggingface.
This notebook is valid for all RuGPTs models except RuGPT3XL.
Install env
###Code
!pip3 install transformers==3.5.0
!pip install torch==1.4.0
# !pip install flask-ngrok
!pip install flask
!git clone https://github.com/sberbank-ai/ru-gpts
###Output
Cloning into 'ru-gpts'...
remote: Enumerating objects: 651, done.
remote: Counting objects: 100% (146/146), done.
remote: Compressing objects: 100% (78/78), done.
remote: Total 651 (delta 90), reused 113 (delta 68), pack-reused 505
Receiving objects: 100% (651/651), 379.58 KiB | 4.57 MiB/s, done.
Resolving deltas: 100% (390/390), done.
###Markdown
Generate
###Code
import numpy as np
import torch
from flask import Flask, jsonify, request
# from flask_ngrok import run_with_ngrok
TRAIN_TEXT_FILE_PATH = "train_text_5133.txt"
np.random.seed(42)
torch.manual_seed(42)
from transformers import GPT2LMHeadModel, GPT2Tokenizer
###Output
_____no_output_____
###Markdown
New section
###Code
def load_tokenizer_and_model(model_name_or_path):
return GPT2Tokenizer.from_pretrained(model_name_or_path), GPT2LMHeadModel.from_pretrained(model_name_or_path).cuda()
# check_text = "Компания: БургерКинг. Слоган: Лучшие бургеры на планете."
# (example prompt: "Company: Burger King. Slogan: The best burgers on the planet.")
def generate(
    model, tok, text,
    do_sample=True, max_length=50, repetition_penalty=5.0,
    top_k=5, top_p=0.95, temperature=0.85,
    num_beams=None,
    no_repeat_ngram_size=3
):
    # Encode the prompt and move the token ids to the GPU
    input_ids = tok.encode(text, return_tensors="pt").cuda()
    out = model.generate(
        input_ids.cuda(),
        max_length=max_length,
        repetition_penalty=repetition_penalty,
        do_sample=do_sample,
        top_k=top_k, top_p=top_p, temperature=temperature,
        num_beams=num_beams, no_repeat_ngram_size=no_repeat_ngram_size
    )
    # Decode every generated sequence back into text
    return list(map(tok.decode, out))
###Output
_____no_output_____
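###Markdown
The keyword arguments to `generate` steer the decoding strategy: `temperature` sharpens or flattens the next-token distribution, `top_k`/`top_p` restrict sampling to the most likely tokens, and `repetition_penalty`/`no_repeat_ngram_size` discourage loops. A minimal sketch contrasting deterministic beam search with nucleus sampling — it assumes `tok` and `model` are loaded as in the cells below, and the prompt is just an example:
###Code
# Illustrative only; assumes `tok`, `model` and `generate` from above are available
prompt = "Александр Сергеевич Пушкин родился в "  # "Alexander Sergeyevich Pushkin was born in "
# Deterministic: beam search without sampling -> same output on every run
beam_out = generate(model, tok, prompt, do_sample=False, num_beams=5, max_length=40)
# Stochastic: nucleus sampling -> more diverse, varies run to run
sampled_out = generate(model, tok, prompt, do_sample=True, top_p=0.9, temperature=1.0, max_length=40)
print(beam_out[0])
print(sampled_out[0])
###Output
_____no_output_____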
###Markdown
RuGPT3Small
###Code
tok, model = load_tokenizer_and_model("sberbank-ai/rugpt3small_based_on_gpt2")
generated = generate(model, tok, "Компания Новые двери. Слоган: ", num_beams=10)  # prompt: "Company New Doors. Slogan: "
generated[0]
app = Flask(__name__)
# run_with_ngrok(app)  # enable only if flask-ngrok is installed (its import is commented out above)

@app.route("/")
def hello():
    return "Hello World!! from Google Colab"

@app.route("/bot")
def hellobot():
    # Example output: "Компания: БургерКинг. Слоган: Лучшие бургеры на планете."
    # ("Company: Burger King. Slogan: The best burgers on the planet.")
    # Example full prompt: "Компания: Tesla Inc. Индустрия: Автомобильная промышленность. Продукт: электромобиль. Слоган: "
    company = request.args.get('company')
    industry = request.args.get('industry')
    product = request.args.get('product')  # currently unused in the prompt below
    # Russian prompt template: "Company: <company>. Industry: <industry>. Slogan: "
    result_text = "Компания: " + company + ". Индустрия: " + industry + ". Слоган: "
    # Note: this reloads the tokenizer and model on every request; see the cached sketch below
    tok, model = load_tokenizer_and_model("sberbank-ai/rugpt3small_based_on_gpt2")
    generated = generate(model, tok, result_text, num_beams=10)
    generated_text = generated[0]
    return generated_text
if __name__ == '__main__':
app.run()
import requests
# Query parameters are Russian: company=Everest, industry=Advertising, product=Marketing
result = requests.get("http://3abaede87cbc.ngrok.io/bot?company=Эверест&industry=Реклама&product=Маркетинг")
print(result.text)
###Output
Tunnel 3abaede87cbc.ngrok.io not found
ERR_NGROK_3200
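###Markdown
The `ERR_NGROK_3200` above just means the tunnel URL was no longer active when the request was made. Separately, `hellobot` reloads the tokenizer and model on every request, which is slow; below is a sketch of loading them once and reusing them across requests. The `/bot_cached` route name is hypothetical; `app`, `request`, `generate` and `load_tokenizer_and_model` are as defined above.
###Code
# Sketch: load the model once at import time instead of per request
TOK, MODEL = load_tokenizer_and_model("sberbank-ai/rugpt3small_based_on_gpt2")

@app.route("/bot_cached")  # hypothetical route, kept separate from /bot
def hellobot_cached():
    company = request.args.get('company')
    industry = request.args.get('industry')
    # Same Russian prompt template as /bot: "Company: ... Industry: ... Slogan: "
    result_text = "Компания: " + company + ". Индустрия: " + industry + ". Слоган: "
    return generate(MODEL, TOK, result_text, num_beams=10)[0]
###Output
_____no_output_____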
###Markdown
RuGPT3Medium
###Code
tok, model = load_tokenizer_and_model("sberbank-ai/rugpt3medium_based_on_gpt2")
generated = generate(model, tok, "Александр Сергеевич Пушкин родился в ", num_beams=10)  # prompt: "Alexander Sergeyevich Pushkin was born in "
generated[0]
###Output
_____no_output_____
###Markdown
RuGPT3Large
###Code
tok, model = load_tokenizer_and_model("sberbank-ai/rugpt3large_based_on_gpt2")
generated = generate(model, tok, "Александр Сергеевич Пушкин родился в ", num_beams=10)  # prompt: "Alexander Sergeyevich Pushkin was born in "
generated[0]
###Output
_____no_output_____ |
SalesAnalysis.ipynb | ###Markdown
Sales Analysis
Import necessary libraries
###Code
import os
import pandas as pd
###Output
_____no_output_____
###Markdown
Merge data from each month into one CSV
###Code
path = "./Sales_Data"
files = [file for file in os.listdir(path) if not file.startswith('.')] # Ignore hidden files
all_months_data = pd.DataFrame()
for file in files:
current_data = pd.read_csv(path+"/"+file)
all_months_data = pd.concat([all_months_data, current_data])
all_months_data.to_csv("all_data_copy.csv", index=False)
###Output
_____no_output_____
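###Markdown
`pd.concat` inside the loop copies the accumulated frame on every iteration, so the merge gets slower as it grows. A sketch of the usual pattern — collect the frames in a list and concatenate once (same `path` and `files` as above):
###Code
# Sketch: read all files first, then concatenate in a single call
frames = [pd.read_csv(os.path.join(path, file)) for file in files]
all_months_data = pd.concat(frames, ignore_index=True)
all_months_data.to_csv("all_data_copy.csv", index=False)
###Output
_____no_output_____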
###Markdown
Read in updated dataframe
###Code
all_data = pd.read_csv("all_data.csv")
all_data.head()
###Output
_____no_output_____
###Markdown
Clean up the data!
The first step in this is figuring out what we need to clean. I have found in practice that you find things you need to clean as you perform operations and get errors. Based on the error, you decide how you should go about cleaning the data.
Drop rows of NaN
###Code
# Find NAN
nan_df = all_data[all_data.isna().any(axis=1)]
display(nan_df.head())
all_data = all_data.dropna(how='all')
all_data.head()
###Output
_____no_output_____
###Markdown
Get rid of text in order date column
###Code
all_data = all_data[all_data['Order Date'].str[0:2]!='Or']
###Output
_____no_output_____
###Markdown
Make columns correct type
###Code
all_data['Quantity Ordered'] = pd.to_numeric(all_data['Quantity Ordered'])
all_data['Price Each'] = pd.to_numeric(all_data['Price Each'])
###Output
_____no_output_____
###Markdown
Augment data with additional columns
Add month column
###Code
all_data['Month'] = all_data['Order Date'].str[0:2]
all_data['Month'] = all_data['Month'].astype('int32')
all_data.head()
###Output
_____no_output_____
###Markdown
Add month column (alternative method)
###Code
all_data['Month 2'] = pd.to_datetime(all_data['Order Date']).dt.month
all_data.head()
###Output
_____no_output_____
###Markdown
Add city column
###Code
def get_city(address):
return address.split(",")[1].strip(" ")
def get_state(address):
return address.split(",")[2].split(" ")[1]
all_data['City'] = all_data['Purchase Address'].apply(lambda x: f"{get_city(x)} ({get_state(x)})")
all_data.head()
###Output
_____no_output_____
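###Markdown
The same columns can be built without row-wise helper functions by splitting the address with the vectorized `.str` accessor. A sketch — it writes to a hypothetical `City 2` column so the `City` column above is left untouched:
###Code
# Sketch: vectorized alternative to the apply() above
parts = all_data['Purchase Address'].str.split(', ', expand=True)  # street, city, "ST ZIP"
all_data['City 2'] = parts[1] + ' (' + parts[2].str.split(' ').str[0] + ')'
all_data[['City', 'City 2']].head()
###Output
_____no_output_____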
###Markdown
Data Exploration!
Question 1: What was the best month for sales? How much was earned that month?
###Code
all_data['Sales'] = all_data['Quantity Ordered'].astype('int') * all_data['Price Each'].astype('float')
all_data.groupby(['Month']).sum()
import matplotlib.pyplot as plt
months = range(1,13)
print(months)
plt.bar(months,all_data.groupby(['Month']).sum()['Sales'])
plt.xticks(months)
plt.ylabel('Sales in USD ($)')
plt.xlabel('Month number')
plt.show()
###Output
range(1, 13)
###Markdown
Question 2: What city sold the most product?
###Code
all_data.groupby(['City']).sum()
import matplotlib.pyplot as plt
keys = [city for city, df in all_data.groupby(['City'])]
plt.bar(keys,all_data.groupby(['City']).sum()['Sales'])
plt.ylabel('Sales in USD ($)')
plt.xlabel('Month number')
plt.xticks(keys, rotation='vertical', size=8)
plt.show()
###Output
_____no_output_____
###Markdown
Question 3: What time should we display advertisements to maximize likelihood of customer's buying product?
###Code
# Add hour column
all_data['Hour'] = pd.to_datetime(all_data['Order Date']).dt.hour
all_data['Minute'] = pd.to_datetime(all_data['Order Date']).dt.minute
all_data['Count'] = 1
all_data.head()
keys = [pair for pair, df in all_data.groupby(['Hour'])]
plt.plot(keys, all_data.groupby(['Hour']).count()['Count'])
plt.xticks(keys)
plt.grid()
plt.show()
# My recommendation is slightly before 11am or 7pm
###Output
_____no_output_____
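###Markdown
The peak hours can also be read off programmatically rather than eyeballed from the plot (a short sketch using the same `Count` column as above):
###Code
# Sketch: the two busiest order hours, backing the ~11am / ~7pm recommendation
orders_by_hour = all_data.groupby('Hour').count()['Count']
print(orders_by_hour.nlargest(2))
###Output
_____no_output_____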
###Markdown
Question 4: What products are most often sold together?
###Code
# https://stackoverflow.com/questions/43348194/pandas-select-rows-if-id-appear-several-time
df = all_data[all_data['Order ID'].duplicated(keep=False)]
# Referenced: https://stackoverflow.com/questions/27298178/concatenate-strings-from-several-rows-using-pandas-groupby
df['Grouped'] = df.groupby('Order ID')['Product'].transform(lambda x: ','.join(x))
df2 = df[['Order ID', 'Grouped']].drop_duplicates()
# Referenced: https://stackoverflow.com/questions/52195887/counting-unique-pairs-of-numbers-into-a-python-dictionary
from itertools import combinations
from collections import Counter
count = Counter()
for row in df2['Grouped']:
row_list = row.split(',')
count.update(Counter(combinations(row_list, 2)))
for key,value in count.most_common(10):
print(key, value)
###Output
('iPhone', 'Lightning Charging Cable') 1005
('Google Phone', 'USB-C Charging Cable') 987
('iPhone', 'Wired Headphones') 447
('Google Phone', 'Wired Headphones') 414
('Vareebadd Phone', 'USB-C Charging Cable') 361
('iPhone', 'Apple Airpods Headphones') 360
('Google Phone', 'Bose SoundSport Headphones') 220
('USB-C Charging Cable', 'Wired Headphones') 160
('Vareebadd Phone', 'Wired Headphones') 143
('Lightning Charging Cable', 'Wired Headphones') 92
###Markdown
What product sold the most? Why do you think it sold the most?
###Code
product_group = all_data.groupby('Product')
quantity_ordered = product_group.sum()['Quantity Ordered']
keys = [pair for pair, df in product_group]
plt.bar(keys, quantity_ordered)
plt.xticks(keys, rotation='vertical', size=8)
plt.show()
###Output
_____no_output_____
###Markdown
The products that sold the most are the cheapest ones, like batteries, charging cables and headphones.
###Code
prices = all_data.groupby('Product').mean()['Price Each']
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.bar(keys, quantity_ordered, color='g')
ax2.plot(keys, prices, color='b')
ax1.set_xlabel('Product Name')
ax1.set_ylabel('Quantity Ordered', color='g')
ax2.set_ylabel('Price ($)', color='b')
ax1.set_xticklabels(keys, rotation='vertical', size=8)
fig.show()
###Output
C:\Users\keith\Anaconda3\lib\site-packages\ipykernel_launcher.py:16: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
app.launch_new_instance()
###Markdown
Sales Analysis
Import necessary libraries
###Code
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Merge all 12 monthly files into one CSV
###Code
path = "./Sales_Data"
files = [file for file in os.listdir(path) if not file.startswith('.')]
all_months_data = pd.DataFrame()
for file in files:
current_data = pd.read_csv(path+"/"+file)
all_months_data = pd.concat([all_months_data, current_data])
all_months_data.to_csv("all_data.csv", index=False)
###Output
_____no_output_____
###Markdown
Read in updated dataframe
###Code
df = pd.read_csv("all_data.csv")
df.head()
df.describe()
df.shape
###Output
_____no_output_____
###Markdown
Clean up the data!The first step in this is figuring out what we need to clean. I have found in practice that you find things you need to clean as you perform operations and get errors. Based on the error, you decide how you should go about cleaning the data. Drop rows of NAN
###Code
# Find NAN
nan_df = df[df.isna().any(axis=1)]
display(nan_df.head())
df = df.dropna(how='all')
df.head()
###Output
_____no_output_____
###Markdown
Get rid of text in order date column
###Code
df = df[df['Order Date'].str[0:2]!='Or']
###Output
_____no_output_____
###Markdown
Make columns correct type
###Code
df['Quantity Ordered'] = pd.to_numeric(df['Quantity Ordered'])
df['Price Each'] = pd.to_numeric(df['Price Each'])
###Output
_____no_output_____
###Markdown
Add month column
###Code
df['Month'] = df['Order Date'].str[0:2]
df['Month'] = df['Month'].astype('int32')
df.head()
###Output
_____no_output_____
###Markdown
Data Exploration! Question 1: What was the best month for sales? How much was earned that month?
###Code
df['Sales'] = df['Quantity Ordered'].astype('int') * df['Price Each'].astype('float')
df.head()
result = df.groupby(['Month']).sum()
result
months = range(1,13)
print(months)
plt.bar(months,result['Sales'])
plt.xticks(months)
plt.ylabel('Sales ')
plt.xlabel('Month number')
plt.show()
sns.heatmap(df.corr(),annot=True, fmt='.2f');
###Output
_____no_output_____
###Markdown
Question 2: What city sold the most product? Add city column
###Code
def get_city(address):
return address.split(",")[1].strip(" ")
def get_state(address):
return address.split(",")[2].split(" ")[1]
df['City'] = df['Purchase Address'].apply(lambda x: f"{get_city(x)} ({get_state(x)})")
df.head(10)
result2 = df.groupby(['City']).sum()
result2
import matplotlib.pyplot as plt
cities = [city for city, df in df.groupby(['City'])]
plt.bar(cities,result2['Sales'])
plt.ylabel('Sales ')
plt.xlabel('City Name')
plt.xticks(cities, rotation='vertical', size=8)
plt.show()
###Output
_____no_output_____
###Markdown
Question 3: What time should we display advertisements to maximize likelihood of customer's buying product?
###Code
# Add hour column
df['Hour'] = pd.to_datetime(df['Order Date']).dt.hour
df['Minute'] = pd.to_datetime(df['Order Date']).dt.minute
df['Count'] = 1
df.head()
hours = [hour for hour, df in df.groupby(['Hour'])]
plt.plot(hours, df.groupby(['Hour']).count()['Count'])
plt.xticks(hours)
plt.xlabel('Hour ')
plt.ylabel('Number of orders')
plt.grid()
plt.show()
# My recommendation is slightly before 11am or 7pm
###Output
_____no_output_____
###Markdown
Clean up the data drop rows of Nan
###Code
nan_df= all_data [all_data.isna().any(axis=1)]
nan_df.head()
all_data = all_data.dropna(how='all')
all_data.head()
###Output
_____no_output_____
###Markdown
find 'OR' and delete it
###Code
all_data= all_data[all_data['Order Date'].str[0:2]!= 'Or']
###Output
_____no_output_____
###Markdown
Convert columns to the correct type
###Code
all_data['Quantity Ordered']= pd.to_numeric(all_data['Quantity Ordered']) #make int
all_data['Price Each'] = pd.to_numeric(all_data['Price Each'])#make float
all_data.head()
###Output
_____no_output_____
###Markdown
Augment data with additional columns Adding month column
###Code
all_data['Month']= all_data['Order Date'].str[0:2]
all_data['Month']= all_data['Month'].astype('int32')
all_data.head()
###Output
_____no_output_____
###Markdown
add Sales column
###Code
all_data['Sales']= all_data['Quantity Ordered'] * all_data['Price Each']
all_data.head()
###Output
_____no_output_____
###Markdown
adding city and state column
###Code
def get_city (address):
return address.split(',')[1]
def get_state(address):
return address.split(',')[2].strip()[0:2]
all_data['City'] = all_data['Purchase Address'].apply (lambda x: f"{get_city(x)} ({get_state(x)})")
all_data.head()
###Output
_____no_output_____
###Markdown
The best month for sales
###Code
results= all_data.groupby('Month').sum()
months = range(1,13)
plt.bar(months,results['Sales'])
plt.xticks(months)
plt.ylabel('Sales in USD ($)')
plt.xlabel('Month number')
plt.show()
###Output
_____no_output_____
###Markdown
What city had the highest number of sales
###Code
results = all_data.groupby('City').sum()
cities = [city for city, df in all_data.groupby('City')]
plt.bar(cities,results['Sales'])
plt.xticks(cities, rotation = "vertical" , size=8)
plt.ylabel('Sales in USD ($)')
plt.xlabel('City Name')
plt.show()
###Output
_____no_output_____
###Markdown
Sales Analysis
###Code
# Import Libraries
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from itertools import combinations
from collections import Counter
###Output
_____no_output_____
###Markdown
Load the Data
###Code
files = [file for file in os.listdir('./Sales_Data')]
sales_data = pd.DataFrame()
for file in files:
df = pd.read_csv(f'./Sales_Data/{file}')
sales_data = pd.concat([sales_data, df])
sales_data.head()
###Output
_____no_output_____
###Markdown
Data Cleaning and Processing
###Code
sales_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 186850 entries, 0 to 11685
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Order ID 186305 non-null object
1 Product 186305 non-null object
2 Quantity Ordered 186305 non-null object
3 Price Each 186305 non-null object
4 Order Date 186305 non-null object
5 Purchase Address 186305 non-null object
dtypes: object(6)
memory usage: 10.0+ MB
###Markdown
Deal with missing values
###Code
missing_count = sales_data.isna().sum().sort_values(ascending=False)
missing_count
sales_data = sales_data.dropna(how='all')
sales_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 186305 entries, 0 to 11685
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Order ID 186305 non-null object
1 Product 186305 non-null object
2 Quantity Ordered 186305 non-null object
3 Price Each 186305 non-null object
4 Order Date 186305 non-null object
5 Purchase Address 186305 non-null object
dtypes: object(6)
memory usage: 9.9+ MB
###Markdown
Change column datatype
###Code
# Get rid of header column in middle of data
sales_data = sales_data[sales_data.Product != 'Product']
# Change column datatype
sales_data['Quantity Ordered'] = pd.to_numeric(sales_data['Quantity Ordered'])
sales_data['Price Each'] = pd.to_numeric(sales_data['Price Each'])
sales_data['Order Date'] = pd.to_datetime(sales_data['Order Date'])
sales_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 185950 entries, 0 to 11685
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Order ID 185950 non-null object
1 Product 185950 non-null object
2 Quantity Ordered 185950 non-null int64
3 Price Each 185950 non-null float64
4 Order Date 185950 non-null datetime64[ns]
5 Purchase Address 185950 non-null object
dtypes: datetime64[ns](1), float64(1), int64(1), object(3)
memory usage: 9.9+ MB
###Markdown
Add additional columns Add Time Columns
###Code
sales_data['Month'] = sales_data['Order Date'].dt.month
sales_data['Year'] = sales_data['Order Date'].dt.year
sales_data['Day of Week'] = sales_data['Order Date'].dt.dayofweek
sales_data['Hour'] = sales_data['Order Date'].dt.hour
sales_data.head()
###Output
_____no_output_____
###Markdown
Add City
###Code
def get_city(address):
return address.split(",")[1].strip(" ")
def get_state(address):
return address.split(",")[2].split(" ")[1]
sales_data['City'] = sales_data['Purchase Address'].apply(lambda x: f"{get_city(x)}, {get_state(x)}")
sales_data.head()
###Output
_____no_output_____
###Markdown
Add sales amount column
###Code
sales_data['Sales'] = sales_data['Quantity Ordered']* sales_data['Price Each']
sales_data.head()
###Output
_____no_output_____
###Markdown
Exploratory Analysis and Visualization Number of orders
###Code
orders = sales_data[['Order ID', 'Quantity Ordered', 'Sales']].groupby('Order ID').sum()
len(orders)
###Output
_____no_output_____
###Markdown
Distribution of Quantity Ordered and Sales per Order
###Code
fig, axes = plt.subplots(1, 2, figsize=(15, 5))
fig.suptitle('Distribution per Order')
sns.histplot(ax=axes[0], data=orders, x="Quantity Ordered")
axes[0].set_title('Quantity')
sns.histplot(ax=axes[1], data=orders, x="Sales", binwidth = 100)
axes[1].set_title('Sales Amount')
plt.show()
###Output
_____no_output_____
###Markdown
Ask and Answer Questions Question 1: What was the best month for sales? How much was earned that month?
###Code
sales_by_month = sales_data[['Month', 'Quantity Ordered', 'Sales']].groupby(['Month']).sum()
sales_by_month['Sales in thousands'] = sales_by_month['Sales']/1000
sales_by_month
sns.set_palette("Set2")
ax = sns.barplot(x=sales_by_month.index, y='Sales in thousands', data=sales_by_month)
ax.set_ylabel('Sales (in thousands)')
plt.title('Sales By Month')
plt.show()
###Output
_____no_output_____
###Markdown
Question 2: What was the best day of the week for sales? How much was earned?
###Code
sales_by_day = sales_data[['Day of Week', 'Quantity Ordered', 'Sales']].groupby(['Day of Week']).sum()
sales_by_day['Sales in thousands'] = sales_by_day['Sales']/1000
sales_by_day
ax = sns.barplot(x=sales_by_day.index, y='Sales in thousands', data=sales_by_day)
ax.set_ylabel('Sales (in thousands)')
plt.title('Sales By Day of the Week')
plt.show()
###Output
_____no_output_____
###Markdown
Question 3: What time of the day has the best sales?
###Code
sales_by_time = sales_data[['Hour', 'Quantity Ordered', 'Sales']].groupby(['Hour']).sum()
sales_by_time['Sales in thousands'] = sales_by_time['Sales']/1000
sales_by_time
ax = sns.lineplot(x=sales_by_time.index, y='Sales in thousands', data=sales_by_time)
ax.set_ylabel('Sales (in thousands)')
plt.xticks(range(24))
plt.grid()
plt.title('Sales By Hour')
plt.show()
###Output
_____no_output_____
###Markdown
Question 4: What city had the best sales?
###Code
sales_by_city = sales_data[['City', 'Quantity Ordered', 'Sales']].groupby(['City']).sum()
sales_by_city['Sales in thousands'] = sales_by_city['Sales']/1000
sales_by_city = sales_by_city.sort_values(by=['Sales in thousands'], ascending = False)
sales_by_city
ax = sns.barplot(y=sales_by_city.index, x='Sales in thousands', data=sales_by_city)
ax.set_xlabel('Sales (in thousands)')
plt.title('Sales By City')
plt.show()
###Output
_____no_output_____
###Markdown
Question 5: Which product sold the most?
###Code
sales_by_product = sales_data[['Product', 'Quantity Ordered', 'Sales']].groupby(['Product']).sum()
sales_by_product['Quantity in thousands'] = sales_by_product['Quantity Ordered']/1000
sales_by_product['Sales in thousands'] = sales_by_product['Sales']/1000
sales_by_product['Price Each'] = sales_data.groupby('Product').mean()['Price Each']
sales_by_product = sales_by_product.sort_values(by=['Quantity Ordered'], ascending = False)
sales_by_product
ax = sns.barplot(x=sales_by_product.index, y='Quantity in thousands', data=sales_by_product)
ax.set_ylabel('Quantity Ordered (in thousands)')
ax.set_xticklabels(ax.get_xticklabels(),rotation = 90)
ax2=ax.twinx()
ax2 = sns.lineplot(x=sales_by_product.index, y='Price Each', data=sales_by_product)
ax2.set_ylabel('Price ($USD)')
plt.title('Sales By Product')
plt.show()
###Output
_____no_output_____
###Markdown
Question 6: What products are often sold together?
###Code
df = sales_data[sales_data['Order ID'].duplicated(keep=False)]
grouped_products = df.groupby('Order ID')['Product'].apply(', '.join).reset_index()
grouped_products = grouped_products.groupby(['Product']).size().reset_index(name='Count')
grouped_products = grouped_products.sort_values(by='Count', ascending = False)
grouped_products.head(10)
df = sales_data[sales_data['Order ID'].duplicated(keep=False)]
grouped_products2 = df.groupby('Order ID')['Product'].apply(', '.join).reset_index()
count = Counter()
for row in grouped_products2['Product']:
row_list = row.split(', ')
count.update(Counter(combinations(row_list, 2)))
for key,value in count.most_common(10):
print(key, value)
###Output
('iPhone', 'Lightning Charging Cable') 1005
('Google Phone', 'USB-C Charging Cable') 987
('iPhone', 'Wired Headphones') 447
('Google Phone', 'Wired Headphones') 414
('Vareebadd Phone', 'USB-C Charging Cable') 361
('iPhone', 'Apple Airpods Headphones') 360
('Google Phone', 'Bose SoundSport Headphones') 220
('USB-C Charging Cable', 'Wired Headphones') 160
('Vareebadd Phone', 'Wired Headphones') 143
('Lightning Charging Cable', 'Wired Headphones') 92
###Markdown
Sales Analysis Import necessary libraries
###Code
import os
import pandas as pd
###Output
_____no_output_____
###Markdown
Merge data from each month into one CSV
###Code
path = "./Sales_Data"
files = [file for file in os.listdir(path) if not file.startswith('.')] # Ignore hidden files
all_months_data = pd.DataFrame()
for file in files:
current_data = pd.read_csv(path+"/"+file)
all_months_data = pd.concat([all_months_data, current_data])
all_months_data.to_csv("all_data.csv", index=False)
###Output
_____no_output_____
###Markdown
Read in updated dataframe
###Code
all_data = pd.read_csv("all_data.csv")
all_data.head()
###Output
_____no_output_____
###Markdown
Clean up the data!The first step in this is figuring out what we need to clean. I have found in practice that you find things you need to clean as you perform operations and get errors. Based on the error, you decide how you should go about cleaning the data. Drop rows of NAN
###Code
# Find NAN
nan_df = all_data[all_data.isna().any(axis=1)]
display(nan_df.head())
all_data = all_data.dropna(how='all')
all_data.head()
###Output
_____no_output_____
###Markdown
Get rid of text in order date column
###Code
all_data = all_data[all_data['Order Date'].str[0:2]!='Or']
###Output
_____no_output_____
###Markdown
Make columns correct type
###Code
all_data['Quantity Ordered'] = pd.to_numeric(all_data['Quantity Ordered'])
all_data['Price Each'] = pd.to_numeric(all_data['Price Each'])
###Output
_____no_output_____
###Markdown
Augment data with additional columns Add month column
###Code
all_data['Month'] = all_data['Order Date'].str[0:2]
all_data['Month'] = all_data['Month'].astype('int32')
all_data.head()
###Output
_____no_output_____
###Markdown
Add month column (alternative method)
###Code
all_data['Month 2'] = pd.to_datetime(all_data['Order Date']).dt.month
all_data.head()
###Output
_____no_output_____
###Markdown
Add city column
###Code
def get_city(address):
return address.split(",")[1].strip(" ")
def get_state(address):
return address.split(",")[2].split(" ")[1]
all_data['City'] = all_data['Purchase Address'].apply(lambda x: f"{get_city(x)} ({get_state(x)})")
all_data.head()
###Output
_____no_output_____
###Markdown
Data Exploration! Question 1: What was the best month for sales? How much was earned that month?
###Code
all_data['Sales'] = all_data['Quantity Ordered'].astype('int') * all_data['Price Each'].astype('float')
all_data.groupby(['Month']).sum()
import matplotlib.pyplot as plt
months = range(1,13)
print(months)
plt.bar(months,all_data.groupby(['Month']).sum()['Sales'])
plt.xticks(months)
plt.ylabel('Sales in USD ($)')
plt.xlabel('Month number')
plt.show()
###Output
range(1, 13)
###Markdown
Question 2: What city sold the most product?
###Code
all_data.groupby(['City']).sum()
import matplotlib.pyplot as plt
keys = [city for city, df in all_data.groupby(['City'])]
plt.bar(keys,all_data.groupby(['City']).sum()['Sales'])
plt.ylabel('Sales in USD ($)')
plt.xlabel('City Name')
plt.xticks(keys, rotation='vertical', size=8)
plt.show()
###Output
_____no_output_____
###Markdown
Question 3: What time should we display advertisements to maximize likelihood of customer's buying product?
###Code
# Add hour column
all_data['Hour'] = pd.to_datetime(all_data['Order Date']).dt.hour
all_data['Minute'] = pd.to_datetime(all_data['Order Date']).dt.minute
all_data['Count'] = 1
all_data.head()
keys = [pair for pair, df in all_data.groupby(['Hour'])]
plt.plot(keys, all_data.groupby(['Hour']).count()['Count'])
plt.xticks(keys)
plt.grid()
plt.show()
# My recommendation is slightly before 11am or 7pm
###Output
_____no_output_____
###Markdown
Question 4: What products are most often sold together?
###Code
# https://stackoverflow.com/questions/43348194/pandas-select-rows-if-id-appear-several-time
df = all_data[all_data['Order ID'].duplicated(keep=False)].copy()  # .copy() avoids a SettingWithCopyWarning when adding the 'Grouped' column below
# Referenced: https://stackoverflow.com/questions/27298178/concatenate-strings-from-several-rows-using-pandas-groupby
df['Grouped'] = df.groupby('Order ID')['Product'].transform(lambda x: ','.join(x))
df2 = df[['Order ID', 'Grouped']].drop_duplicates()
# Referenced: https://stackoverflow.com/questions/52195887/counting-unique-pairs-of-numbers-into-a-python-dictionary
from itertools import combinations
from collections import Counter
count = Counter()
for row in df2['Grouped']:
row_list = row.split(',')
count.update(Counter(combinations(row_list, 2)))
for key,value in count.most_common(10):
print(key, value)
###Output
('iPhone', 'Lightning Charging Cable') 1005
('Google Phone', 'USB-C Charging Cable') 987
('iPhone', 'Wired Headphones') 447
('Google Phone', 'Wired Headphones') 414
('Vareebadd Phone', 'USB-C Charging Cable') 361
('iPhone', 'Apple Airpods Headphones') 360
('Google Phone', 'Bose SoundSport Headphones') 220
('USB-C Charging Cable', 'Wired Headphones') 160
('Vareebadd Phone', 'Wired Headphones') 143
('Lightning Charging Cable', 'Wired Headphones') 92
###Markdown
What product sold the most? Why do you think it sold the most?
###Code
product_group = all_data.groupby('Product')
quantity_ordered = product_group.sum()['Quantity Ordered']
keys = [pair for pair, df in product_group]
plt.bar(keys, quantity_ordered)
plt.xticks(keys, rotation='vertical', size=8)
plt.show()
# Referenced: https://stackoverflow.com/questions/14762181/adding-a-y-axis-label-to-secondary-y-axis-in-matplotlib
prices = all_data.groupby('Product').mean()['Price Each']
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.bar(keys, quantity_ordered, color='g')
ax2.plot(keys, prices, color='b')
ax1.set_xlabel('Product Name')
ax1.set_ylabel('Quantity Ordered', color='g')
ax2.set_ylabel('Price ($)', color='b')
ax1.set_xticklabels(keys, rotation='vertical', size=8)
fig.show()
###Output
C:\Users\DELL\Anaconda3\lib\site-packages\ipykernel_launcher.py:16: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
app.launch_new_instance()
|
Notebooks/quantopian_research_public/tutorials/pipeline/pipeline_tutorial_lesson_2.ipynb | ###Markdown
Creating a PipelineIn this lesson, we will take a look at creating an empty pipeline. First, let's import the Pipeline class:
###Code
from quantopian.pipeline import Pipeline
###Output
_____no_output_____
###Markdown
In a new cell, let's define a function to create our pipeline. Wrapping our pipeline creation in a function sets up a structure for more complex pipelines that we will see later on. For now, this function simply returns an empty pipeline:
###Code
def make_pipeline():
return Pipeline()
###Output
_____no_output_____
###Markdown
In a new cell, let's instantiate our pipeline by running `make_pipeline()`:
###Code
my_pipe = make_pipeline()
###Output
_____no_output_____
###Markdown
Running a PipelineNow that we have a reference to an empty Pipeline, `my_pipe`, let's run it to see what it looks like. Before running our pipeline, we first need to import `run_pipeline`, a research-only function that allows us to run a pipeline over a specified time period.
###Code
from quantopian.research import run_pipeline
###Output
_____no_output_____
###Markdown
Let's run our pipeline for one day (2015-05-05) with `run_pipeline` and display it. Note that the 2nd and 3rd arguments are the start and end dates of the simulation, respectively.
###Code
result = run_pipeline(my_pipe, '2015-05-05', '2015-05-05')
###Output
_____no_output_____
###Markdown
A call to `run_pipeline` returns a [pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) indexed by date and securities. Let's see what the empty pipeline looks like:
###Code
result
###Output
_____no_output_____ |
17-assignment2/BatchNormalization.ipynb | ###Markdown
Batch NormalizationOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization, which was recently proposed by [3].The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015.
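For intuition, here is a rough sketch of the train-time forward pass described above. It is a minimal sketch only: the function name and the explicit `running_mean`/`running_var` arguments are assumptions for illustration, not the graded `batchnorm_forward` API, which must also cache intermediates for the backward pass.
###Code
import numpy as np

def batchnorm_train_sketch(x, gamma, beta, running_mean, running_var,
                           momentum=0.9, eps=1e-5):
    """Minimal train-time batch normalization sketch; x has shape (N, D)."""
    mu = x.mean(axis=0)                    # per-feature minibatch mean
    var = x.var(axis=0)                    # per-feature minibatch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # center and normalize each feature
    out = gamma * x_hat + beta             # learnable scale and shift
    # Keep running averages of the statistics for use at test time
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var
###Output
_____no_output_____
###Markdown
With that picture in mind, start with the usual setup: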
###Code
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
###Output
_____no_output_____
###Markdown
Batch normalization: ForwardIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.
###Code
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
###Output
_____no_output_____
###Markdown
Batch Normalization: backwardNow implement the backward pass for batch normalization in the function `batchnorm_backward`.To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.Once you have finished, run the following to numerically check your backward pass.
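As a concrete anchor (standard results offered as a hint, not the full solution): with $\hat{x}_i = (x_i - \mu)/\sqrt{\sigma^2 + \epsilon}$ and $y_i = \gamma\hat{x}_i + \beta$, the parameter gradients are $\frac{\partial L}{\partial \gamma} = \sum_i \frac{\partial L}{\partial y_i}\,\hat{x}_i$ and $\frac{\partial L}{\partial \beta} = \sum_i \frac{\partial L}{\partial y_i}$; the $dx$ term is the part that requires backpropagating through $\mu$ and $\sigma^2$.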
###Code
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
###Output
_____no_output_____
###Markdown
Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
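For reference, one commonly quoted simplified form (a standard result to check your derivation against, not a substitute for it) is $\frac{\partial L}{\partial x_i} = \frac{\gamma}{N\sqrt{\sigma^2 + \epsilon}}\left(N\frac{\partial L}{\partial y_i} - \sum_j \frac{\partial L}{\partial y_j} - \hat{x}_i \sum_j \frac{\partial L}{\partial y_j}\,\hat{x}_j\right)$, where $N$ is the minibatch size and the sums run over the minibatch.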
###Code
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
###Output
_____no_output_____
###Markdown
Fully Connected Nets with Batch NormalizationNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. Modify your implementation to add batch normalization.Concretely, when the flag `use_batchnorm` is `True` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.HINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.
###Code
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
if reg == 0: print()
###Output
_____no_output_____
###Markdown
Batchnorm for deep networksRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
###Output
_____no_output_____
###Markdown
Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
###Code
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
###Output
_____no_output_____
###Markdown
Batch normalization and initializationWe will now run a small experiment to study the interaction of batch normalization and weight initialization.The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
###Code
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
###Output
_____no_output_____ |
notebooks/seismic/SeismicApplet.ipynb | ###Markdown
Seismic Applet
###Code
seismic_app()
###Output
_____no_output_____ |
.ipynb_checkpoints/writeup_images-checkpoint.ipynb | ###Markdown
Import Packages
###Code
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
###Output
_____no_output_____
###Markdown
Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson!
###Code
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
`vertices` should be a numpy array of integer points.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
###Output
_____no_output_____
###Markdown
Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.**
###Code
import os
test_images = os.listdir("test_images/")
def process_test_images(action):
fig=plt.figure(figsize=(16, 15))
for idx, img_name in enumerate(test_images):
frame = mpimg.imread("test_images/"+img_name)
masked = action(frame)
fig.add_subplot(3, 2, idx+1)
plt.imshow(masked, cmap='Greys_r')
def process_image(action):
fig=plt.figure(figsize=(16, 15))
image = mpimg.imread('test_images/solidYellowCurve.jpg')
actioned = action(image)
fig.add_subplot(1, 2, 1)
plt.imshow(image, cmap='Greys_r')
fig.add_subplot(1, 2, 2)
plt.imshow(actioned, cmap='Greys_r')
test_image_path = 'test_images/solidYellowCurve.jpg'
###Output
_____no_output_____
###Markdown
Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. Canny for HSV
###Code
def apply_3_canny(image):
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    edges_hue = canny(hsv[:, :, 0], 64, 255)   # hue channel
    edges_sat = canny(hsv[:, :, 1], 250, 255)  # saturation channel
    edges_val = canny(hsv[:, :, 2], 64, 126)   # value channel
edges_all = cv2.bitwise_and(edges_hue, edges_sat)
edges_all = cv2.bitwise_and(edges_all, edges_val)
return edges_all
def draw_3_canny(image):
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    edges_hue = canny(hsv[:, :, 0], 64, 255)   # hue channel
    edges_sat = canny(hsv[:, :, 1], 250, 255)  # saturation channel
    edges_val = canny(hsv[:, :, 2], 64, 126)   # value channel
fig=plt.figure(figsize=(20, 5))
canvas = np.zeros_like(image)
canvas[:, :, 0] = edges_hue
canvas[:, :, 1] = edges_sat
canvas[:, :, 2] = edges_val
fig.add_subplot(1, 3, 1)
plt.imshow(image, cmap='Greys_r')
fig.add_subplot(1, 3, 2)
plt.imshow(canvas, cmap='Greys_r')
edges_all = cv2.bitwise_and(edges_hue, edges_sat)
edges_all = cv2.bitwise_and(edges_all, edges_val)
fig.add_subplot(1, 3, 3)
plt.imshow(edges_all, cmap='Greys_r')
#return edges_all
image = mpimg.imread(test_image_path)
draw_3_canny(image)
###Output
_____no_output_____
###Markdown
Region of interest
###Code
def create_region(image):
reg_bottom_width = image.shape[1]
reg_top_width = 60
reg_height = 320
top_width_diff = (image.shape[1] - reg_top_width) /2
bottom_left = [0, image.shape[0]]
bottom_right = [image.shape[1], image.shape[0]]
top_left=[top_width_diff, reg_height]
top_right = [image.shape[1] - top_width_diff, reg_height]
pts = np.array([bottom_left,top_left,top_right,bottom_right], np.int32)
return [pts]
def apply_region(image):
vertices = create_region(image)
masked_region = region_of_interest(image, vertices)
return masked_region
image = mpimg.imread(test_image_path)
vertices = create_region(image)
fig=plt.figure(figsize=(20, 5))
fig.add_subplot(1, 2, 1)
canvas = np.repeat(apply_3_canny(image)[:, :, np.newaxis], 3, axis=2)
cv2.polylines(canvas, vertices, False, color=[255, 0, 0], thickness=2)
plt.imshow(canvas, cmap='Greys_r')
fig.add_subplot(1, 2, 2)
plt.imshow(apply_region(apply_3_canny(image)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Hough lines
###Code
def apply_hough(image):
lines = cv2.HoughLinesP(image, 2, 2*np.pi/180, 35, np.array([]), minLineLength=20, maxLineGap=20)
return lines
def draw_hough_lines(image, lines, color=[255, 0, 0]):
draw_lines(image, lines, color=color, thickness=2)
return image
image = mpimg.imread(test_image_path)
fig=plt.figure(figsize=(20, 5))
fig.add_subplot(1, 2, 1)
canvas = np.repeat(apply_3_canny(image)[:, :, np.newaxis], 3, axis=2)
edges = draw_hough_lines(canvas, apply_hough(apply_region(apply_3_canny(image))), color=[0, 255, 0])
plt.imshow(edges, cmap='Greys_r')
fig.add_subplot(1, 2, 2)
clean = draw_hough_lines(image, apply_hough(apply_region(apply_3_canny(image))), color=[0, 255, 0])
plt.imshow(clean, cmap='Greys_r')
#process_test_images(lambda image: draw_hough_lines(image, apply_hough(apply_region(apply_3_canny(image)))))
###Output
_____no_output_____
###Markdown
Extrapolated lane lines
###Code
def extrapolate(image, lines, color=[255, 0, 0]):
canvas = np.zeros_like(image[:, :, 0])
draw_hough_lines(canvas, lines)
line_points_row, line_points_col = np.nonzero(canvas)
if line_points_row.size==0:
return
extrapolated = np.polyfit(line_points_row, line_points_col, 1)
poly = np.poly1d(extrapolated)
height = image.shape[0]
low_x, top_x = (poly(height), poly(320))
cv2.line(image, (int(low_x), height), (int(top_x), 320), color, thickness=2)
def draw_hough_lines_ext(image, lines, color=[255, 0, 0]):
dx = lines[:, :, 2] - lines[:, :, 0]
dy = lines[:, :, 3] - lines[:, :, 1]
tan = dy/dx
right_candidates = tan > 0.5
left_candidates = tan < -0.5
right_lines = lines[right_candidates].reshape(-1, 1 ,4)
left_lines = lines[left_candidates].reshape(-1, 1 ,4)
extrapolate(image, right_lines, color)
extrapolate(image, left_lines, color)
return image
image = mpimg.imread(test_image_path)
fig=plt.figure(figsize=(20, 5))
fig.add_subplot(1, 2, 1)
canvas = np.repeat(apply_3_canny(image)[:, :, np.newaxis], 3, axis=2)
edges = draw_hough_lines(canvas, apply_hough(apply_region(apply_3_canny(image))), color=[0, 255, 0])
edges = draw_hough_lines_ext(edges, apply_hough(apply_region(apply_3_canny(image))), color=[255, 0, 0])
plt.imshow(edges, cmap='Greys_r')
fig.add_subplot(1, 2, 2)
clean = draw_hough_lines_ext(image, apply_hough(apply_region(apply_3_canny(image))))
plt.imshow(clean, cmap='Greys_r')
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
def process_pipeline(image):
result = image
#result = wy_mask(result)
result = apply_3_canny(result)
result = apply_region(result)
rgb = np.zeros_like(image)
rgb[:, :, 0] = result
h_lines = apply_hough(result)
#h_lines = filter_hough_lines(h_lines)
#draw_lines(rgb, h_lines, color=[0, 255, 0], thickness=1)
result = draw_hough_lines_ext(image, h_lines)
return result#rgb
###Output
_____no_output_____
###Markdown
Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = process_pipeline(image)
return result
###Output
_____no_output_____
###Markdown
Let's try the one with the solid white lane on the right first ...
###Code
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
#clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
###Output
[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4
[MoviePy] Writing video test_videos_output/solidWhiteRight.mp4
###Markdown
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
###Code
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
###Output
_____no_output_____
###Markdown
Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky!
###Code
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
#clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(14,15)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
###Output
_____no_output_____
###Markdown
Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
###Code
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
###Output
_____no_output_____ |
research/data_prep/Submission-Tag-Curation.ipynb | ###Markdown
df.head()
###Code
# drop if no text
indices_no_text = df[df.text == ''].index
df = df.drop(indices_no_text)
indices_no_text = df[df.text == '[removed]'].index
df = df.drop(indices_no_text)
indices_no_text = df[df.text == '[deleted]'].index
df = df.drop(indices_no_text)
df.head()
len(df)
# drop if no comments
indices_no_comments = df[df.comments.astype(bool) == False].index
df = df.drop(indices_no_comments)
df.head()
len(df)
# Try to determine the tags where we don't have a mapper
df_no_curated_tag = df[df.curated_tag == '']
df_no_curated_tag.head()
len(df_no_curated_tag)
for comment in df_no_curated_tag.iloc[900].comments:
print(comment)
print('\n***********************\n')
###Output
I agree that it shouldn't be the guy's responsibility to plan all dates. But if either partner tells the other that they *are* going to plan a specific date, then it is their obligation to do so.
You're the asshole for snapping at her and saying she should have done it herself. Why would she have done it herself after you literally told her you would take care of it?
As far as waiting until Friday morning to make the reservation - I think she probably overreacted, but I would be a little annoyed if I was in her shoes. Many nice restaurants book several days in advance, and some won't even accept same-day reservations. I'm sure you'd be able to find something, but it might not be the best option and you had plenty of time to plan ahead.
***********************
Yes, you are the asshole. The time to bring up that it’s not solely the guy’s responsibility to plan date nights was when she was hinting about the date in the first place, not after you got caught procrastinating.
***********************
This is a hard one.
I completely agree with you that the man should not have to do everything, but if you've agreed to do something then do it.
A lot of fancy places are full up by the actual day you want to go. She sounded as if she were worried and concerned and you didn't really register that. So in your reaction - yes it was a bit assholey but in your opinion about equality - no you aren't, but maybe you could practice a bit of diplomacy.
'babe I think it would be awesome if you took me out on a date for a change.' (Are you always expected to foot the bill?)
And if it's successful then, 'This was amazing. I love when you take care of me like this. I'd love to make this a regular thing!'
***********************
[deleted]
***********************
Sometimes is not what you say, but the way you say it that gets you a bad reaction.
***********************
###Markdown
Guess tagsUsing a simple regex voting system we should be able to try and tag most of these
###Code
import re
esh_regexes = [
r'\besh\b',
r'everyone sucks',
r'everyone[*a-z\' ,]*(shit|a[s*-]*hole)',
r'You[a-z\' ,]* all [*a-z\' ,]+a[s*-]*hole'
]
nah_regexes = [
r'\bnah\b',
r'no a[s*-]*hole\s here',
r'anyone[*a-z\' ,]*a[s*-]+hole'
]
nta_regexes = [
r'\bnta\b',
r'not[a-z*\' ,]*a[s*-]+hole'
]
yta_regexes = [
r'\byta\b',
r'you[a-z*\' ,]*a[s*-]+hole'
]
guessed_votes = []
still_no_tag = []
for i, row in tqdm(df_no_curated_tag.iterrows()):
esh_votes = 0
nah_votes = 0
nta_votes = 0
yta_votes = 0
for comment in row.comments:
for regex in esh_regexes:
if re.search(regex, comment, re.IGNORECASE):
esh_votes += 1
break
else:
for regex in nah_regexes:
if re.search(regex, comment, re.IGNORECASE):
nah_votes += 1
break
else:
for regex in nta_regexes:
if re.search(regex, comment, re.IGNORECASE):
nta_votes += 1
break
else:
for regex in yta_regexes:
if re.search(regex, comment, re.IGNORECASE):
yta_votes += 1
break
max_vote_count = max([esh_votes, nta_votes, nah_votes, yta_votes])
if max_vote_count == 0:
guessed_votes.append('')
still_no_tag.append(row.comments)
else:
# bias to lower classes
if esh_votes == max_vote_count:
guessed_votes.append('ESH')
elif nah_votes == max_vote_count:
guessed_votes.append('NAH')
elif yta_votes == max_vote_count:
guessed_votes.append('YTA')
else:
guessed_votes.append('NTA')
guessed_series = pd.Series(guessed_votes)
guessed_series.value_counts().plot(kind='bar')
re.search(r'everyone[a-z\' ,]+(shit|a[s*-]*hole)', 'everyone is an ahole here', re.IGNORECASE)
still_no_tag[18]
df_no_curated_tag['regex_tag'] = guessed_series
df_no_curated_tag.head()
for i, row in df_no_curated_tag.iterrows():
if row.regex_tag:
        df.loc[i, 'curated_tag'] = row.regex_tag  # single .loc assignment so the write actually lands in df (chained assignment may not)
df.loc[104]
df.curated_tag.value_counts().plot(kind='bar')
indices_drop_tag = df[df.curated_tag == ''].index
df = df.drop(indices_drop_tag)
indices_drop_tag = df[df.curated_tag == 'TROLL'].index
df = df.drop(indices_drop_tag)
indices_drop_tag = df[df.curated_tag == 'META'].index
df = df.drop(indices_drop_tag)
df.head()
len(df)
df.curated_tag.value_counts().plot(kind='bar')
df.to_csv('../data/curated-submission-tag-2016-2020.csv', index=False)
df.iloc[19173]
###Output
_____no_output_____ |
src/graficas_ejercicios/ejercicio_8.ipynb | ###Markdown
Problem 8
###Code
import matplotlib.pyplot as plt  # required import, missing from the original cell

# Assumed helper (hypothetical; not defined anywhere in this notebook):
# slope of the line through two points
def m(x0, x1, y0, y1):
    return (y1 - y0) / (x1 - x0)

p1_x = [0,-2,-2,-2,2]
p1_y = [2,2,2,-2,2]
p2_x = [0,2,2]
p2_y = [-2,-2,0]
x = [0.5,0]
y = [0,-1]
plt.plot(p1_x,p1_y, linestyle="none", marker="o", color="red", label="P1")
plt.plot(p2_x,p2_y, linestyle="none", marker="*", color="green", label="P2")
plt.plot(x,y, marker="p")
plt.title(" PUNTO 8")
plt.grid()
plt.show()
m(x[0],x[1],y[0],y[1])
def funcion_y(x):
return 2*x-1
###Output
_____no_output_____ |
program/6_5_A2C_Advanced_ActorCritic.ipynb | ###Markdown
6.5 A2C (Advantage Actor-Critic) with PyTorch
###Code
# Import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import gym
# Set constants
ENV = 'CartPole-v0'  # Name of the task to use
GAMMA = 0.99  # Discount factor
MAX_STEPS = 200  # Number of steps per episode
NUM_EPISODES = 1000  # Maximum number of episodes
NUM_PROCESSES = 32  # Number of environments run simultaneously
NUM_ADVANCED_STEP = 5  # Number of steps to advance when computing the reward sum
# Constants for computing the A2C loss function
value_loss_coef = 0.5
entropy_coef = 0.01
max_grad_norm = 0.5
# Define the memory class
class RolloutStorage(object):
    '''Memory class for Advantage learning'''

    def __init__(self, num_steps, num_processes, obs_shape):
        self.observations = torch.zeros(num_steps + 1, num_processes, 4)
        self.masks = torch.ones(num_steps + 1, num_processes, 1)
        self.rewards = torch.zeros(num_steps, num_processes, 1)
        self.actions = torch.zeros(num_steps, num_processes, 1).long()

        # Stores the discounted reward sums
        self.returns = torch.zeros(num_steps + 1, num_processes, 1)
        self.index = 0  # Index to insert at

    def insert(self, current_obs, action, reward, mask):
        '''Store a transition at the next index'''
        self.observations[self.index + 1].copy_(current_obs)
        self.masks[self.index + 1].copy_(mask)
        self.rewards[self.index].copy_(reward)
        self.actions[self.index].copy_(action)

        self.index = (self.index + 1) % NUM_ADVANCED_STEP  # Update the index

    def after_update(self):
        '''Once the advantage steps are done, store the latest entries at index 0'''
        self.observations[0].copy_(self.observations[-1])
        self.masks[0].copy_(self.masks[-1])

    def compute_returns(self, next_value):
        '''Compute the discounted reward sum for each step of the advantage rollout'''

        # Note: the computation runs backwards from step 5
        # Note: step 5 gives Advantage1, step 4 gives Advantage2, and so on
        self.returns[-1] = next_value
        for ad_step in reversed(range(self.rewards.size(0))):
            self.returns[ad_step] = self.returns[ad_step + 1] * \
                GAMMA * self.masks[ad_step + 1] + self.rewards[ad_step]
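# Hedged sanity check (not in the original text): with one environment,
# reward 1.0 at every step, and next_value = 0, the discounted returns
# should be roughly [4.90, 3.94, 2.97, 1.99, 1.00, 0.00] for GAMMA = 0.99.
import torch
_rs = RolloutStorage(NUM_ADVANCED_STEP, 1, 4)
_rs.rewards.fill_(1.0)
_rs.compute_returns(torch.zeros(1, 1))
print(_rs.returns.squeeze())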
# Build the A2C deep neural network
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self, n_in, n_mid, n_out):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(n_in, n_mid)
        self.fc2 = nn.Linear(n_mid, n_mid)
        self.actor = nn.Linear(n_mid, n_out)  # Chooses actions, so the output size is the number of actions
        self.critic = nn.Linear(n_mid, 1)  # Outputs the state value, so the output size is 1

    def forward(self, x):
        '''Define the network's forward computation'''
        h1 = F.relu(self.fc1(x))
        h2 = F.relu(self.fc2(h1))
        critic_output = self.critic(h2)  # Compute the state value
        actor_output = self.actor(h2)  # Compute the action logits

        return critic_output, actor_output

    def act(self, x):
        '''Sample an action stochastically from state x'''
        value, actor_output = self(x)
        # Softmax over the action dimension (dim=1)
        action_probs = F.softmax(actor_output, dim=1)
        action = action_probs.multinomial(num_samples=1)  # Sample from the action probabilities
        return action

    def get_value(self, x):
        '''Compute the state value from state x'''
        value, actor_output = self(x)

        return value

    def evaluate_actions(self, x, actions):
        '''From state x, compute the state value, the log-probabilities of the taken actions, and the entropy'''
        value, actor_output = self(x)

        log_probs = F.log_softmax(actor_output, dim=1)  # Computed over the action dimension (dim=1)
        action_log_probs = log_probs.gather(1, actions)  # Gather the log_probs of the actions actually taken

        probs = F.softmax(actor_output, dim=1)  # Computed over the action dimension (dim=1)
        entropy = -(log_probs * probs).sum(-1).mean()

        return value, action_log_probs, entropy
# Define the Brain class that acts as the agents' shared brain
import torch
from torch import optim


class Brain(object):
    def __init__(self, actor_critic):
        self.actor_critic = actor_critic  # actor_critic is a deep neural network of class Net
        self.optimizer = optim.Adam(self.actor_critic.parameters(), lr=0.01)

    def update(self, rollouts):
        '''Update using all 5 steps computed with Advantage'''
        obs_shape = rollouts.observations.size()[2:]  # torch.Size([4])
        num_steps = NUM_ADVANCED_STEP
        num_processes = NUM_PROCESSES

        values, action_log_probs, entropy = self.actor_critic.evaluate_actions(
            rollouts.observations[:-1].view(-1, 4),
            rollouts.actions.view(-1, 1))

        # Note: sizes of each variable (with NUM_PROCESSES = 32)
        # rollouts.observations[:-1].view(-1, 4) torch.Size([160, 4])
        # rollouts.actions.view(-1, 1) torch.Size([160, 1])
        # values torch.Size([160, 1])
        # action_log_probs torch.Size([160, 1])
        # entropy torch.Size([])

        values = values.view(num_steps, num_processes,
                             1)  # torch.Size([5, 32, 1])
        action_log_probs = action_log_probs.view(num_steps, num_processes, 1)

        # Compute the advantage (action value - state value)
        advantages = rollouts.returns[:-1] - values  # torch.Size([5, 32, 1])

        # Compute the critic loss
        value_loss = advantages.pow(2).mean()

        # Compute the actor gain; it is negated later to become a loss
        action_gain = (action_log_probs*advantages.detach()).mean()
        # Detach so the advantages are treated as constants

        # Total loss
        total_loss = (value_loss * value_loss_coef -
                      action_gain - entropy * entropy_coef)

        # Update the network parameters
        self.actor_critic.train()  # Switch to training mode
        self.optimizer.zero_grad()  # Reset the gradients
        total_loss.backward()  # Backpropagate
        nn.utils.clip_grad_norm_(self.actor_critic.parameters(), max_grad_norm)
        # Clip the gradient norm at 0.5 so the parameters don't change too much at once

        self.optimizer.step()  # Update the parameters
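# In symbols, the total objective minimized above is
#   L = c_v * E[(R - V(s))^2] - E[log pi(a|s) * A] - c_e * H[pi],
# where A = R - V(s) is detached, c_v = value_loss_coef, and
# c_e = entropy_coef; minimizing -E[log pi * A] is the policy-gradient step.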
# No Agent class this time

# Class for the environments to run
import copy


class Environment:
    def run(self):
        '''Main execution'''

        # Create one env per simultaneously running environment
        envs = [gym.make(ENV) for i in range(NUM_PROCESSES)]

        # Create the Brain shared by all agents
        n_in = envs[0].observation_space.shape[0]  # The state has 4 dimensions
        n_out = envs[0].action_space.n  # There are 2 actions
        n_mid = 32
        actor_critic = Net(n_in, n_mid, n_out)  # Build the deep neural network
        global_brain = Brain(actor_critic)

        # Create the storage variables
        obs_shape = n_in
        current_obs = torch.zeros(
            NUM_PROCESSES, obs_shape)  # torch.Size([32, 4])
        rollouts = RolloutStorage(
            NUM_ADVANCED_STEP, NUM_PROCESSES, obs_shape)  # The rollouts object
        episode_rewards = torch.zeros([NUM_PROCESSES, 1])  # Holds the rewards of the current episodes
        final_rewards = torch.zeros([NUM_PROCESSES, 1])  # Holds the rewards of the last finished episodes
        obs_np = np.zeros([NUM_PROCESSES, obs_shape])  # NumPy array
        reward_np = np.zeros([NUM_PROCESSES, 1])  # NumPy array
        done_np = np.zeros([NUM_PROCESSES, 1])  # NumPy array
        each_step = np.zeros(NUM_PROCESSES)  # Records the step count of each environment
        episode = 0  # Episode count of environment 0

        # Start from the initial states
        obs = [envs[i].reset() for i in range(NUM_PROCESSES)]
        obs = np.array(obs)
        obs = torch.from_numpy(obs).float()  # torch.Size([32, 4])
        current_obs = obs  # Store the latest obs

        # Save the current states into the first slot of the rollouts object used for advanced learning
        rollouts.observations[0].copy_(current_obs)

        # Execution loop
        for j in range(NUM_EPISODES*NUM_PROCESSES):  # Overall for loop
            # Compute one chunk of advanced-learning steps at a time
            for step in range(NUM_ADVANCED_STEP):

                # Get the actions
                with torch.no_grad():
                    action = actor_critic.act(rollouts.observations[step])

                # (32,1) -> (32,), and convert the tensor to NumPy
                actions = action.squeeze(1).numpy()

                # Run one step
                for i in range(NUM_PROCESSES):
                    obs_np[i], reward_np[i], done_np[i], _ = envs[i].step(
                        actions[i])

                    # Check whether the episode ended, and set state_next
                    if done_np[i]:  # done becomes true after 200 steps or when the pole tilts past a threshold

                        # Print only for environment 0
                        if i == 0:
                            print('%d Episode: Finished after %d steps' % (
                                episode, each_step[i]+1))
                            episode += 1

                        # Set the reward
                        if each_step[i] < 195:
                            reward_np[i] = -1.0  # Reward of -1 as a penalty for falling over mid-episode
                        else:
                            reward_np[i] = 1.0  # Reward of 1 for staying upright to the end

                        each_step[i] = 0  # Reset the step count
                        obs_np[i] = envs[i].reset()  # Reset the environment

                    else:
                        reward_np[i] = 0.0  # Reward is 0 otherwise
                        each_step[i] += 1

                # Convert the rewards to a tensor and add them to the episode totals
                reward = torch.from_numpy(reward_np).float()
                episode_rewards += reward

                # For each environment, the mask is 0 if done and 1 if still running
                masks = torch.FloatTensor(
                    [[0.0] if done_ else [1.0] for done_ in done_np])

                # Update the total rewards of the last finished episodes
                final_rewards *= masks  # Multiply by 1 (keep) while running; multiply by 0 (reset) when done
                # Add 0 while running; add episode_rewards when done
                final_rewards += (1 - masks) * episode_rewards

                # Update the episode reward totals
                episode_rewards *= masks  # Keep while running (mask 1); reset to 0 when done

                # Zero out the current states when done
                current_obs *= masks

                # Update current_obs
                obs = torch.from_numpy(obs_np).float()  # torch.Size([32, 4])
                current_obs = obs  # Store the latest obs

                # Insert this step's transition into the memory object
                rollouts.insert(current_obs, action.data, reward, masks)

            # End of the advanced for loop

            # Compute the state value predicted from the state at the final advanced step
            with torch.no_grad():
                next_value = actor_critic.get_value(
                    rollouts.observations[-1]).detach()
                # rollouts.observations has size torch.Size([6, 32, 4])

            # Compute the discounted reward sums for all steps and update rollouts.returns
            rollouts.compute_returns(next_value)

            # Update the network and the rollout
            global_brain.update(rollouts)
            rollouts.after_update()

            # Success once all NUM_PROCESSES environments have kept running for 200 steps
            if final_rewards.sum().numpy() >= NUM_PROCESSES:
                print('Consecutive success')
                break
# Main training
cartpole_env = Environment()
cartpole_env.run()
###Output
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
(warning repeated once per each of the 32 parallel environments)
0 Episode: Finished after 11 steps
1 Episode: Finished after 11 steps
2 Episode: Finished after 12 steps
3 Episode: Finished after 41 steps
4 Episode: Finished after 23 steps
5 Episode: Finished after 15 steps
6 Episode: Finished after 11 steps
7 Episode: Finished after 10 steps
8 Episode: Finished after 16 steps
9 Episode: Finished after 15 steps
10 Episode: Finished after 24 steps
11 Episode: Finished after 30 steps
12 Episode: Finished after 23 steps
13 Episode: Finished after 32 steps
14 Episode: Finished after 27 steps
15 Episode: Finished after 66 steps
16 Episode: Finished after 29 steps
17 Episode: Finished after 16 steps
18 Episode: Finished after 20 steps
19 Episode: Finished after 17 steps
20 Episode: Finished after 28 steps
21 Episode: Finished after 19 steps
22 Episode: Finished after 98 steps
23 Episode: Finished after 58 steps
24 Episode: Finished after 46 steps
25 Episode: Finished after 132 steps
26 Episode: Finished after 99 steps
27 Episode: Finished after 200 steps
28 Episode: Finished after 16 steps
29 Episode: Finished after 68 steps
30 Episode: Finished after 17 steps
31 Episode: Finished after 200 steps
32 Episode: Finished after 200 steps
Consecutive success
|
hw3/cs285/scripts/run_hw3_dqn.ipynb | ###Markdown
SetupYou will need to make a copy of this notebook in your Google Drive before you can edit the homework files. You can do so with **File → Save a copy in Drive**.
###Code
#@title mount your Google Drive
#@markdown Your work will be stored in a folder called `cs285_f2020` by default to prevent Colab instance timeouts from deleting your edits.
import os
from google.colab import drive
drive.mount('/content/gdrive')
#@title set up mount symlink
DRIVE_PATH = '/content/gdrive/My\ Drive/cs285_f2020'
DRIVE_PYTHON_PATH = DRIVE_PATH.replace('\\', '')
if not os.path.exists(DRIVE_PYTHON_PATH):
%mkdir $DRIVE_PATH
## the space in `My Drive` causes some issues,
## make a symlink to avoid this
SYM_PATH = '/content/cs285_f2020'
if not os.path.exists(SYM_PATH):
!ln -s $DRIVE_PATH $SYM_PATH
#@title apt install requirements
#@markdown Run each section with Shift+Enter
#@markdown Double-click on section headers to show code.
!apt update
!apt install -y --no-install-recommends \
build-essential \
curl \
git \
gnupg2 \
make \
cmake \
ffmpeg \
swig \
libz-dev \
unzip \
zlib1g-dev \
libglfw3 \
libglfw3-dev \
libxrandr2 \
libxinerama-dev \
libxi6 \
libxcursor-dev \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
lsb-release \
ack-grep \
patchelf \
wget \
xpra \
xserver-xorg-dev \
xvfb \
python-opengl \
ffmpeg > /dev/null 2>&1
!pip install opencv-python==3.4.0.12
#@title download mujoco
MJC_PATH = '{}/mujoco'.format(SYM_PATH)
if not os.path.exists(MJC_PATH):
%mkdir $MJC_PATH
%cd $MJC_PATH
if not os.path.exists(os.path.join(MJC_PATH, 'mujoco200')):
!wget -q https://www.roboti.us/download/mujoco200_linux.zip
!unzip -q mujoco200_linux.zip
%mv mujoco200_linux mujoco200
%rm mujoco200_linux.zip
#@title update mujoco paths
import os
os.environ['LD_LIBRARY_PATH'] += ':{}/mujoco200/bin'.format(MJC_PATH)
os.environ['MUJOCO_PY_MUJOCO_PATH'] = '{}/mujoco200'.format(MJC_PATH)
os.environ['MUJOCO_PY_MJKEY_PATH'] = '{}/mjkey.txt'.format(MJC_PATH)
## installation on colab does not find *.so files
## in LD_LIBRARY_PATH, copy over manually instead
!cp $MJC_PATH/mujoco200/bin/*.so /usr/lib/x86_64-linux-gnu/
###Output
_____no_output_____
###Markdown
Ensure your `mjkey.txt` is in /content/cs285_f2020/mujoco before this step
###Code
#@title clone and install mujoco-py
%cd $MJC_PATH
if not os.path.exists('mujoco-py'):
!git clone https://github.com/openai/mujoco-py.git
%cd mujoco-py
%pip install -e .
## cythonize at the first import
import mujoco_py
#@title clone homework repo
#@markdown Note that this is the same codebase from homework 1,
#@markdown so you may need to move your old `homework_fall2020`
#@markdown folder in order to clone the repo again.
#@markdown **Don't delete your old work though!**
#@markdown You will need it for this assignment.
%cd $SYM_PATH
!git clone https://github.com/berkeleydeeprlcourse/homework_fall2020.git
%cd homework_fall2020/hw3
%pip install -r requirements_colab.txt -f https://download.pytorch.org/whl/torch_stable.html
%pip install -e .
#@title set up virtual display
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
# For later
from cs285.infrastructure.colab_utils import (
wrap_env,
show_video
)
#@title test virtual display
#@markdown If you see a video of a four-legged ant fumbling about, setup is complete!
import gym
import matplotlib
matplotlib.use('Agg')
env = wrap_env(gym.make("Ant-v2"))
observation = env.reset()
for i in range(10):
env.render(mode='rgb_array')
obs, rew, term, _ = env.step(env.action_space.sample() )
if term:
break;
env.close()
print('Loading video...')
show_video()
###Output
_____no_output_____
###Markdown
Editing CodeTo edit code, click the folder icon on the left menu. Navigate to the corresponding file (`cs285_f2020/...`). Double click a file to open an editor. There is a timeout of about 12 hours with Colab while it is active (and less if you close your browser window). We sync your edits to Google Drive so that you won't lose your work in the event of an instance timeout, but you will need to re-mount your Google Drive and re-install packages with every new instance. Run DQN and Double DQN
###Code
#@title imports
import os
import time
from cs285.infrastructure.rl_trainer import RL_Trainer
from cs285.agents.dqn_agent import DQNAgent
from cs285.infrastructure.dqn_utils import get_env_kwargs
%load_ext autoreload
%autoreload 2
#@title runtime arguments
class Args:
def __getitem__(self, key):
return getattr(self, key)
def __setitem__(self, key, val):
setattr(self, key, val)
def __contains__(self, key):
return hasattr(self, key)
env_name = 'MsPacman-v0' #@param ['MsPacman-v0', 'LunarLander-v3', 'PongNoFrameSkip-v4']
exp_name = 'q3_dqn' #@param
## PDF will tell you how to set ep_len
## and discount for each environment
ep_len = 200 #@param {type: "integer"}
#@markdown batches and steps
batch_size = 32 #@param {type: "integer"}
eval_batch_size = 1000 #@param {type: "integer"}
num_agent_train_steps_per_iter = 1 #@param {type: "integer"}
num_critic_updates_per_agent_update = 1 #@param {type: "integer"}
#@markdown Q-learning parameters
double_q = False #@param {type: "boolean"}
#@markdown system
save_params = False #@param {type: "boolean"}
no_gpu = False #@param {type: "boolean"}
which_gpu = 0 #@param {type: "integer"}
seed = 1 #@param {type: "integer"}
#@markdown logging
## default is to not log video so
## that logs are small enough to be
## uploaded to gradscope
video_log_freq = -1 #@param {type: "integer"}
scalar_log_freq = 10000#@param {type: "integer"}
args = Args()
## ensure compatibility with hw1 code
args['train_batch_size'] = args['batch_size']
if args['video_log_freq'] > 0:
import warnings
warnings.warn(
'''\nLogging videos will make eventfiles too'''
'''\nlarge for the autograder. Set video_log_freq = -1'''
'''\nfor the runs you intend to submit.''')
#@title create directories for logging
data_path = '''/content/cs285_f2020/''' \
'''homework_fall2020/hw3/data'''
if not (os.path.exists(data_path)):
os.makedirs(data_path)
logdir = 'hw3_' + args.exp_name + '_' + args.env_name + '_' + time.strftime("%d-%m-%Y_%H-%M-%S")
logdir = os.path.join(data_path, logdir)
args['logdir'] = logdir
if not(os.path.exists(logdir)):
os.makedirs(logdir)
print("LOGGING TO: ", logdir)
#@title Define Q-function trainer
class Q_Trainer(object):
def __init__(self, params):
self.params = params
train_args = {
'num_agent_train_steps_per_iter': params['num_agent_train_steps_per_iter'],
'num_critic_updates_per_agent_update': params['num_critic_updates_per_agent_update'],
'train_batch_size': params['batch_size'],
'double_q': params['double_q'],
}
env_args = get_env_kwargs(params['env_name'])
for k, v in env_args.items():
params[k] = v
self.params['agent_class'] = DQNAgent
self.params['agent_params'] = params
self.params['train_batch_size'] = params['batch_size']
self.params['env_wrappers'] = env_args['env_wrappers']
self.rl_trainer = RL_Trainer(self.params)
def run_training_loop(self):
self.rl_trainer.run_training_loop(
self.params['num_timesteps'],
collect_policy = self.rl_trainer.agent.actor,
eval_policy = self.rl_trainer.agent.actor,
)
#@title run training
trainer = Q_Trainer(args)
trainer.run_training_loop()
#@markdown You can visualize your runs with tensorboard from within the notebook
## requires tensorflow==2.3.0
%load_ext tensorboard
%tensorboard --logdir /content/cs285_f2020/homework_fall2020/hw3/data/
###Output
_____no_output_____
###Markdown
SetupYou will need to make a copy of this notebook in your Google Drive before you can edit the homework files. You can do so with **File → Save a copy in Drive**.
###Code
#@title mount your Google Drive
#@markdown Your work will be stored in a folder called `cs285_f2021` by default to prevent Colab instance timeouts from deleting your edits.
import os
from google.colab import drive
drive.mount('/content/gdrive')
#@title set up mount symlink
DRIVE_PATH = '/content/gdrive/My\ Drive/cs285_f2021'
DRIVE_PYTHON_PATH = DRIVE_PATH.replace('\\', '')
if not os.path.exists(DRIVE_PYTHON_PATH):
%mkdir $DRIVE_PATH
## the space in `My Drive` causes some issues,
## make a symlink to avoid this
SYM_PATH = '/content/cs285_f2021'
if not os.path.exists(SYM_PATH):
!ln -s $DRIVE_PATH $SYM_PATH
#@title apt install requirements
#@markdown Run each section with Shift+Enter
#@markdown Double-click on section headers to show code.
!apt update
!apt install -y --no-install-recommends \
build-essential \
curl \
git \
gnupg2 \
make \
cmake \
ffmpeg \
swig \
libz-dev \
unzip \
zlib1g-dev \
libglfw3 \
libglfw3-dev \
libxrandr2 \
libxinerama-dev \
libxi6 \
libxcursor-dev \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
lsb-release \
ack-grep \
patchelf \
wget \
xpra \
xserver-xorg-dev \
xvfb \
python-opengl \
ffmpeg > /dev/null 2>&1
!pip install opencv-python==4.4.0.42
#@title download mujoco
MJC_PATH = '{}/mujoco'.format(SYM_PATH)
if not os.path.exists(MJC_PATH):
%mkdir $MJC_PATH
%cd $MJC_PATH
if not os.path.exists(os.path.join(MJC_PATH, 'mujoco200')):
!wget -q https://www.roboti.us/download/mujoco200_linux.zip
!unzip -q mujoco200_linux.zip
%mv mujoco200_linux mujoco200
%rm mujoco200_linux.zip
#@title update mujoco paths
import os
os.environ['LD_LIBRARY_PATH'] += ':{}/mujoco200/bin'.format(MJC_PATH)
os.environ['MUJOCO_PY_MUJOCO_PATH'] = '{}/mujoco200'.format(MJC_PATH)
os.environ['MUJOCO_PY_MJKEY_PATH'] = '{}/mjkey.txt'.format(MJC_PATH)
## installation on colab does not find *.so files
## in LD_LIBRARY_PATH, copy over manually instead
!cp $MJC_PATH/mujoco200/bin/*.so /usr/lib/x86_64-linux-gnu/
###Output
_____no_output_____
###Markdown
Ensure your `mjkey.txt` is in /content/cs285_f2021/mujoco before this step
###Code
#@title clone and install mujoco-py
%cd $MJC_PATH
if not os.path.exists('mujoco-py'):
!git clone https://github.com/openai/mujoco-py.git
%cd mujoco-py
%pip install -e .
## cythonize at the first import
import mujoco_py
#@title clone homework repo
#@markdown Note that this is the same codebase from homework 1,
#@markdown so you may need to move your old `homework_fall2021`
#@markdown folder in order to clone the repo again.
#@markdown **Don't delete your old work though!**
#@markdown You will need it for this assignment.
%cd $SYM_PATH
!git clone https://github.com/berkeleydeeprlcourse/homework_fall2021.git
%cd homework_fall2021/hw3
%pip install -r requirements_colab.txt -f https://download.pytorch.org/whl/torch_stable.html
%pip install -e .
#@title set up the Ms. Pacman environment
import urllib.request
urllib.request.urlretrieve('http://www.atarimania.com/roms/Roms.rar','Roms.rar')
!pip install unrar
!unrar x Roms.rar
!mkdir rars
!mv HC\ ROMS.zip rars
!mv ROMS.zip rars
!python -m atari_py.import_roms rars
#@title set up virtual display
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
# For later
from cs285.infrastructure.colab_utils import (
wrap_env,
show_video
)
#@title test virtual display
#@markdown If you see a video of a four-legged ant fumbling about, setup is complete!
import gym
import matplotlib
matplotlib.use('Agg')
env = wrap_env(gym.make("Ant-v2"))
observation = env.reset()
for i in range(10):
env.render(mode='rgb_array')
obs, rew, term, _ = env.step(env.action_space.sample() )
if term:
break;
env.close()
print('Loading video...')
show_video()
###Output
_____no_output_____
###Markdown
Editing CodeTo edit code, click the folder icon on the left menu. Navigate to the corresponding file (`cs285_f2021/...`). Double click a file to open an editor. There is a timeout of about 12 hours with Colab while it is active (and less if you close your browser window). We sync your edits to Google Drive so that you won't lose your work in the event of an instance timeout, but you will need to re-mount your Google Drive and re-install packages with every new instance. Run DQN and Double DQN
###Code
#@title imports
import os
import time
from cs285.infrastructure.rl_trainer import RL_Trainer
from cs285.agents.dqn_agent import DQNAgent
from cs285.infrastructure.dqn_utils import get_env_kwargs
#@title runtime arguments
class Args:
def __getitem__(self, key):
return getattr(self, key)
def __setitem__(self, key, val):
setattr(self, key, val)
def __contains__(self, key):
return hasattr(self, key)
env_name = 'MsPacman-v0' #@param ['MsPacman-v0', 'LunarLander-v3', 'PongNoFrameSkip-v4']
exp_name = 'q3_dqn' #@param
## PDF will tell you how to set ep_len
## and discount for each environment
ep_len = 200 #@param {type: "integer"}
#@markdown batches and steps
batch_size = 32 #@param {type: "integer"}
eval_batch_size = 1000 #@param {type: "integer"}
num_agent_train_steps_per_iter = 1 #@param {type: "integer"}
num_critic_updates_per_agent_update = 1 #@param {type: "integer"}
#@markdown Q-learning parameters
double_q = False #@param {type: "boolean"}
#@markdown system
save_params = False #@param {type: "boolean"}
no_gpu = False #@param {type: "boolean"}
which_gpu = 0 #@param {type: "integer"}
seed = 1 #@param {type: "integer"}
#@markdown logging
## default is to not log video so
## that logs are small enough to be
## uploaded to gradscope
video_log_freq = -1 #@param {type: "integer"}
scalar_log_freq = 10000#@param {type: "integer"}
args = Args()
## ensure compatibility with hw1 code
args['train_batch_size'] = args['batch_size']
if args['video_log_freq'] > 0:
import warnings
warnings.warn(
'''\nLogging videos will make eventfiles too'''
'''\nlarge for the autograder. Set video_log_freq = -1'''
'''\nfor the runs you intend to submit.''')
#@title create directories for logging
data_path = '''/content/cs285_f2021/''' \
'''homework_fall2021/hw3/data'''
if not (os.path.exists(data_path)):
os.makedirs(data_path)
logdir = args.exp_name + '_' + args.env_name + '_' + time.strftime("%d-%m-%Y_%H-%M-%S")
logdir = os.path.join(data_path, logdir)
args['logdir'] = logdir
if not(os.path.exists(logdir)):
os.makedirs(logdir)
print("LOGGING TO: ", logdir)
#@title Define Q-function trainer
class Q_Trainer(object):
def __init__(self, params):
self.params = params
train_args = {
'num_agent_train_steps_per_iter': params['num_agent_train_steps_per_iter'],
'num_critic_updates_per_agent_update': params['num_critic_updates_per_agent_update'],
'train_batch_size': params['batch_size'],
'double_q': params['double_q'],
}
env_args = get_env_kwargs(params['env_name'])
for k, v in env_args.items():
params[k] = v
self.params['agent_class'] = DQNAgent
self.params['agent_params'] = params
self.params['train_batch_size'] = params['batch_size']
self.params['env_wrappers'] = env_args['env_wrappers']
self.rl_trainer = RL_Trainer(self.params)
def run_training_loop(self):
self.rl_trainer.run_training_loop(
self.params['num_timesteps'],
collect_policy = self.rl_trainer.agent.actor,
eval_policy = self.rl_trainer.agent.actor,
)
#@title run training
trainer = Q_Trainer(args)
trainer.run_training_loop()
#@markdown You can visualize your runs with tensorboard from within the notebook
## requires tensorflow==2.3.0
%load_ext tensorboard
%tensorboard --logdir /content/cs285_f2021/homework_fall2021/hw3/data/
###Output
_____no_output_____ |
DLyakhov_graph_embeddings_small_training_set.ipynb | ###Markdown
Node Classification without Graph Neural Networks by graph embeddings Setup
###Code
!pip install spektral==0.6.2 -q
import os
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import random
import gensim
import spektral
###Output
_____no_output_____
###Markdown
Graph embeddings for baseline NN model Let's encode the structural information of the graph into embeddings and feed them to the baseline model: project the term features into a latent space $R^{d}$ and mix them with the structural embeddings of the nodes. Spektral cora dataset load
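For intuition, the mixing can be as simple as projecting the 1,433-dimensional term vectors down to the embedding dimension and adding the structural vectors (a minimal sketch with random stand-ins; the real projection is learned by the Keras model later in this notebook):
###Code
import numpy as np
rng = np.random.default_rng(0)
term_vecs = rng.random((5, 1433))      # stand-in for binary term features
struct_vecs = rng.random((5, 100))     # stand-in for node2vec embeddings
W = rng.normal(size=(1433, 100)) / np.sqrt(1433)  # hypothetical projection
mixed = term_vecs @ W + struct_vecs    # additive mixing, as the model below does
print(mixed.shape)
###Output
_____no_output_____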
###Code
edges, features, labels, train_mask, val_mask, test_mask = spektral.datasets.citation.load_data(dataset_name='cora')
###Output
Loading cora dataset
Pre-processing node features
###Markdown
Verify the train/val/test mask shapes
###Code
edges.shape, features.shape, labels.shape, train_mask.shape, val_mask.shape, test_mask.shape
###Output
_____no_output_____
###Markdown
Verify sizes
###Code
sum(train_mask), sum(test_mask), sum(val_mask)
###Output
_____no_output_____
###Markdown
Compare with the Dataset from archiveThe Cora dataset consists of 2,708 scientific papers classified into one of seven classes.The citation network consists of 5,429 links. Each paper has a binary word vector of size1,433, indicating the presence of a corresponding word.Download the datasetThe dataset has two tap-separated files: `cora.cites` and `cora.content`.1. The `cora.cites` includes the citation records with two columns:`cited_paper_id` (target) and `citing_paper_id` (source).2. The `cora.content` includes the paper content records with 1,435 columns:`paper_id`, `subject`, and 1,433 binary features.Let's download the dataset.
###Code
zip_file = keras.utils.get_file(
fname="cora.tgz",
origin="https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz",
extract=True,
)
data_dir = os.path.join(os.path.dirname(zip_file), "cora")
###Output
Downloading data from https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
172032/168052 [==============================] - 0s 1us/step
180224/168052 [================================] - 0s 1us/step
###Markdown
Process and visualize the datasetThen we load the citations data into a Pandas DataFrame.
###Code
citations = pd.read_csv(
os.path.join(data_dir, "cora.cites"),
sep="\t",
header=None,
names=["target", "source"],
)
print("Citations shape:", citations.shape)
###Output
Citations shape: (5429, 2)
###Markdown
Now we display a sample of the `citations` DataFrame.The `target` column includes the paper ids cited by the paper ids in the `source` column.
###Code
citations.sample(frac=1).head()
###Output
_____no_output_____
###Markdown
Now let's load the papers data into a Pandas DataFrame.
###Code
column_names = ["paper_id"] + [f"term_{idx}" for idx in range(1433)] + ["subject"]
papers = pd.read_csv(
os.path.join(data_dir, "cora.content"), sep="\t", header=None, names=column_names,
)
print("Papers shape:", papers.shape)
###Output
Papers shape: (2708, 1435)
###Markdown
Now we display a sample of the `papers` DataFrame. The DataFrame includes the `paper_id`and the `subject` columns, as well as 1,433 binary column representing whether a term existsin the paper or not. Let's display the count of the papers in each subject.
###Code
print(papers.subject.value_counts())
###Output
Neural_Networks 818
Probabilistic_Methods 426
Genetic_Algorithms 418
Theory 351
Case_Based 298
Reinforcement_Learning 217
Rule_Learning 180
Name: subject, dtype: int64
###Markdown
We convert the paper ids and the subjects into zero-based indices.
###Code
class_values = sorted(papers["subject"].unique())
class_idx = {name: id for id, name in enumerate(class_values)}
paper_idx = {name: idx for idx, name in enumerate(sorted(papers["paper_id"].unique()))}
papers["paper_id"] = papers["paper_id"].apply(lambda name: paper_idx[name])
citations["source"] = citations["source"].apply(lambda name: paper_idx[name])
citations["target"] = citations["target"].apply(lambda name: paper_idx[name])
papers["subject"] = papers["subject"].apply(lambda value: class_idx[value])
###Output
_____no_output_____
###Markdown
Check datasets are isomorphic
###Code
citations
y_pd = papers.sort_values('paper_id')['subject'].to_numpy()
y_pd
y_sk = np.argmax(labels, axis=1)
y_sk
{x: sum((y_sk == x).astype(int)) for x in range(7)}
{x: sum((y_pd == x).astype(int)) for x in range(7)}
# Read graph
weighted_citation = citations.copy()
weighted_citation['weight'] = 1
weighted_citation
nx_graph_pd = nx.from_pandas_edgelist(weighted_citation, edge_attr=['weight'])
nx_graph_pd.nodes.__len__()
# Read graph
nx_graph = nx.convert_matrix.from_numpy_array(edges.toarray())
nx.set_node_attributes(nx_graph, 1, 'weight')
nx_graph.nodes.__len__()
import networkx.algorithms.isomorphism as iso
sum(dict(nx_graph.degree(nx_graph.nodes())).values())
sum(dict(nx_graph_pd.degree(nx_graph_pd.nodes())).values())
###Output
_____no_output_____
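###Markdown
The two degree sums above should match. A slightly stronger but still cheap check, sketched below, compares the full sorted degree sequences; a mismatch disproves isomorphism, though a match does not prove it.
###Code
# Compare the sorted degree sequences of the two graphs.
deg_a = sorted(d for _, d in nx_graph.degree())
deg_b = sorted(d for _, d in nx_graph_pd.degree())
print(deg_a == deg_b)
###Output
_____no_output_____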
###Markdown
Node2Vec
###Code
P = Q = 1
DIRECTED = True
NUM_WALKS = 20
WALK_LEN = 50
WINDOW_LEN = 5
LATENT_DIM_LEN = 100
DROPOUT_RATE = 0.01
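# Note: P and Q are the node2vec return and in-out parameters; with
# P = Q = 1 the biased walk reduces to a uniform random walk (DeepWalk-style).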
###Output
_____no_output_____
###Markdown
Implementation 0 Random walk graph implementation
###Code
class Graph():
    def __init__(self, nx_G, is_directed, p, q):
        self.G = nx_G
        self.is_directed = is_directed
        self.p = p
        self.q = q

    def node2vec_walk(self, walk_length, start_node):
        '''
        Simulate a random walk starting from start node.
        '''
        G = self.G
        alias_nodes = self.alias_nodes
        alias_edges = self.alias_edges

        walk = [start_node]

        while len(walk) < walk_length:
            cur = walk[-1]
            cur_nbrs = sorted(G.neighbors(cur))
            if len(cur_nbrs) > 0:
                if len(walk) == 1:
                    walk.append(cur_nbrs[alias_draw(alias_nodes[cur][0], alias_nodes[cur][1])])
                else:
                    prev = walk[-2]
                    next = cur_nbrs[alias_draw(alias_edges[(prev, cur)][0],
                                               alias_edges[(prev, cur)][1])]
                    walk.append(next)
            else:
                break

        return walk

    def simulate_walks(self, num_walks, walk_length):
        '''
        Repeatedly simulate random walks from each node.
        '''
        G = self.G
        walks = []
        nodes = list(G.nodes())
        print('Walk iteration:')
        for walk_iter in range(num_walks):
            print(str(walk_iter+1), '/', str(num_walks))
            random.shuffle(nodes)
            for node in nodes:
                walks.append(self.node2vec_walk(walk_length=walk_length, start_node=node))

        return walks

    def get_alias_edge(self, src, dst):
        '''
        Get the alias edge setup lists for a given edge.
        '''
        G = self.G
        p = self.p
        q = self.q

        unnormalized_probs = []
        for dst_nbr in sorted(G.neighbors(dst)):
            if dst_nbr == src:
                unnormalized_probs.append(G[dst][dst_nbr]['weight']/p)
            elif G.has_edge(dst_nbr, src):
                unnormalized_probs.append(G[dst][dst_nbr]['weight'])
            else:
                unnormalized_probs.append(G[dst][dst_nbr]['weight']/q)
        norm_const = sum(unnormalized_probs)
        normalized_probs = [float(u_prob)/norm_const for u_prob in unnormalized_probs]

        return alias_setup(normalized_probs)

    def preprocess_transition_probs(self):
        '''
        Preprocessing of transition probabilities for guiding the random walks.
        '''
        G = self.G
        is_directed = self.is_directed

        alias_nodes = {}
        for node in G.nodes():
            unnormalized_probs = [G[node][nbr]['weight'] for nbr in sorted(G.neighbors(node))]
            norm_const = sum(unnormalized_probs)
            normalized_probs = [float(u_prob)/norm_const for u_prob in unnormalized_probs]
            alias_nodes[node] = alias_setup(normalized_probs)

        alias_edges = {}
        triads = {}

        for edge in G.edges():
            if not is_directed:
                alias_edges[edge] = self.get_alias_edge(edge[0], edge[1])
            else:
                alias_edges[edge] = self.get_alias_edge(edge[0], edge[1])
                alias_edges[(edge[1], edge[0])] = self.get_alias_edge(edge[1], edge[0])

        self.alias_nodes = alias_nodes
        self.alias_edges = alias_edges

        return
def alias_setup(probs):
    '''
    Compute utility lists for non-uniform sampling from discrete distributions.
    '''
    K = len(probs)
    q = np.zeros(K)
    J = np.zeros(K, dtype=int)  # np.int was removed in NumPy 1.20+; the builtin int is equivalent here

    smaller = []
    larger = []
    for kk, prob in enumerate(probs):
        q[kk] = K*prob
        if q[kk] < 1.0:
            smaller.append(kk)
        else:
            larger.append(kk)

    while len(smaller) > 0 and len(larger) > 0:
        small = smaller.pop()
        large = larger.pop()

        J[small] = large
        q[large] = q[large] + q[small] - 1.0
        if q[large] < 1.0:
            smaller.append(large)
        else:
            larger.append(large)

    return J, q


def alias_draw(J, q):
    '''
    Draw sample from a non-uniform discrete distribution using alias sampling.
    '''
    K = len(J)

    kk = int(np.floor(np.random.rand()*K))
    if np.random.rand() < q[kk]:
        return kk
    else:
        return J[kk]
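# Quick sanity check (not in the original): alias sampling should
# reproduce the target distribution, here [0.5, 0.3, 0.2], in the long run.
_J, _q = alias_setup([0.5, 0.3, 0.2])
_counts = np.bincount([alias_draw(_J, _q) for _ in range(10000)], minlength=3)
print(_counts / 10000.0)  # expect roughly [0.5, 0.3, 0.2]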
# Read graph
nx_graph = nx.convert_matrix.from_numpy_array(edges.toarray())
nx.set_node_attributes(nx_graph, 1, 'weight')
nx_graph.nodes.__len__()
# Make random walks
G = Graph(nx_graph, DIRECTED, P, Q)
G.preprocess_transition_probs()
walks = np.array(G.simulate_walks(NUM_WALKS, WALK_LEN))
# Hack to use Word2vec as Node2vec ^_^
walks_str = [list(map(str, walk)) for walk in walks if np.any(walk)]
model = gensim.models.Word2Vec(walks_str, size=LATENT_DIM_LEN, window=WINDOW_LEN)  # gensim<4 API: in gensim 4.x `size` is `vector_size`
node2vec_emb = np.empty((len(nx_graph.nodes()), LATENT_DIM_LEN))
for node in nx_graph.nodes():
    node2vec_emb[node] = model.wv[str(node)]
###Output
_____no_output_____
###Markdown
Implementation 1
###Code
!pip install node2vec
from node2vec import Node2Vec
node2vec = Node2Vec(nx_graph, dimensions=100, walk_length=30, num_walks=20, workers=1)
# Embed nodes
model = node2vec.fit(window=10, min_count=1, batch_words=4)
node2vec_emb = np.empty((len(nx_graph.nodes()), 100))
for node in nx_graph.nodes():
    node2vec_emb[node] = model.wv[str(node)]
walks.shape
walks[1001,:]
node2vec_emb.shape
###Output
_____no_output_____
###Markdown
Let's build classifier on both node2vec embeddings and terms NN
###Code
def display_learning_curves(history):
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))

    ax1.plot(history.history["loss"])
    ax1.plot(history.history["val_loss"])
    ax1.legend(["train", "test"], loc="upper right")
    ax1.set_xlabel("Epochs")
    ax1.set_ylabel("Loss")

    ax2.plot(history.history["acc"])
    ax2.plot(history.history["val_acc"])
    ax2.legend(["train", "test"], loc="upper right")
    ax2.set_xlabel("Epochs")
    ax2.set_ylabel("Accuracy")
    plt.show()


def create_ffn(hidden_units, dropout_rate, name=None):
    fnn_layers = []
    for units in hidden_units:
        fnn_layers.append(layers.BatchNormalization())
        fnn_layers.append(layers.Dropout(dropout_rate))
        fnn_layers.append(layers.Dense(units, activation=tf.nn.gelu))

    return keras.Sequential(fnn_layers, name=name)
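# For example, create_ffn([32, 32], 0.5) stacks BatchNorm -> Dropout(0.5) -> Dense(32, gelu) twice.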
hidden_units = [32, 32]
learning_rate = 0.01
dropout_rate = 0.5
num_epochs = 300
batch_size = 256
###Output
_____no_output_____
###Markdown
Large nn model
###Code
def create_graph_emb_model(terms_count, emb_dim, num_classes, dropout_rate=0.2):
    inputs_terms = layers.Input(shape=(terms_count,), name="input_terms", )#batch_size=140)
    inputs_graph_emb = layers.Input(shape=(emb_dim,), name="input_graph_emb", )#batch_size=140)
    x = create_ffn([int(terms_count / 2), emb_dim], dropout_rate, name=f"ffn_block1")(inputs_terms)
    # Add the node2vec embeddings to the received term embedding
    x = layers.Add()([x, inputs_graph_emb])
    x = create_ffn([int(emb_dim / i) for i in range(2, 3)], dropout_rate, name=f"ffn_block4")(x)
    logits = layers.Dense(num_classes, name="logits")(x)
    # Create the model.
    return keras.Model(inputs=[inputs_terms, inputs_graph_emb], outputs=logits, name="emb_graph_model")
graph_emb_model = create_graph_emb_model(features.shape[1], LATENT_DIM_LEN, labels.shape[1], DROPOUT_RATE)
graph_emb_model.summary()
###Output
Model: "emb_graph_model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_terms (InputLayer) [(None, 1433)] 0
__________________________________________________________________________________________________
ffn_block1 (Sequential) (None, 100) 1107040 input_terms[0][0]
__________________________________________________________________________________________________
input_graph_emb (InputLayer) [(None, 100)] 0
__________________________________________________________________________________________________
add (Add) (None, 100) 0 ffn_block1[0][0]
input_graph_emb[0][0]
__________________________________________________________________________________________________
ffn_block4 (Sequential) (None, 50) 5450 add[0][0]
__________________________________________________________________________________________________
logits (Dense) (None, 7) 357 ffn_block4[0][0]
==================================================================================================
Total params: 1,112,847
Trainable params: 1,108,349
Non-trainable params: 4,498
__________________________________________________________________________________________________
###Markdown
Small nn model
###Code
def create_graph_emb_model(terms_count, emb_dim, num_classes, dropout_rate=0.2):
    inputs_terms = layers.Input(shape=(terms_count,), name="input_terms", )#batch_size=140)
    inputs_graph_emb = layers.Input(shape=(emb_dim,), name="input_graph_emb", )#batch_size=140)
    x = layers.Dense(emb_dim)(inputs_terms)
    x = layers.multiply([x, inputs_graph_emb])
    logits = layers.Dense(num_classes)(x)
    return keras.Model(inputs=[inputs_terms, inputs_graph_emb], outputs=logits, name="emb_graph_model")
graph_emb_model = create_graph_emb_model(features.shape[1],
LATENT_DIM_LEN,
labels.shape[1], DROPOUT_RATE)
graph_emb_model.summary()
# Create train and test features as a numpy array.
x_train_terms = features[train_mask].toarray()
x_train_emb = node2vec_emb[train_mask]
x_test_terms = features[test_mask].toarray()
x_test_emb = node2vec_emb[test_mask]
x_val_terms = features[val_mask].toarray()
x_val_emb = node2vec_emb[val_mask]
# Create train and test targets as a numpy array.
y_train = labels[train_mask]
y_test = labels[test_mask]
y_val = labels[val_mask]
shapes = x_train_terms.shape, x_train_emb.shape
shapes
# Check model is callable
graph_emb_model([x_train_terms, x_train_emb]).shape
def run_emb_experiment(model, x_train, y_train, x_val, y_val):
    # Compile the model.
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate),
        loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
        metrics=[keras.metrics.CategoricalAccuracy(name="acc")],
    )
    # Create an early stopping callback.
    early_stopping = keras.callbacks.EarlyStopping(
        monitor="val_acc", patience=100, restore_best_weights=True
    )
    reduce_on_plateau = tf.keras.callbacks.ReduceLROnPlateau('val_acc', patience=10, factor=0.5)
    # Fit the model.
    history = model.fit(
        x=x_train,
        y=y_train,
        validation_data=(x_val, y_val),
        epochs=300,
        batch_size=1,
        callbacks=[early_stopping, reduce_on_plateau],
    )
    return history
learning_rate = 1e-1
history = run_emb_experiment(graph_emb_model, [x_train_terms, x_train_emb], y_train, [x_val_terms, x_val_emb], y_val)
display_learning_curves(history)
_, test_accuracy = graph_emb_model.evaluate(x=[x_test_terms, x_test_emb], y=y_test, verbose=1)
print(f"Test accuracy: {round(test_accuracy * 100, 2)}%")
###Output
32/32 [==============================] - 0s 2ms/step - loss: 8.3409 - acc: 0.2700
Test accuracy: 27.0%
###Markdown
Classical ml models
###Code
from sklearn.metrics import classification_report
x_train = np.concatenate((x_train_emb, x_train_terms), axis=1)
x_test = np.concatenate((x_test_emb, x_test_terms), axis=1)
###Output
_____no_output_____
###Markdown
Random forest
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
RT = RandomForestClassifier(max_depth=10, random_state=42)
RT.fit(x_train_emb, y_train)
pred = RT.predict(x_test_emb)
report = classification_report(y_true=y_test, y_pred=pred)
print(report)
###Output
precision recall f1-score support
0 0.97 0.24 0.38 130
1 0.87 0.36 0.51 91
2 0.97 0.42 0.59 144
3 0.95 0.06 0.12 319
4 1.00 0.16 0.28 149
5 1.00 0.30 0.46 103
6 1.00 0.19 0.32 64
micro avg 0.96 0.21 0.35 1000
macro avg 0.97 0.25 0.38 1000
weighted avg 0.96 0.21 0.33 1000
samples avg 0.21 0.21 0.21 1000
###Markdown
KNN
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
neigh = KNeighborsClassifier(n_neighbors=5)
neigh.fit(x_train_emb, y_train)
pred = neigh.predict(x_test_emb)
report = classification_report(y_true=y_test, y_pred=pred)
print(report)
###Output
precision recall f1-score support
0 0.83 0.58 0.68 130
1 0.70 0.84 0.76 91
2 0.72 0.92 0.81 144
3 0.88 0.39 0.54 319
4 0.73 0.73 0.73 149
5 0.78 0.67 0.72 103
6 0.71 0.66 0.68 64
micro avg 0.76 0.63 0.69 1000
macro avg 0.76 0.68 0.70 1000
weighted avg 0.79 0.63 0.67 1000
samples avg 0.63 0.63 0.63 1000
###Markdown
XG BOOST
###Code
from sklearn.ensemble import GradientBoostingClassifier
xg = GradientBoostingClassifier(n_estimators=130, learning_rate=0.07, max_depth=2, random_state=5)
xg.fit(x_train_emb, np.argmax(y_train, axis=1))
pred = xg.predict(x_test_emb)
report = classification_report(y_true=np.argmax(y_test, axis=1), y_pred=pred)
print(report)
###Output
precision recall f1-score support
0 0.49 0.54 0.51 130
1 0.54 0.71 0.61 91
2 0.75 0.67 0.71 144
3 0.79 0.41 0.54 319
4 0.44 0.62 0.51 149
5 0.55 0.63 0.59 103
6 0.39 0.67 0.49 64
accuracy 0.56 1000
macro avg 0.56 0.61 0.57 1000
weighted avg 0.62 0.56 0.56 1000
|
code/chap05-Mine.ipynb | ###Markdown
Modeling and Simulation in PythonChapter 5Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading dataPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimatesThe arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.`head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion.`NaN` is a special value that indicates missing data. SeriesWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows.`1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.1. Compute the elementwise differences, `census - un`2. Compute the absolute differences, `abs(census - un)`3. Compute the relative differences, `abs(census - un) / un`4. Compute the percent differences, `abs(census - un) / un * 100`
###Code
census-un
abs(census-un)
abs(census-un)/un
abs(census-un)/un *100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
    results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises**Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job.Hint: 1. Copy the code from above and make a few changes. Test your code after each small change.2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.3. You might want to add a constant to the starting value to match the data better.
###Code
t_20 = 1970
p_20 = census[t_20]
annual_growth_70 = (p_end - p_20) / (t_end - t_20)  # growth re-estimated from 1970 onward
results = TimeSeries()
results[t_0] = census[t_0]
results[t_20] = p_20  # seed 1970 so the loop below has a starting value
results
for t in linrange(t_20, t_end):
    results[t+1] = results[t] + annual_growth_70
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 5Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading dataPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimatesThe arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.`head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion.`NaN` is a special value that indicates missing data. SeriesWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows.`1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.1. Compute the elementwise differences, `census - un`2. Compute the absolute differences, `abs(census - un)`3. Compute the relative differences, `abs(census - un) / un`4. Compute the percent differences, `abs(census - un) / un * 100`
###Code
census - un
abs(census - un)
abs(census - un)/un
abs(census - un) / un * 100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises **Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job. Hint:
1. Copy the code from above and make a few changes. Test your code after each small change.
2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.
3. You might want to add a constant to the starting value to match the data better.
A sketch covering hints 2 and 3 appears after the solution below.
###Code
t_0 = 1970
t_end = get_last_label(census)
elapsed_time = t_end - t_0
p_0 = census[t_0]
p_end = census[t_end]
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
print (t_0, t_end, elapsed_time, p_0, p_end, total_growth, annual_growth)
results = TimeSeries()
results[t_0] = census[t_0]
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
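###Markdown
Here is a sketch of hints 2 and 3 (an added illustration, not part of the original solution): keep the `TimeSeries` anchored in 1950 while using the growth rate estimated from 1970 onward, and shift the starting value by a small constant to improve the fit.
###Code
# Assumed variant: growth estimated from 1970-2016, but the series starts in 1950.
t_0 = get_first_label(census)        # 1950
t_end = get_last_label(census)       # 2016
annual_growth = (census[t_end] - census[1970]) / (t_end - 1970)
results = TimeSeries()
results[t_0] = census[t_0] + 0.1     # constant offset; tune by eye
for t in linrange(t_0, t_end):
    results[t+1] = results[t] + annual_growth
plot(census, ':', label='US Census')
plot(results, color='gray', label='model')
decorate(xlabel='Year', ylabel='World population (billion)')
###Output
_____no_output_____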
|
notebook/StoltzaAndMacnae1998_simple_impulse.ipynb | ###Markdown
$A(t,T) = \sum_i \frac{A_i \, e^{-t/\tau_i}}{1 + e^{-T/(2\tau_i)}}$
###Code
import numpy as np
import matplotlib.pyplot as plt

def AofT(time, T, ai, taui):
    # decaying exponential with a half-period (duty-cycle) correction, per the equation above
    return ai*np.exp(-time/taui)/(1. + np.exp(-T/(2*taui)))
from SimPEG import *
import sys
sys.path.append("./DoubleLog/")
from plotting import mapDat
class LinearSurvey(Survey.BaseSurvey):
nD = None
def __init__(self, time, **kwargs):
self.time = time
self.nD = time.size
def projectFields(self, u):
return u
class LinearProblem(Problem.BaseProblem):
surveyPair = LinearSurvey
def __init__(self, mesh, G, **kwargs):
Problem.BaseProblem.__init__(self, mesh, **kwargs)
self.G = G
def fields(self, m, u=None):
return self.G.dot(m)
def Jvec(self, m, v, u=None):
return self.G.dot(v)
def Jtvec(self, m, v, u=None):
return self.G.T.dot(v)
###Output
_____no_output_____
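###Markdown
A quick sanity check of `AofT` with made-up parameters (assumed values, not from the paper): at `t = 0` with a period much longer than the time constant, the correction term vanishes and the response should be close to `ai`.
###Code
# hypothetical parameters: unit amplitude, tau = 1 ms, period T = 1 s
AofT(time=0., T=1., ai=1., taui=1e-3)   # ~1.0
###Output
_____no_output_____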
###Markdown
Simple exponential basis $$ \mathbf{A}\mathbf{\alpha} = \mathbf{d}$$
###Code
time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10), 1e-3*np.ones(10)])
# time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10),1e-4*np.ones(10), 5e-4*np.ones(10)])
M = 41
tau = np.logspace(-4.5, -1, M)
N = time.size
A = np.zeros((N, M))
for j in range(M):
    A[:,j] = np.exp(-time/tau[j])/tau[j]
mtrue = np.zeros(M)
np.random.seed(1)
inds = np.random.randint(0, M, size=5)  # draws in [0, M), keeping the indices in bounds
mtrue[inds] = np.r_[-10, 2, 1, 4, 5]
out = np.dot(A,mtrue)
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
for i, ind in enumerate(inds):
    temp, dum, dum = mapDat(mtrue[inds][i]*np.exp(-time/tau[ind])/tau[ind], 1e-1, stretch=2)
plt.semilogx(time, temp, 'k', alpha = 0.5)
outmap, ticks, tickLabels = mapDat(out, 1e-1, stretch=2)
ax.semilogx(time, outmap, 'k', lw=2)
ax.set_yticks(ticks)
ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
# from pymatsolver import MumpsSolver
mesh = Mesh.TensorMesh([M])
prob = LinearProblem(mesh, A)
survey = LinearSurvey(time)
survey.pair(prob)
survey.makeSyntheticData(mtrue, std=0.01)
# survey.dobs = out
abs(survey.dobs).min()
reg = Regularization.BaseRegularization(mesh)
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.Wd = 1./(0.05*abs(survey.dobs)+0.05*300.)
opt = Optimization.ProjectedGNCG(maxIter=20)
# opt = Optimization.InexactGaussNewton(maxIter=20)
opt.lower = -1e-10
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
invProb.beta = 1e-4
beta = Directives.BetaSchedule()
beta.coolingFactor = 2
target = Directives.TargetMisfit()
inv = Inversion.BaseInversion(invProb, directiveList=[beta, target])
m0 = np.zeros_like(survey.mtrue)
mrec = inv.run(m0)
plt.semilogx(tau, mtrue, '.')
plt.semilogx(tau, mrec, '.')
fig = plt.figure(figsize=(6,4.5))
ax = plt.subplot(111)
obsmap, ticks, tickLabels = mapDat(survey.dobs, 1e0, stretch=2)
predmap, dum, dum = mapDat(invProb.dpred, 1e0, stretch=2)
ax.loglog(time, survey.dobs, 'k', lw=2)
ax.loglog(time, invProb.dpred, 'k.', lw=2)
# ax.set_yticks(ticks)
# ax.set_yticklabels(tickLabels)
# ax.set_ylim(ticks.min(), ticks.max())
# ax.set_ylim(ticks.min(), ticks.max())
ax.set_xlim(time.min(), time.max())
ax.grid(True)
time = np.cumsum(np.r_[0., 1e-5*np.ones(10), 5e-5*np.ones(10), 1e-4*np.ones(10), 5e-4*np.ones(10), 1e-3*np.ones(10)])
N = time.size
A = np.zeros((N, M))
for j in range(M):
A[:,j] = np.exp(-time/tau[j]) /tau[j]
mfund = mtrue.copy()
mfund[mfund<0.] = 0.
obs = np.dot(A, mtrue)
fund = np.dot(A, mfund)
pred = np.dot(A, mrec)
ip = obs-fund
ipobs = obs-pred
plt.loglog(time, obs, 'k.-', lw=2)
plt.loglog(time, -obs, 'k--', lw=2)
plt.loglog(time, fund, 'b.', lw=2)
plt.loglog(time, pred, 'b-', lw=2)
plt.loglog(time, -ip, 'r--', lw=2)
plt.loglog(time, abs(ipobs), 'r.', lw=2)
plt.ylim(abs(obs).min(), abs(obs).max())
plt.xlim(time.min(), time.max())
###Output
_____no_output_____ |
BotV1.ipynb | ###Markdown
Automated EDA Hi, this kernel is automatically generated by the [Data Geek Bot](https://github.com/Ankur3107/Kaggle-Data-Geek-Bot) - the Kaggle bot that generates an automated EDA for any dataset. Currently, the bot produces the EDA for the first data file present in the data directory. Loading Packages
###Code
library(dplyr)
library(reshape)
library(data.table)
library(formattable)
library(gridExtra)
library(ggplot2)   # used by the plotting helpers below
library(scales)    # provides the comma/percent label formatters
library(funModeling)
###Output
_____no_output_____
###Markdown
Loading Data
###Code
input_dir = '../input/'
csv_files = list.files(input_dir, recursive = T, full.names = T)
csv_files = csv_files[grep('.csv', csv_files)]
csv_files
data = read.csv(csv_files[1], stringsAsFactors = F)
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
# Function 1 : For ploting missing value
plot_missing <- function(data, title = NULL, ggtheme = theme_gray(), theme_config = list("legend.position" = c("bottom"))) {
## Declare variable first to pass R CMD check
feature <- num_missing <- pct_missing <- group <- NULL
## Check if input is data.table
is_data_table <- is.data.table(data)
## Detect input data class
data_class <- class(data)
## Set data to data.table
if (!is_data_table) data <- data.table(data)
## Extract missing value distribution
missing_value <- data.table(
"feature" = names(data),
"num_missing" = sapply(data, function(x) {sum(is.na(x))})
)
missing_value[, feature := factor(feature, levels = feature[order(-rank(num_missing))])]
missing_value[, pct_missing := num_missing / nrow(data)]
missing_value[pct_missing < 0.05, group := "Good"]
missing_value[pct_missing >= 0.05 & pct_missing < 0.4, group := "OK"]
missing_value[pct_missing >= 0.4 & pct_missing < 0.8, group := "Bad"]
missing_value[pct_missing >= 0.8, group := "Remove"][]
## Set data class back to original
if (!is_data_table) class(missing_value) <- data_class
## Create ggplot object
output <- ggplot(missing_value, aes_string(x = "feature", y = "num_missing", fill = "group")) +
geom_bar(stat = "identity") +
geom_text(aes(label = paste0(round(100 * pct_missing, 2), "%"))) +
scale_fill_manual("Group", values = c("Good" = "#1a9641", "OK" = "#a6d96a", "Bad" = "#fdae61", "Remove" = "#d7191c"), breaks = c("Good", "OK", "Bad", "Remove")) +
scale_y_continuous(labels = comma) +
coord_flip() +
xlab("Features") + ylab("Number of missing rows") +
ggtitle(title) +
ggtheme + theme_linedraw()+
do.call(theme, theme_config)
## Print plot
print(output)
## Set return object
return(invisible(missing_value))
}
# Function 2: For plotting histogram
plot_histogram <- function(data, title = NULL, ggtheme = theme_gray(), theme_config = list(), ...) {
if (!is.data.table(data)) data <- data.table(data)
## Stop if no continuous features
if (split_columns(data)$num_continuous == 0) stop("No Continuous Features")
## Get continuous features
continuous <- split_columns(data)$continuous
## Get dimension
n <- nrow(continuous)
p <- ncol(continuous)
## Calculate number of pages
pages <- ceiling(p / 16L)
for (pg in seq.int(pages)) {
## Subset data by column
subset_data <- continuous[, seq.int(16L * pg - 15L, min(p, 16L * pg)), with = FALSE]
setnames(subset_data, make.names(names(subset_data)))
n_col <- ifelse(ncol(subset_data) %% 4L, ncol(subset_data) %/% 4L + 1L, ncol(subset_data) %/% 4L)
## Create ggplot object
plot <- lapply(
seq_along(subset_data),
function(j) {
x <- na.omit(subset_data[, j, with = FALSE])
ggplot(x, aes_string(x = names(x))) +
geom_histogram(bins = 30L, ...,fill='#92b7ef') +
scale_x_continuous(labels = comma) +
scale_y_continuous(labels = comma) +
ylab("Frequency") +
ggtheme + theme_linedraw()+
do.call(theme, theme_config)
}
)
## Print plot object
if (pages > 1) {
suppressWarnings(do.call(grid.arrange, c(plot, ncol = n_col, nrow = 4L, top = title, bottom = paste("Page", pg))))
} else {
suppressWarnings(do.call(grid.arrange, c(plot, top = title)))
}
}
}
# Function 3 : Getting missing values
.getAllMissing <- function(dt) {
if (!is.data.table(dt)) dt <- data.table(dt)
sapply(dt, function(x) {
sum(is.na(x)) == length(x)
})
}
# Function 4 : Spliting columns
split_columns <- function(data) {
## Check if input is data.table
is_data_table <- is.data.table(data)
## Detect input data class
data_class <- class(data)
## Set data to data.table
if (!is_data_table) data <- data.table(data)
## Find indicies for continuous features
all_missing_ind <- .getAllMissing(data)
ind <- sapply(data[, which(!all_missing_ind), with = FALSE], is.numeric)
## Count number of discrete, continuous and all-missing features
n_all_missing <- sum(all_missing_ind)
n_continuous <- sum(ind)
n_discrete <- ncol(data) - n_continuous - n_all_missing
## Create object for continuous features
continuous <- data[, which(ind), with = FALSE]
## Create object for discrete features
discrete <- data[, which(!ind), with = FALSE]
## Set data class back to original
if (!is_data_table) class(discrete) <- class(continuous) <- data_class
## Set return object
return(
list(
"discrete" = discrete,
"continuous" = continuous,
"num_discrete" = n_discrete,
"num_continuous" = n_continuous,
"num_all_missing" = n_all_missing
)
)
}
# Function 5 : plotting density plot for numerical variable
plot_density <- function(data, title = NULL, ggtheme = theme_gray(), theme_config = list(), ...) {
if (!is.data.table(data)) data <- data.table(data)
## Stop if no continuous features
if (split_columns(data)$num_continuous == 0) stop("No Continuous Features")
## Get continuous features
continuous <- split_columns(data)$continuous
## Get dimension
n <- nrow(continuous)
p <- ncol(continuous)
## Calculate number of pages
pages <- ceiling(p / 16L)
for (pg in seq.int(pages)) {
## Subset data by column
subset_data <- continuous[, seq.int(16L * pg - 15L, min(p, 16L * pg)), with = FALSE]
setnames(subset_data, make.names(names(subset_data)))
n_col <- ifelse(ncol(subset_data) %% 4L, ncol(subset_data) %/% 4L + 1L, ncol(subset_data) %/% 4L)
## Create ggplot object
plot <- lapply(
seq_along(subset_data),
function(j) {
x <- na.omit(subset_data[, j, with = FALSE])
ggplot(x, aes_string(x = names(x))) +
geom_density(...,fill="#e2c5e5") +
scale_x_continuous(labels = comma) +
scale_y_continuous(labels = percent) +
ylab("Density") +
ggtheme + theme_linedraw()+
do.call(theme, theme_config)
}
)
## Print plot object
if (pages > 1) {
suppressWarnings(do.call(grid.arrange, c(plot, ncol = n_col, nrow = 4L, top = title, bottom = paste("Page", pg))))
} else {
suppressWarnings(do.call(grid.arrange, c(plot, top = title)))
}
}
}
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis Structure & Dimension
###Code
print(paste("The Dataset have",dim(data)[1],"data points and ", dim(data)[2], "Features."))
print("Let's see the head of data.")
print(head(data))
###Output
_____no_output_____
###Markdown
Profiling Data Input Probably one of the first steps, when we get a new dataset to analyze, is to know if there are missing values (NA in R) and the data type. The **df_status** function coming in funModeling can help us by showing these numbers in relative and percentage values. It also retrieves the infinite and zeros statistics.
* **q_zeros:** quantity of zeros (p_zeros: in percent)
* **q_inf:** quantity of infinite values (p_inf: in percent)
* **q_na:** quantity of NA (p_na: in percent)
* **type:** factor or numeric
* **unique:** quantity of unique values
###Code
df_status(data)
###Output
_____no_output_____
###Markdown
Missing Value Analysis
###Code
plot_missing(data)
###Output
_____no_output_____
###Markdown
Categorical Feature Analysis
###Code
getDataFrameWith50Categories <- function(df){
factorDF <- mutate_all(df, function(x) as.factor(x))
features <- names(factorDF)
for(feature in features){
if(length(levels(factorDF[,feature]))>50){
factorDF[feature] <- NULL
}
}
factorDF
}
categoricalData <- getDataFrameWith50Categories(data)
###Output
_____no_output_____
###Markdown
Describe Categorical Features
###Code
describe(categoricalData)
###Output
_____no_output_____
###Markdown
Categorical Features Plotting
###Code
freq(categoricalData)
###Output
_____no_output_____
###Markdown
Numerical Feature Analysis
###Code
getNumericalDF <- function(df){
numericDF <- df
features <- names(numericDF)
for(feature in features){
if(!is.numeric(df[,feature])){
numericDF[feature] <- NULL
}
}
numericDF
}
numericalData <- getNumericalDF(data) # which are num/int data type
numericalDataFeature <- names(numericalData)
categoricalDataFeature <- names(categoricalData)
numericalData <- select(numericalData, -one_of(categoricalDataFeature)) # select that feature which are not categoricalDataFeature
###Output
_____no_output_____
###Markdown
* **variable:** variable name
* **mean:** the well-known mean or average
* **std_dev:** standard deviation, a measure of dispersion or spread around the mean value. A value around 0 means almost no variation (thus, it seems more like a constant); on the other side, it is harder to set what high is, but we can tell that the higher the variation the greater the spread. Chaos may look like infinite standard variation. The unit is the same as the mean so that it can be compared.
* **variation_coef:** variation coefficient = std_dev/mean. Because the std_dev is an absolute number, it's good to have an indicator that puts it in a relative number, comparing the std_dev against the mean. A value of 0.22 indicates the std_dev is 22% of the mean. If it were close to 0 then the variable tends to be more centered around the mean. If we compare two classifiers, then we may prefer the one with less std_dev and variation_coef on its accuracy.
* **p_01, p_05, p_25, p_50, p_75, p_95, p_99:** percentiles at 1%, 5%, 25%, and so on. Later on in this chapter is a complete review about percentiles.
###Code
profiling_num(numericalData)
###Output
_____no_output_____
###Markdown
Numerical Feature Plotting (Histogram)
###Code
plot_histogram(numericalData)
###Output
_____no_output_____
###Markdown
Numerical Feature Plotting (Density)
###Code
plot_density(numericalData)
###Output
_____no_output_____ |
notebook/2017-11-22_make_dmel_atlas_sampletable.ipynb | ###Markdown
I need to create a dmel atlas sample table for running the RNA-seq pipeline. This atlas is data from Haiwang's project, but I have downloaded some of it as part of the SRA.
###Code
# %load ../start.py
# Imports
import os
import sys
from tempfile import TemporaryDirectory
import re
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# Project level imports
sys.path.insert(0, '../../lib')
from larval_gonad.notebook import Nb
# Setup notebook
nbconfig = Nb.setup_notebook()
# Turn on cache
from joblib import Memory
memory = Memory(cachedir=nbconfig.cache, verbose=0)
import GEOparse
import Bio.Entrez as Entrez
Entrez.email = nbconfig.email
def get_srr(x):
res = Entrez.efetch('sra', id=x).read()
    return list(set(re.findall(r'SRR\d+', res)))
nbconfig
ATLAS = 'GSE99574'
# Download GEO entry
tmp = TemporaryDirectory()
gse = GEOparse.get_GEO(ATLAS, destdir=tmp.name, silent=True)
data = []
for gsm, dat in gse.gsms.items():
record = {}
if dat.metadata['organism_ch1'][0] == 'Drosophila melanogaster':
record['samplename'] = dat.metadata['title'][0]
if 'leftover' in record['samplename']:
continue
if 'ercc' in record['samplename']:
continue
for attr in dat.metadata['characteristics_ch1']:
if 'strain' in attr:
record['strain'] = re.match(r'.*(w1118|Oregon-R).*', attr).groups()[0]
elif 'Sex' in attr:
record['sex'] = re.match(r'.*(Male|Female).*', attr).groups()[0].lower()
elif 'replicate' in attr:
record['replicate'] = re.match(r'.*(\d+).*', attr).groups()[0]
elif 'tissue' in attr:
record['tissue'] = re.match(r'tissue: (.*)',
attr).groups()[0].replace(' ', '_')
elif 'plate and well id' in attr:
                groups = re.match(r'.*Plate(\d)_([A-Z])(\d)', attr).groups()
record['plate'] = groups[0]
record['row'] = groups[1]
record['col'] = groups[2]
for rel in dat.metadata['relation']:
if 'SRA' in rel:
                record['srx'] = re.match(r'.*(SRX\d+).*', rel).groups()[0]
record['srr'] = get_srr(record['srx'])
data.append(record)
header = [
'samplename',
'srx',
'srr',
'sex',
'strain',
'tissue',
'replicate',
'plate',
'row',
'col'
]
df = pd.DataFrame(data)[header]
# rename gonad to ovary or testes
df.loc[(df.sex == 'female') & (df.tissue == 'gonad'), 'tissue'] = 'ovary'
df.loc[(df.sex == 'male') & (df.tissue == 'gonad'), 'tissue'] = 'testes'
# unwind SRR
rows = []
for _, row in df.iterrows():
for srr in set(row.srr):
curr = row.copy()
curr['srr'] = srr
curr['samplename'] = curr['samplename'] + '_' + srr
rows.append(curr)
sampletable = pd.concat(rows, axis=1).T
# Add group for lcdb-wf
sampletable['group'] = sampletable['sex'] + '_' + sampletable['tissue']
sampletable.to_csv('../../dmel-atlas-wf/config/sampletable.tsv', sep='\t', index=False)
###Output
_____no_output_____ |
notebooks/ingest.ipynb | ###Markdown
Ingest In this notebook, we read a wiki text snippet and save the data to an S3 bucket.
###Code
import os
from pathlib import Path
import warnings
from dotenv import load_dotenv, find_dotenv
import boto3
# from datasets import load_dataset
# from ipywidgets import FloatProgress
warnings.filterwarnings("ignore")
load_dotenv(find_dotenv())
## Create a .env file on your local with the correct configs
s3_endpoint_url = os.getenv("S3_ENDPOINT")
s3_access_key = os.getenv("S3_ACCESS_KEY")
s3_secret_key = os.getenv("S3_SECRET_KEY")
s3_bucket = os.getenv("S3_BUCKET")
# Create an S3 client
s3 = boto3.client(
service_name="s3",
aws_access_key_id=s3_access_key,
aws_secret_access_key=s3_secret_key,
endpoint_url=s3_endpoint_url,
)
text = 'The music was composed by Hitoshi Sakimoto , who had also worked on the previous Valkyria Chronicles games . When he originally heard about the project , he thought it would be a light tone similar to other Valkyria Chronicles games , but found the themes much darker than expected . An early theme he designed around his original vision of the project was rejected . He <unk> the main theme about seven times through the music production due to this need to <unk> the game . The main theme was initially recorded using orchestra , then Sakimoto removed elements such as the guitar and bass , then adjusted the theme using a synthesizer before <unk> segments such as the guitar piece on their own before incorporating them into the theme . The rejected main theme was used as a hopeful tune that played during the game \'s ending . The battle themes were designed around the concept of a " modern battle " divorced from a fantasy scenario by using modern musical instruments , constructed to create a sense of <unk> . While Sakimoto was most used to working with synthesized music , he felt that he needed to incorporate live instruments such as orchestra and guitar . The guitar was played by <unk> <unk> , who also arranged several of the later tracks . The game \'s opening theme song '
text
###Output
_____no_output_____
###Markdown
Upload to S3
###Code
destination_path = Path('data/raw')
if not os.path.exists(destination_path):
destination_path.mkdir(parents=True, exist_ok=True)
file_path = destination_path.joinpath('wiki.txt')
with open(file_path, 'w') as file:
    file.write(text)  # `text` is a single string, so write it directly
key = 'op1-pipelines/wiki.txt'
s3.upload_file(Bucket=s3_bucket, Key=key, Filename=str(file_path))
###Output
_____no_output_____
###Markdown
This notebook can be used to generate CSV files containing patient clinical data, and image metadata for each patient and image file within the NCCID data. To use these tools you need to provide a `BASE_PATH` that points to the location of the data that has been pulled from the NCCID S3 bucket, where your local directory structure should match the original S3 structure. If you have split the data into training/test/validation sets, each subdirectory should have the same structure as the original S3 bucket and the below pipeline should be run separately for each of the dataset splits. You can set the local path to your NCCID data below by changing the `DEFAULT_PATH` variable or, alternatively, by setting the environment variable `NCCID_DATA_DIR` in e.g. `.bashrc`.
###Code
# Edit this to update your local NCCID data path
DEFAULT_PATH = "/project/data/training"
BASE_PATH = Path(os.getenv("NCCID_DATA_DIR", DEFAULT_PATH))
print(BASE_PATH)
###Output
_____no_output_____
###Markdown
Imaging Metadata For the imaging metadata, a separate CSV is generated for each imaging modality: X-ray, CT, MRI. Three steps are performed:
`select_image_files` - traverses the directory tree finding all files of the imaging modality. For X-ray it is recommended to set `select_all = True` to process all available X-ray files. Whereas, for the 3D modalities, CT and MRI, `select_first = True` is recommended to select only the first file of each imaging volume, to speed up run time and reduce redundancy of information.
`ingest_dicom_jsons` - reads the DICOM header information for each file.
`pydicom_to_df` - converts the DICOM metadata into a pandas DataFrame where the rows are images and columns are the DICOM attributes.
The resulting DataFrames are saved as CSV files in `data/`
###Code
# subdirectories
XRAY_SUBDIR = "xray-metadata"
CT_SUBDIR = "ct-metadata"
MRI_SUBDIR = "mri-metadata"
# 1. finding image file lists within the subdirs
xray_files = etl.select_image_files(BASE_PATH / XRAY_SUBDIR, select_all=True)
ct_files = etl.select_image_files(BASE_PATH / CT_SUBDIR, select_first=True)
mri_files = etl.select_image_files(BASE_PATH / MRI_SUBDIR, select_first=True)
# 2. process image metadata
xray_datasets = etl.ingest_dicom_jsons(xray_files)
ct_datasets = etl.ingest_dicom_jsons(ct_files)
mri_datasets = etl.ingest_dicom_jsons(mri_files)
# 3. converting to DataFrame
xrays = etl.pydicom_to_df(xray_datasets)
cts = etl.pydicom_to_df(ct_datasets)
mris = etl.pydicom_to_df(mri_datasets)
# check structure of DFs
xrays.head()
# Save as csv
xrays.to_csv("data/xrays.csv")
cts.to_csv("data/cts.csv")
mris.to_csv("data/mris.csv")
###Output
_____no_output_____
###Markdown
Patient Clinical Data For patient clinical data, the most recent data file (for COVID-positive) or status file (for COVID-negative) is parsed for each patient in the directory tree. The resulting DataFrame is generated using `patient_jsons_to_df`, where rows are patients and columns are data fields. Three fields that are not in the original jsons files are included in the DataFrame: `filename_earliest_date` - earlist data/status file present for the patient. `filename_latest_date` - latest data/status file present for the patient. This is the file from which the rest of the patient's data has been pulled. `filename_covid_status` - indicates it the patient is in the COVID-postive or COVID-negative cohort, based on whether they have every been submitted with a data file (which are only present for positive patients.
###Code
PATIENT_SUBDIR = "data"
# process patient clinical data
patient_files = list(os.walk(BASE_PATH / PATIENT_SUBDIR))
patients = etl.patient_jsons_to_df(patient_files)
patients.head()
###Output
_____no_output_____
###Markdown
Clean and enrich The cleaning pipeline can be run on the resulting patients DataFrame to improve quality. In addition, missing values in the patient DataFrame for Sex and Age, can be filled using the DICOM image headers. This step generates two new columns `sex_update` and `age_update`, from the cleaned columns `sex`, `age`.
###Code
# cleaning
patients = clean_data_df(patients, patient_df_pipeline)
# enriching
images = [xrays, cts, mris] # list all image DFs
patients = etl.patient_data_dicom_update(patients, images)
patients.head()
print(f"Sex Unknowns before merging with dicom: {(patients['sex']=='Unknown').sum()}")
print(f"Sex Unknowns after merging with dicom: {(patients['sex_update']=='Unknown').sum()}")
print("------")
print(f"Age NaNs before merging with dicom: {patients['age'].isnull().sum()}")
print(f"Age New after merging with dicom: {patients['age_update'].isnull().sum()}")
# save to csv
patients.to_csv("data/patients.csv")
###Output
_____no_output_____
###Markdown
IngestIn this notebook, we read a wiki text snippet and save the data to an S3 bucket
###Code
import os
import warnings
from dotenv import load_dotenv, find_dotenv
import boto3
from datasets import load_dataset
warnings.filterwarnings("ignore")
load_dotenv(find_dotenv())
## Create a .env file on your local with the correct configs
s3_endpoint_url = os.getenv("S3_ENDPOINT")
s3_access_key = os.getenv("S3_ACCESS_KEY")
s3_secret_key = os.getenv("S3_SECRET_KEY")
s3_bucket = os.getenv("S3_BUCKET")
# Create an S3 client
s3 = boto3.client(
service_name="s3",
aws_access_key_id=s3_access_key,
aws_secret_access_key=s3_secret_key,
endpoint_url=s3_endpoint_url,
)
# ! jupyter labextension install @jupyter-widgets/jupyterlab-manager
dataset = load_dataset('wikitext', 'wikitext-2-v1')
dataset
text = dataset['train']['text'][0:50]
text = list(filter(None, text))
text
###Output
_____no_output_____
###Markdown
Upload to S3
###Code
file_path = '../data/raw/wiki.txt'
with open(file_path, 'w') as file:
for item in text:
file.write('%s\n' % item)
key = 'op1-pipelines/wiki.txt'
s3.upload_file(Bucket=s3_bucket, Key=key, Filename=str(file_path))
###Output
_____no_output_____
###Markdown
Ingesting live data from ccxt Getting lost in the bcolz process, so... settling for: https://github.com/enigmampc/catalyst/issues/65
Pulled `__main__.py` out of the project to remove exchange limitations.
Pulled `exchange_bundle.py` out of the project to download data from the ccxt API and create a CSV file before ingesting.
Also, it currently wipes out all old daily or minute data when the same frequency is run again.
Have to build a normalizer for the ohlcv data returned from ccxt; without a start and end, it is random.
Using a command like:
`kryptobot ingest-exchange -x binance -f daily -i eth_usdt,btc_usdt --csv create`
`kryptobot ingest-exchange -x hitbtc -f minute -i smart_btc --csv create`
`kryptobot ingest-exchange -x cryptopia -f minute -i etn_btc --csv create`
###Code
from kryptobot.catalyst_extensions.exchange.exchange_bundle import ExchangeBundle
from catalyst.exchange.bundle_utils import range_in_bundle
exchange_name = 'binance'
data_frequency = 'daily'
include_symbols = 'ltc_btc'
exchange_bundle = ExchangeBundle(exchange_name)
ingest = exchange_bundle.ingest(
data_frequency=data_frequency,
include_symbols=include_symbols,
# exclude_symbols=params['exclude_symbols'],
# start=start,
# end=end,
show_progress=True,
# show_breakdown=params['show_breakdown'],
# show_report=params['show_report'],
# csv=params['csv']
)
print(ingest)
# catalyst ingest -b binance -c
import ccxt
import json
import pandas as pd
symbol = 'ltc_btc'
ccxt_symbol = symbol.replace('_', '/').upper()
exchange = ccxt.binance()
ohlcv = json.dumps(exchange.fetch_ohlcv(ccxt_symbol, '1d'))
# print(ohlcv)
raw = pd.read_json(ohlcv)
raw.columns = ['date','open','high','low','close','volume']
raw['date'] = pd.to_datetime(raw['date'], unit='ms')
raw.set_index('date', inplace=True)
scale = 1
raw.loc[:, 'open'] /= scale
raw.loc[:, 'high'] /= scale
raw.loc[:, 'low'] /= scale
raw.loc[:, 'close'] /= scale
raw.loc[:, 'volume'] *= scale
raw.reset_index(level=0, inplace=True)
raw.insert(0, 'symbol', symbol)
raw.tail()
import bcolz
from catalyst.data.us_equity_pricing import BcolzDailyBarReader
# pricing_path = '/root/.catalyst/data/exchanges/poloniex/minute_bundle/00/33/003314.bcolz/'
# pricing_path = 'root/.catalyst/data/exchanges/bitfinex/temp_bundles/bitfinex-daily-avt_btc-2017/89/18/891835.bcolz/'
pricing_path = '/root/.catalyst/data/exchanges/bitfinex/daily_bundle/00/89/008966.bcolz/'
ct = bcolz.ctable(rootdir=pricing_path)
df = ct[["open", "high", "low", "close", "volume"]].todataframe()
df.tail()
# cls(BcolzDailyBarReader(pricing_path))
###Output
_____no_output_____ |
RandomForest-RFECV-BreakStatus-RoW-RandomizedSearch.ipynb | ###Markdown
In this notebook the following steps are taken:
1. Remove highly correlated attributes
2. Find the best hyperparameters for the estimator
3. Find the most important features with the tuned random forest
4. Find the f1 score of the tuned full model
5. Find the best hyperparameters of the model with selected features
6. Find the f1 score of the tuned selected model
7. Compare the two f1 scores
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.feature_selection import RFECV,RFE
from sklearn.model_selection import train_test_split, GridSearchCV, KFold,RandomizedSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score,f1_score
from sklearn.metrics import make_scorer
# wrap the metric in a scorer; use a distinct name so f1_score itself is not shadowed
f1_scorer = make_scorer(f1_score)
#import data
Data=pd.read_csv("RoW-Transfomed-Data-BS-NoBreak - Copy.csv")
X = Data.iloc[:,:-1]
y = Data.iloc[:,-1]
#split test and training set.
np.random.seed(60)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3,
random_state = 1000)
#Define estimator and model
classifiers = {}
classifiers.update({"Random Forest": RandomForestClassifier(random_state=1000)})
#Define range of hyperparameters for estimator
np.random.seed(60)
parameters = {}
parameters.update({"Random Forest": { "classifier__n_estimators": [100,105,110,115,120,125,130,135,140,145,150,155,160,170,180,190,200],
# "classifier__n_estimators": [2,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200],
#"classifier__class_weight": [None, "balanced"],
"classifier__max_features": ["auto", "sqrt", "log2"],
"classifier__max_depth" : [4,6,8,10,11,12,13,14,15,16,17,18,19,20,22],
#"classifier__max_depth" : [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
"classifier__criterion" :["gini", "entropy"]
}})
# Make correlation matrix
corr_matrix = X_train.corr(method = "spearman").abs()
# Draw the heatmap
sns.set(font_scale = 1.0)
f, ax = plt.subplots(figsize=(11, 9))
sns.heatmap(corr_matrix, cmap= "YlGnBu", square=True, ax = ax)
f.tight_layout()
plt.savefig("correlation_matrix.png", dpi = 1080)
# Select upper triangle of matrix
upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k = 1).astype(bool))
# Find index of feature columns with correlation greater than 0.8
to_drop = [column for column in upper.columns if any(upper[column] > 0.8)]
# Drop features
X_train = X_train.drop(to_drop, axis = 1)
X_test = X_test.drop(to_drop, axis = 1)
X_train
FEATURE_IMPORTANCE = {"Random Forest"}
selected_classifier = "Random Forest"
classifier = classifiers[selected_classifier]
scaler = StandardScaler()
steps = [("scaler", scaler), ("classifier", classifier)]
pipeline = Pipeline(steps = steps)
#Define parameters that we want to use in gridsearch cv
param_grid = parameters[selected_classifier]
# Initialize GridSearch object for estimator
gscv = RandomizedSearchCV(pipeline, param_grid, cv = 3, n_jobs= -1, verbose = 1, scoring = f1_scorer, n_iter=30)
# Fit gscv (Tunes estimator)
print(f"Now tuning {selected_classifier}. Go grab a beer or something.")
gscv.fit(X_train, np.ravel(y_train))
#Getting the best hyperparameters
best_params = gscv.best_params_
best_params
#Getting the best score of model
best_score = gscv.best_score_
best_score
#Check overfitting of the estimator
from sklearn.model_selection import cross_val_score
mod = RandomForestClassifier(#class_weight= None,
criterion= 'entropy',
max_depth= 18,
max_features= 'sqrt',
n_estimators= 105 ,random_state=10000)
scores_test = cross_val_score(mod, X_test, y_test, scoring='f1', cv=5)
scores_test
tuned_params = {item[12:]: best_params[item] for item in best_params}
classifier.set_params(**tuned_params)
#Find f1 score of the model with all features (Model is tuned for all features)
results={}
model=classifier.set_params(criterion= 'entropy',
max_depth= 18,
max_features= 'sqrt',
n_estimators= 105 ,random_state=10000)
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
F1 = metrics.f1_score(y_test, y_pred)
results = {"classifier": model,
"Best Parameters": best_params,
"Training f1": best_score*100,
"Test f1": F1*100}
results
# Select Features using RFECV
class PipelineRFE(Pipeline):
# Source: https://ramhiser.com/post/2018-03-25-feature-selection-with-scikit-learn-pipeline/
def fit(self, X, y=None, **fit_params):
super(PipelineRFE, self).fit(X, y, **fit_params)
self.feature_importances_ = self.steps[-1][-1].feature_importances_
return self
steps = [("scaler", scaler), ("classifier", classifier)]
pipe = PipelineRFE(steps = steps)
np.random.seed(60)
# Initialize RFECV object
feature_selector = RFECV(pipe, cv = 5, step = 1, verbose = 1)
# Fit RFECV
feature_selector.fit(X_train, np.ravel(y_train))
# Get selected features
feature_names = X_train.columns
selected_features = feature_names[feature_selector.support_].tolist()
performance_curve = {"Number of Features": list(range(1, len(feature_names) + 1)),
"F1": feature_selector.grid_scores_}
performance_curve = pd.DataFrame(performance_curve)
# Performance vs Number of Features
# Set graph style
sns.set(font_scale = 1.75)
sns.set_style({"axes.facecolor": "1.0", "axes.edgecolor": "0.85", "grid.color": "0.85",
"grid.linestyle": "-", 'axes.labelcolor': '0.4', "xtick.color": "0.4",
'ytick.color': '0.4'})
colors = sns.color_palette("RdYlGn", 20)
line_color = colors[3]
marker_colors = colors[-1]
# Plot
f, ax = plt.subplots(figsize=(13, 6.5))
sns.lineplot(x = "Number of Features", y = "F1", data = performance_curve,
color = line_color, lw = 4, ax = ax)
sns.regplot(x = performance_curve["Number of Features"], y = performance_curve["F1"],
color = marker_colors, fit_reg = False, scatter_kws = {"s": 200}, ax = ax)
# Axes limits
plt.xlim(0.5, len(feature_names)+0.5)
plt.ylim(0.60, 1)
# Generate a bolded horizontal reference line at y = 0.625
ax.axhline(y = 0.625, color = 'black', linewidth = 1.3, alpha = .7)
# Turn frame off
ax.set_frame_on(False)
# Tight layout
plt.tight_layout()
#Define new training and test set based based on selected features by RFECV
X_train_rfecv = X_train[selected_features]
X_test_rfecv= X_test[selected_features]
np.random.seed(60)
classifier.fit(X_train_rfecv, np.ravel(y_train))
#Finding important features
np.random.seed(60)
feature_importance = pd.DataFrame(selected_features, columns = ["Feature Label"])
feature_importance["Feature Importance"] = classifier.feature_importances_
feature_importance = feature_importance.sort_values(by="Feature Importance", ascending=False)
feature_importance
# Initialize GridSearch object for model with selected features
np.random.seed(60)
gscv = RandomizedSearchCV(pipeline, param_grid, cv = 3, n_jobs= -1, verbose = 1, scoring = f1_scorer, n_iter=30)
#Tuning random forest classifier with selected features
np.random.seed(60)
gscv.fit(X_train_rfecv,y_train)
#Getting the best parameters of model with selected features
best_params = gscv.best_params_
best_params
#Getting the score of model with selected features
best_score = gscv.best_score_
best_score
#Check overfitting of the tuned model with selected features
from sklearn.model_selection import cross_val_score
mod = RandomForestClassifier(#class_weight= None,
criterion= 'entropy',
max_depth= 18,
max_features= 'sqrt',
n_estimators= 105 ,random_state=10000)
scores_test = cross_val_score(mod, X_test_rfecv, y_test, scoring='f1', cv=5)
scores_test
results={}
model=classifier.set_params(criterion= 'entropy',
max_depth= 18,
max_features= 'sqrt',
n_estimators= 105 ,random_state=10000)
scores_test = cross_val_score(mod, X_test_rfecv, y_test, scoring='f1', cv=5)
model.fit(X_train_rfecv,y_train)
y_pred = model.predict(X_test_rfecv)
F1 = metrics.f1_score(y_test, y_pred)
results = {"classifier": model,
"Best Parameters": best_params,
"Training f1": best_score*100,
"Test f1": F1*100}
results
###Output
_____no_output_____ |
notebooks/kl-divergence-mse.ipynb | ###Markdown
Introduction This question came up during a Journal Club meeting, in which we were discussing the difference between KL divergence and MSE as a deep learning metric. The paper of interest is [here](https://www.biorxiv.org/content/early/2016/10/17/081380); in it, the loss function is the KL divergence between transcription start site sequencing (TSS-Seq) data and its predictions. In order to probe this further, I decided to run some simulations. Here are the results.
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import entropy
from sklearn.metrics import mean_squared_error as mse
from matplotlib.gridspec import GridSpec
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
Data Let's simulate what the TSS-Seq data will look like. It is essentially a vector of numbers. Real TSS-Seq data will have a peak centered on a certain place; in this case, I'll just draw random integers.
###Code
tss_data = np.random.randint(low=0, high=50, size=(500,))
plt.plot(tss_data)
###Output
_____no_output_____
###Markdown
Loss Functions We can use MSE as a loss function. MSE between a dataset and itself should be zero.
###Code
def mse(a, b, axis=0):
    """
    Compute MSE per axis.

    Note: this intentionally replaces the sklearn `mean_squared_error`
    imported above under the same name.
    """
    diff_sq = (a - b)**2
    return diff_sq.mean(axis=axis)
def sigmoid(x):
return 1 / (1 + np.exp(-x))
mse(sigmoid(tss_data), sigmoid(tss_data))
###Output
_____no_output_____
###Markdown
[KL-divergence](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.entropy.html) between a dataset and itself should also be zero.
###Code
entropy(sigmoid(tss_data), sigmoid(tss_data))
###Output
_____no_output_____
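###Markdown
A side note: `scipy.stats.entropy` normalizes its inputs to sum to 1, so the explicit division by the totals used later in this notebook is for clarity rather than correctness. A quick check with made-up vectors:
###Code
# entropy() rescales pk and qk internally, so these two calls agree
p = np.array([1., 2., 3.])
q = np.array([2., 2., 2.])
entropy(p, q), entropy(p / p.sum(), q / q.sum())
###Output
_____no_output_____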
###Markdown
Comparing two random vectors.
###Code
from itertools import product
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes
def compare_kl_mse(ax, a_lam=0, b_lam=0):
mses = []
entropys = []
for i in range(1000):
a_draws = np.random.poisson(a_lam, size=10000)
a = np.histogram(a_draws)[0]
b_draws = np.random.poisson(b_lam, size=10000)
b = np.histogram(b_draws)[0]
mses.append(mse(a, b))
entropys.append(entropy(a/a.sum(), b/b.sum()))
ax.scatter(mses, entropys, alpha=0.5)
ax.set_xlabel('mse')
ax.set_ylabel('kl-divergence')
ax.set_title(f'a_lam:{a_lam}, b_lam:{b_lam}')
# Inset histogram of distribution
ax_ins = inset_axes(ax, width="30%", height="30%", loc=4)
    ax_ins.hist(a_draws, density=True, bins=np.arange(min(a_draws), max(a_draws)), color='blue', alpha=0.3)
    ax_ins.hist(b_draws, density=True, bins=np.arange(min(b_draws), max(b_draws)), color='orange', alpha=0.3)
ax_ins.patch.set_alpha(0)
despine(ax_ins)
remove_ticks(ax_ins)
return ax
def remove_ticks(ax):
"""
Remove all ticks and tick labels from an axes.
"""
ax.set_xticks([])
ax.set_xticklabels([])
ax.set_yticks([])
ax.set_yticklabels([])
def despine(ax):
"""
Remove all spines from an axes.
"""
for spine in ['top', 'right', 'bottom', 'left']:
ax.spines[spine].set_visible(False)
def format_ax(ax, i, j, nrows, ncols):
"""
Formats the axes object to be nice-looking.
"""
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
return ax
nrows = 4
ncols = 4
fig, axes = plt.subplots(nrows, ncols, figsize=(15,15), sharex=True, sharey=True)
for i, j in product(range(1, nrows+1), range(1, ncols+1)):
ax = axes[i-1, j-1]
ax = compare_kl_mse(ax, a_lam=i, b_lam=j)
ax = format_ax(ax, i-1, j-1, nrows, ncols)
plt.tight_layout()
a_lam = 3
b_lam = 4
mses = []
entropys = []
for i in range(1000):
a_draws = np.random.poisson(a_lam, size=10000)
a = np.histogram(a_draws)[0]
b_draws = np.random.poisson(b_lam, size=10000)
b = np.histogram(b_draws)[0]
mses.append(mse(a, b))
entropys.append(entropy(a/a.sum(), b/b.sum()))
np.arange(1, 10)
###Output
_____no_output_____ |
land_cover_NDVI/Land_Cover-Classification.ipynb | ###Markdown
Data Description Data Set Information: This dataset was derived from geospatial data from two sources: 1) Landsat time-series satellite imagery from the years 2014-2015, and 2) crowdsourced georeferenced polygons with land cover labels obtained from OpenStreetMap. The crowdsourced polygons cover only a small part of the image area, and are used to extract training data from the image for classifying the rest of the image. The main challenge with the dataset is that both the imagery and the crowdsourced data contain noise (due to cloud cover in the images and inaccurate labeling/digitizing of polygons). Files in the zip folder:
- The 'training.csv' file contains the training data for classification. Do not use this file to evaluate classification accuracy because it contains noise (many class labeling errors).
- The 'testing.csv' file contains testing data to evaluate the classification accuracy. This file does not contain any class labeling errors.
Attribute Information:
class: the land cover class (impervious, farm, forest, grass, orchard, water) __[note: this is the target variable to classify]__.
max_ndvi: the maximum NDVI (normalized difference vegetation index) value derived from the time-series of satellite images.
20150720_N - 20140101_N: NDVI values extracted from satellite images acquired between January 2014 and July 2015, in reverse chronological order (dates given in the format yyyymmdd).
[Data Source](https://archive.ics.uci.edu/ml/datasets/Crowdsourced+Mapping) Import Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.feature_selection import SelectPercentile, SelectFromModel, RFE
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.metrics import plot_confusion_matrix, accuracy_score, average_precision_score, classification_report
%matplotlib inline
###Output
_____no_output_____
###Markdown
Import Data
###Code
train = pd.read_csv('training.csv')
test = pd.read_csv('testing.csv')
test.info()
train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 300 entries, 0 to 299
Data columns (total 29 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 class 300 non-null object
1 max_ndvi 300 non-null float64
2 20150720_N 300 non-null float64
3 20150602_N 300 non-null float64
4 20150517_N 300 non-null float64
5 20150501_N 300 non-null float64
6 20150415_N 300 non-null float64
7 20150330_N 300 non-null float64
8 20150314_N 300 non-null float64
9 20150226_N 300 non-null float64
10 20150210_N 300 non-null float64
11 20150125_N 300 non-null float64
12 20150109_N 300 non-null float64
13 20141117_N 300 non-null float64
14 20141101_N 300 non-null float64
15 20141016_N 300 non-null float64
16 20140930_N 300 non-null float64
17 20140813_N 300 non-null float64
18 20140626_N 300 non-null float64
19 20140610_N 300 non-null float64
20 20140525_N 300 non-null float64
21 20140509_N 300 non-null float64
22 20140423_N 300 non-null float64
23 20140407_N 300 non-null float64
24 20140322_N 300 non-null float64
25 20140218_N 300 non-null float64
26 20140202_N 300 non-null float64
27 20140117_N 300 non-null float64
28 20140101_N 300 non-null float64
dtypes: float64(28), object(1)
memory usage: 68.1+ KB
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10545 entries, 0 to 10544
Data columns (total 29 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 class 10545 non-null object
1 max_ndvi 10545 non-null float64
2 20150720_N 10545 non-null float64
3 20150602_N 10545 non-null float64
4 20150517_N 10545 non-null float64
5 20150501_N 10545 non-null float64
6 20150415_N 10545 non-null float64
7 20150330_N 10545 non-null float64
8 20150314_N 10545 non-null float64
9 20150226_N 10545 non-null float64
10 20150210_N 10545 non-null float64
11 20150125_N 10545 non-null float64
12 20150109_N 10545 non-null float64
13 20141117_N 10545 non-null float64
14 20141101_N 10545 non-null float64
15 20141016_N 10545 non-null float64
16 20140930_N 10545 non-null float64
17 20140813_N 10545 non-null float64
18 20140626_N 10545 non-null float64
19 20140610_N 10545 non-null float64
20 20140525_N 10545 non-null float64
21 20140509_N 10545 non-null float64
22 20140423_N 10545 non-null float64
23 20140407_N 10545 non-null float64
24 20140322_N 10545 non-null float64
25 20140218_N 10545 non-null float64
26 20140202_N 10545 non-null float64
27 20140117_N 10545 non-null float64
28 20140101_N 10545 non-null float64
dtypes: float64(28), object(1)
memory usage: 2.3+ MB
###Markdown
Get Insight
###Code
train.head(3)
sns.countplot(train['class'])
## Encoding class label to numeric
encoder = LabelEncoder()
classes = encoder.fit_transform(train['class'])
train['class'] = classes
sns.countplot(train['class'])
list(zip(encoder.classes_, sorted(train['class'].unique())))
train.tail()
sns.distplot(train)
plt.title('Data Distribution')
plt.figure(figsize=(25,10))
class_ndvi = train[train['class'] == 0]
sns.distplot(class_ndvi['max_ndvi'], label='Farm')
class_ndvi = train[train['class'] == 1]
sns.distplot(class_ndvi['max_ndvi'], label='forest')
class_ndvi = train[train['class'] == 2]
sns.distplot(class_ndvi['max_ndvi'], label='grass')
class_ndvi = train[train['class'] == 3]
sns.distplot(class_ndvi['max_ndvi'], label='impervious')
class_ndvi = train[train['class'] == 4]
sns.distplot(class_ndvi['max_ndvi'], label='orchard')
class_ndvi = train[train['class'] == 5]
sns.distplot(class_ndvi['max_ndvi'], label='water')
plt.legend();
## adding mean and median column
median = train.iloc[:,2:].median(axis=1)
mean = train.iloc[:,2:].mean(axis=1)
train['median_NDVI'] = median
train['mean_NDVI'] = mean
train.head()
plt.figure(figsize=(25,10))
class_ndvi = train[train['class'] == 0]
sns.distplot(class_ndvi['mean_NDVI'], label='Farm')
class_ndvi = train[train['class'] == 1]
sns.distplot(class_ndvi['mean_NDVI'], label='forest')
class_ndvi = train[train['class'] == 2]
sns.distplot(class_ndvi['mean_NDVI'], label='grass')
class_ndvi = train[train['class'] == 3]
sns.distplot(class_ndvi['mean_NDVI'], label='impervious')
class_ndvi = train[train['class'] == 4]
sns.distplot(class_ndvi['mean_NDVI'], label='orchard')
class_ndvi = train[train['class'] == 5]
sns.distplot(class_ndvi['mean_NDVI'], label='water')
plt.legend();
plt.figure(figsize=(25,10))
class_ndvi = train[train['class'] == 0]
sns.distplot(class_ndvi['median_NDVI'], label='Farm')
class_ndvi = train[train['class'] == 1]
sns.distplot(class_ndvi['median_NDVI'], label='forest')
class_ndvi = train[train['class'] == 2]
sns.distplot(class_ndvi['median_NDVI'], label='grass')
class_ndvi = train[train['class'] == 3]
sns.distplot(class_ndvi['median_NDVI'], label='impervious')
class_ndvi = train[train['class'] == 4]
sns.distplot(class_ndvi['median_NDVI'], label='orchard')
class_ndvi = train[train['class'] == 5]
sns.distplot(class_ndvi['median_NDVI'], label='water')
plt.legend();
sns.distplot(train['mean_NDVI'], label='mean')
sns.distplot(train['median_NDVI'], label='median')
sns.distplot(train['max_ndvi'], label='Max')
plt.title('Maen, Median and Max NDVI Distribusion')
plt.legend();
plt.scatter(train['mean_NDVI'], train['max_ndvi'], c=train['class'], alpha=.5);
plt.title("Mean vs Max NDVI");
plt.scatter(train['20140101_N'], train['20140117_N'], c=train['class'], alpha=0.5);
plt.title('sample from 2 coloumn NDVI');
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# split and scaling the data
X_train, X_val, y_train, y_val = train_test_split(train.iloc[:, 1:], train['class'], stratify=train['class'], test_size=.2)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_val_scal = scaler.transform(X_val)
def model_valid_score(model, X_train, y_train):
model.fit(X_train, y_train)
print(classification_report(y_train, model.predict(X_train)))
plot_confusion_matrix(model, X_train, y_train)
print("General accuration Score: ",accuracy_score(y_train, model.predict(X_train)))
###Output
_____no_output_____
###Markdown
Feature engineering and selection ANOVA
###Code
select = SelectPercentile(percentile=70) # keep 70% of the features
select.fit(X_train_scaled, y_train)
X_selected = select.transform(X_train_scaled)
print("X selected: ", X_selected.shape)
print("X shape: ", X_train.shape)
print()
mask = select.get_support()
# feature = list(zip(train.iloc[:,1:].columns, mask))
print(mask)
# visualize the mask. black is True, white is False
plt.matshow(mask.reshape(1, -1), cmap='gray_r');
# with selected feature
model_valid_score(SVC(), X_selected, y_train)
# without selected feature
model_valid_score(SVC(), X_train_scaled, y_train)
###Output
precision recall f1-score support
0 0.93 0.92 0.93 1153
1 0.98 0.99 0.99 5945
2 0.96 0.85 0.90 357
3 0.91 0.93 0.92 775
4 1.00 0.57 0.73 42
5 0.99 0.84 0.91 164
accuracy 0.97 8436
macro avg 0.96 0.85 0.89 8436
weighted avg 0.97 0.97 0.97 8436
General accuration Score: 0.9672830725462305
###Markdown
Model Based Selection
###Code
select_m = SelectFromModel(RandomForestClassifier(n_estimators=300), threshold='median')
select_m.fit(X_train_scaled, y_train)
X_select_model = select_m.transform(X_train_scaled)
mask = select_m.get_support()
print(mask)
# visualize the mask. black is True, white is False
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
X_select_model.shape
model_valid_score(SVC(), X_select_model, y_train)
###Output
precision recall f1-score support
0 0.90 0.87 0.88 1153
1 0.96 0.99 0.98 5945
2 0.89 0.76 0.82 357
3 0.88 0.88 0.88 775
4 1.00 0.10 0.17 42
5 0.96 0.79 0.87 164
accuracy 0.94 8436
macro avg 0.93 0.73 0.77 8436
weighted avg 0.94 0.94 0.94 8436
General accuration Score: 0.9449976292081556
###Markdown
Recursive Feature Elimination (RFE)
###Code
select_rfe = RFE(RandomForestClassifier(n_estimators=50), n_features_to_select=15)
select_rfe.fit(X_train_scaled, y_train)
X_select_RFE = select_rfe.transform(X_train_scaled)
mask = select_rfe.get_support()
print(mask)
# visualize the mask. black is True, white is False
plt.matshow(mask.reshape(1, -1), cmap='gray_r')
model_valid_score(SVC(), X_select_RFE, y_train)
###Output
precision recall f1-score support
0 0.89 0.87 0.88 1153
1 0.96 0.99 0.98 5945
2 0.94 0.79 0.86 357
3 0.89 0.88 0.88 775
4 1.00 0.19 0.32 42
5 0.98 0.79 0.87 164
accuracy 0.95 8436
macro avg 0.94 0.75 0.80 8436
weighted avg 0.95 0.95 0.94 8436
General accuration Score: 0.9467757230915126
###Markdown
Without feature elimination the accuracy score is slightly better, so I don't eliminate any features. Model Building and Selection
###Code
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingClassifier
k_fold = KFold(n_splits=6)
def model_selection(model, X_train, y_train):
model.fit(X_train, y_train)
cros_val = cross_val_score(model, X_train, y_train, cv=k_fold, scoring='accuracy')
return np.mean(cros_val)
mods = {"knn": KNeighborsClassifier(n_neighbors=10), "SVC": SVC(kernel='rbf'), "RFC": RandomForestClassifier(), "GBC":GradientBoostingClassifier(n_estimators=50)}
score = {}
for key, val in mods.items():
score[key] = model_selection(val, X_train_scaled, y_train)
score
###Output
_____no_output_____
###Markdown
Pipeline and GridSearch
###Code
from sklearn.model_selection import GridSearchCV
pipe = make_pipeline(StandardScaler(), SVC())
param = {"svc__C": [ 1, 3, 5, 6, 7, 20, 100],}
grid = GridSearchCV(pipe, param, scoring="accuracy", n_jobs=-1, cv=k_fold)
grid.fit(X_train, y_train)
print("Score: ", grid.best_score_,"\nBest estimator: ", grid.best_estimator_)
grid.best_params_
pipe = make_pipeline(StandardScaler(), SVC(C=6))
pipe.fit(X_train, y_train)
pipe.score(X_val, y_val)
###Output
_____no_output_____
###Markdown
Applying model to test data
###Code
#feature adding
median = test.iloc[:,2:].median(axis=1)
mean = test.iloc[:,2:].mean(axis=1)
test['median_NDVI'] = median
test['mean_NDVI'] = mean
test.head()
test['class'] = encoder.transform(test['class'])
test.tail(2)
#Split data and label
X, y = test.iloc[:,1:], test['class']
pipe.score(X, y)
###Output
_____no_output_____ |
examples/lrfinder_mnist.ipynb | ###Markdown
MNIST example with 3-conv. layer network This example demonstrates the usage of `LRFinder` with a 3-conv. layer network on the MNIST dataset.
###Code
%matplotlib inline
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
try:
from torch_lr_finder import LRFinder
except ImportError:
# Run from source
import sys
sys.path.insert(0, '..')
from torch_lr_finder import LRFinder
###Output
_____no_output_____
###Markdown
Loading MNIST
###Code
mnist_pwd = "../data"
batch_size= 256
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
trainset = MNIST(mnist_pwd, train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=0)
testset = MNIST(mnist_pwd, train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=batch_size * 2, shuffle=False, num_workers=0)
###Output
_____no_output_____
###Markdown
Model
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = Net()
# Evaluate the initial weights
named_params = model.named_parameters()
for name, module in named_params:
print(f'- Module {name}\n \t \t mean: {module.mean()} \n \t \t max: {module.max()} \n \t \t min: {module.min()}')
###Output
- Module conv1.weight
mean: -0.008412416093051434
max: 0.19864432513713837
min: -0.20022745430469513
- Module conv1.bias
mean: -0.049846068024635315
max: 0.1648939996957779
min: -0.19886578619480133
- Module conv2.weight
mean: -0.0003912231477443129
max: 0.0681159719824791
min: -0.06377794593572617
- Module conv2.bias
mean: 0.009952106513082981
max: 0.061483364552259445
min: -0.05263204500079155
- Module fc1.weight
mean: 0.00047862465726211667
max: 0.058644477277994156
min: -0.056776463985443115
- Module fc1.bias
mean: 0.001839962205849588
max: 0.05351809412240982
min: -0.0542694628238678
- Module fc2.weight
mean: -0.004528908059000969
max: 0.14259041845798492
min: -0.1416158825159073
- Module fc2.bias
mean: 0.03527223318815231
max: 0.13590747117996216
min: -0.13963232934474945
###Markdown
Training loss (fastai)This learning rate test range follows the same procedure used by fastai. The model is trained for `num_iter` iterations while the learning rate is increased from its initial value specified by the optimizer algorithm to `end_lr`. The increase can be linear (`step_mode="linear"`) or exponential (`step_mode="exp"`); linear provides good results for small ranges while exponential is recommended for larger ranges.
###Code
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.5)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
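###Markdown
For intuition, the exponential mode spaces the tested learning rates geometrically between the optimizer's initial value and `end_lr`. A minimal sketch of the assumed schedule (illustration only, not `LRFinder`'s internals):
###Code
import numpy as np
start_lr, end_lr, num_iter = 0.001, 10, 100
# Geometric spacing: each step multiplies the learning rate by a constant ratio
exp_lrs = start_lr * (end_lr / start_lr) ** (np.arange(num_iter) / (num_iter - 1))
print(exp_lrs[0], exp_lrs[-1])  # 0.001 ... 10.0
###Output
_____no_output_____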
###Markdown
Note that the loss in the loss vs. learning rate plot is the **training** loss.
###Code
lr_finder.plot(log_lr=True)
# Show some lrs
lr_finder.plot(log_lr=True, show_lr=[1e-1, 0.1, 0.2, 0.3, 0.4, 0.5])
# Evaluate weights after finding the lr
named_params = model.named_parameters()
for name, param in named_params:
    print(f'- Module {name}\n \t \t mean: {param.mean()} \n \t \t max: {param.max()} \n \t \t min: {param.min()}')
###Output
- Module conv1.weight
mean: -0.5631905794143677
max: 1.0295658111572266
min: -7.622307777404785
- Module conv1.bias
mean: -0.9652428030967712
max: 0.6579212546348572
min: -4.344000339508057
- Module conv2.weight
mean: -0.3166534900665283
max: 0.1688554584980011
min: -37.87096405029297
- Module conv2.bias
mean: -0.42159876227378845
max: -0.021250568330287933
min: -4.632826805114746
- Module fc1.weight
mean: -0.11736834794282913
max: 0.8992581963539124
min: -26.487070083618164
- Module fc1.bias
mean: -0.3351338803768158
max: -0.014729334972798824
min: -2.045773983001709
- Module fc2.weight
mean: -0.004810352344065905
max: 2.647152900695801
min: -7.845766544342041
- Module fc2.bias
mean: 0.015598462894558907
max: 0.3336434066295624
min: -0.4912267327308655
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
# Evaluate weights after reseting the lr
named_params = model.named_parameters()
for name, param in named_params:
    print(f'- Module {name}\n \t \t mean: {param.mean()} \n \t \t max: {param.max()} \n \t \t min: {param.min()}')
###Output
- Module conv1.weight
mean: 3.215736069250852e-05
max: 0.19959957897663116
min: -0.1992667019367218
- Module conv1.bias
mean: 0.019651666283607483
max: 0.16063104569911957
min: -0.17914514243602753
- Module conv2.weight
mean: -0.0006517112487927079
max: 0.06324551999568939
min: -0.06322894990444183
- Module conv2.bias
mean: -0.0022223014384508133
max: 0.05429922044277191
min: -0.06234666705131531
- Module fc1.weight
mean: -0.00022833132243249565
max: 0.05589595437049866
min: -0.05590146407485008
- Module fc1.bias
mean: -0.005663219839334488
max: 0.05146395415067673
min: -0.05483570694923401
- Module fc2.weight
mean: 0.0023726827930659056
max: 0.14094178378582
min: -0.14012092351913452
- Module fc2.bias
mean: -0.03946481645107269
max: 0.12451539933681488
min: -0.13725951313972473
###Markdown
We can also run the test with a different starting learning rate without creating a new optimizer using the `start_lr` parameter.
###Code
lr_finder.range_test(trainloader, start_lr=0.02, end_lr=1.5, num_iter=100, step_mode="linear")
lr_finder.plot(log_lr=False)
lr_finder.reset()
lr_finder.range_test(trainloader, start_lr=0.02, end_lr=1.5, num_iter=100, step_mode="exp")
lr_finder.plot(log_lr=False)
lr_finder.reset()
###Output
_____no_output_____
###Markdown
Validation loss (Leslie N. Smith)If a dataloader is passed to `LRFinder.range_test()` through the `val_loader` parameter the model is evaluated on that dataset after each iteration. The evaluation loss is more sensitive to instability therefore it provides a more precise view of when the divergence occurs. The disadvantage is that it takes significantly longer to run.This version of the learning rate range test is described in [Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith](https://arxiv.org/abs/1506.01186).
###Code
lr_finder.range_test(trainloader, val_loader=testloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
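###Markdown
Conceptually, this variant interleaves one training step with a full pass over the validation set (an illustrative sketch only; the real work happens inside `LRFinder.range_test`):
###Code
# Illustrative pseudologic of the validation-loss range test, not LRFinder internals:
# for lr in schedule:
#     set_lr(optimizer, lr)                        # raise the learning rate one notch
#     train_one_batch(model)                       # a single training step at this lr
#     history.append(evaluate(model, val_loader))  # full validation pass each step
###Output
_____no_output_____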
###Markdown
Note that the loss in the loss vs. learning rate plot is the **evaluation** loss.
###Code
lr_finder.plot(skip_end=1)
###Output
LR suggestion: steepest gradient
Suggested LR: 2.54E-03
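###Markdown
The suggestion can also be recovered programmatically. A sketch assuming the recorded history (`lr_finder.history` holds the tested learning rates and losses):
###Code
import numpy as np
lrs = np.array(lr_finder.history["lr"])
losses = np.array(lr_finder.history["loss"])
# Steepest-gradient heuristic: the lr at which the loss is falling fastest
suggested = lrs[np.gradient(losses).argmin()]
print(f"Suggested LR: {suggested:.2e}")
###Output
_____no_output_____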
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
# A second validation-loss range test over a narrower (exponential) interval
lr_finder.range_test(trainloader, val_loader=testloader, end_lr=1, num_iter=100, step_mode="exp")
lr_finder.plot(log_lr=True)
lr_finder.reset()
###Output
_____no_output_____
###Markdown
Evaluate found learning rates
###Code
criterion = nn.NLLLoss()
def train_one_epoch(model, optimizer, epoch=0):
model.train()
total_loss = 0.
    for i, (imgs, targets) in enumerate(trainloader):  # avoid shadowing the built-in iter
        if i > 1000:
break
imgs = imgs.cuda()
targets = targets.cuda()
optimizer.zero_grad()
outputs = model(imgs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
total_loss += loss.detach().cpu().item()
    print(f'\n[Training| Epoch {epoch}| loss: {total_loss/(i+1)}')
return model
def evaluation(model):
total_loss = 0.
model.eval()
    for imgs, targets in testloader:
imgs = imgs.cuda()
targets = targets.cuda()
with torch.no_grad():
outputs = model(imgs)
loss = criterion(outputs, targets)
total_loss += loss
print(f'\n[Eval] Avg loss: {total_loss/len(testloader)}\n===============')
lrs = [2.42e-1, 4.09e-1, 1.62e-1, 2.54e-3, 2.01e-3]
import torch
torch.manual_seed(0)
for lr in lrs:
print(f'\n==================== LR = {lr} ======================\n')
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.5)
for epoch in range(3):
trained_model = train_one_epoch(model, optimizer, epoch)
evaluation(trained_model)
###Output
==================== LR = 0.242 ======================
[Training| Epoch 0| loss: 0.5596843419556922
[Eval] Avg loss: 0.12352388352155685
===============
[Training| Epoch 1| loss: 0.2682568329445859
[Eval] Avg loss: 0.07258277386426926
===============
[Training| Epoch 2| loss: 0.22360476239564572
[Eval] Avg loss: 0.06536003202199936
===============
==================== LR = 0.409 ======================
[Training| Epoch 0| loss: 0.7231562564981745
[Eval] Avg loss: 0.11664595454931259
===============
[Training| Epoch 1| loss: 0.3759063603396111
[Eval] Avg loss: 0.1170477494597435
===============
[Training| Epoch 2| loss: 0.32550881872785853
[Eval] Avg loss: 0.10846175998449326
===============
==================== LR = 0.162 ======================
[Training| Epoch 0| loss: 0.5673917628349142
[Eval] Avg loss: 0.13004760444164276
===============
[Training| Epoch 1| loss: 0.27389642336267106
[Eval] Avg loss: 0.07550141960382462
===============
[Training| Epoch 2| loss: 0.22202362979346132
[Eval] Avg loss: 0.062488801777362823
===============
==================== LR = 0.00254 ======================
[Training| Epoch 0| loss: 2.268298054756002
[Eval] Avg loss: 2.1581778526306152
===============
[Training| Epoch 1| loss: 2.023527922021582
[Eval] Avg loss: 1.6050752401351929
===============
[Training| Epoch 2| loss: 1.5393410637023601
[Eval] Avg loss: 0.9529599547386169
===============
==================== LR = 0.00201 ======================
[Training| Epoch 0| loss: 2.2789655005678218
[Eval] Avg loss: 2.2260220050811768
===============
[Training| Epoch 1| loss: 2.1744791629466604
[Eval] Avg loss: 1.981873869895935
===============
[Training| Epoch 2| loss: 1.8234486351621912
[Eval] Avg loss: 1.2440943717956543
===============
###Markdown
MNIST example with 3-conv. layer networkThis example demonstrates the usage of `LRFinder` with a 3-conv. layer network on the MNIST dataset.
###Code
%matplotlib inline
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
try:
from torch_lr_finder import LRFinder
except ImportError:
# Run from source
import sys
sys.path.insert(0, '..')
from torch_lr_finder import LRFinder
###Output
/home/davidtvs-nb/lr-finder-rev/py3/pytorch-lr-finder/env/lib/python3.6/site-packages/torch_lr_finder/lr_finder.py:5: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
from tqdm.autonotebook import tqdm
###Markdown
Loading MNIST
###Code
mnist_pwd = "../data"
batch_size= 256
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
trainset = MNIST(mnist_pwd, train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=0)
testset = MNIST(mnist_pwd, train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=batch_size * 2, shuffle=False, num_workers=0)
###Output
_____no_output_____
###Markdown
Model
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = Net()
###Output
_____no_output_____
###Markdown
Training loss (fastai)This learning rate test range follows the same procedure used by fastai. The model is trained for `num_iter` iterations while the learning rate is increased from its initial value specified by the optimizer algorithm to `end_lr`. The increase can be linear (`step_mode="linear"`) or exponential (`step_mode="exp"`); linear provides good results for small ranges while exponential is recommended for larger ranges.
###Code
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.5)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
###Markdown
Note that the loss in the loss vs. learning rate plot is the **training** loss.
###Code
lr_finder.plot()
###Output
_____no_output_____
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
###Output
_____no_output_____
###Markdown
We can also run the test with a different starting learning rate without creating a new optimizer using the `start_lr` parameter.
###Code
lr_finder.range_test(trainloader, start_lr=0.02, end_lr=1.5, num_iter=100, step_mode="exp")
lr_finder.plot()
lr_finder.reset()
###Output
_____no_output_____
###Markdown
Validation loss (Leslie N. Smith)If a dataloader is passed to `LRFinder.range_test()` through the `val_loader` parameter the model is evaluated on that dataset after each iteration. The evaluation loss is more sensitive to instability therefore it provides a more precise view of when the divergence occurs. The disadvantage is that it takes significantly longer to run.This version of the learning rate range test is described in [Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith](https://arxiv.org/abs/1506.01186).
###Code
lr_finder.range_test(trainloader, val_loader=testloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
###Markdown
Note that the loss in the loss vs. learning rate plot is the **evaluation** loss.
###Code
lr_finder.plot(skip_end=1)
###Output
_____no_output_____
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
###Output
_____no_output_____
###Markdown
MNIST example with 3-conv. layer networkThis example demonstrates the usage of `LRFinder` with a 3-conv. layer network on the MNIST dataset.
###Code
%matplotlib inline
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torch_lr_finder import LRFinder
###Output
/home/davidtvs/datascience/pytorch/pytorch-lr-finder/env/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
" (e.g. in jupyter console)", TqdmExperimentalWarning)
###Markdown
Loading MNIST
###Code
mnist_pwd = "../data"
batch_size= 256
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
trainset = MNIST(mnist_pwd, train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=0)
testset = MNIST(mnist_pwd, train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=batch_size * 2, shuffle=False, num_workers=0)
###Output
_____no_output_____
###Markdown
Model
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = Net()
###Output
_____no_output_____
###Markdown
Training loss (fastai)This learning rate test range follows the same procedure used by fastai. The model is trained for `num_iter` iterations while the learning rate is increased from its initial value specified by the optimizer algorithm to `end_lr`. The increase can be linear (`step_mode="linear"`) or exponential (`step_mode="exp"`); linear provides good results for small ranges while exponential is recommended for larger ranges.
###Code
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.5)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
###Markdown
Note that the loss in the loss vs. learning rate plot is the **training** loss.
###Code
lr_finder.plot()
###Output
_____no_output_____
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
###Output
_____no_output_____
###Markdown
Validation loss (Leslie N. Smith)If a dataloader is passed to `LRFinder.range_test()` through the `val_loader` parameter the model is evaluated on that dataset after each iteration. The evaluation loss is more sensitive to instability therefore it provides a more precise view of when the divergence occurs. The disadvantage is that it takes significantly longer to run.This version of the learning rate range test is described in [Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith](https://arxiv.org/abs/1506.01186).
###Code
lr_finder.range_test(trainloader, val_loader=testloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
###Markdown
Note that the loss in the loss vs. learning rate plot is the **evaluation** loss.
###Code
lr_finder.plot(skip_end=1)
###Output
_____no_output_____
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
###Output
_____no_output_____
###Markdown
MNIST example with 3-conv. layer networkThis example demonstrates the usage of `LRFinder` with a 3-conv. layer network on the MNIST dataset.
###Code
%matplotlib inline
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import sys
sys.path.append('..')
from lr_finder import LRFinder
###Output
/home/davidtvs/datascience/pytorch/pytorch-lr-finder/env/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
" (e.g. in jupyter console)", TqdmExperimentalWarning)
###Markdown
Loading MNIST
###Code
mnist_pwd = "../data"
batch_size= 256
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
trainset = MNIST(mnist_pwd, train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=0)
testset = MNIST(mnist_pwd, train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=batch_size * 2, shuffle=False, num_workers=0)
###Output
_____no_output_____
###Markdown
Model
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = Net()
###Output
_____no_output_____
###Markdown
Training loss (fastai)This learning rate test range follows the same procedure used by fastai. The model is trained for `num_iter` iterations while the learning rate is increased from its initial value specified by the optimizer algorithm to `end_lr`. The increase can be linear (`step_mode="linear"`) or exponential (`step_mode="exp"`); linear provides good results for small ranges while exponential is recommended for larger ranges.
###Code
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.5)
lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(trainloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
###Markdown
Note that the loss in the loss vs. learning rate plot is the **training** loss.
###Code
lr_finder.plot()
###Output
_____no_output_____
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
###Output
_____no_output_____
###Markdown
Validation loss (Leslie N. Smith)If a dataloader is passed to `LRFinder.range_test()` through the `val_loader` parameter the model is evaluated on that dataset after each iteration. The evaluation loss is more sensitive to instability therefore it provides a more precise view of when the divergence occurs. The disadvantage is that it takes significantly longer to run.This version of the learning rate range test is described in [Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith](https://arxiv.org/abs/1506.01186).
###Code
lr_finder.range_test(trainloader, val_loader=testloader, end_lr=10, num_iter=100, step_mode="exp")
###Output
_____no_output_____
###Markdown
Note that the loss in the loss vs. learning rate plot is the **evaluation** loss.
###Code
lr_finder.plot(skip_end=1)
###Output
_____no_output_____
###Markdown
To restore the model and optimizer to their initial state use the `reset()` method.
###Code
lr_finder.reset()
###Output
_____no_output_____ |
Facial_expression_detection_tensorflow_CNN_2_Classes.ipynb | ###Markdown
TensorFlow with Convolution Layers Example - 2 classes facial expression detection sample
###Code
#Python v3
#Tensorflow with Convolution Layers Example - 2 classes facial expression detection sample
# Colab code - Adapted From Coursera - DeepLearning.AI TensorFlow CNN course
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
from google.colab import files
from keras.preprocessing import image
local_zip = '/tmp/facial_expressions.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/facial_expressions')
zip_ref.close()
local_zip = '/tmp/facial_expressions_validation.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/facial_expressions_validation')
zip_ref.close()
train_happy_dir = os.path.join('/tmp/facial_expressions/happy')
train_sad_dir = os.path.join('/tmp/facial_expressions/sad')
validation_happy_dir = os.path.join('/tmp/facial_expressions_validation/happy')
validation_sad_dir = os.path.join('/tmp/facial_expressions_validation/sad')
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
    # Only 1 output neuron. It will contain a value from 0-1, where 0 corresponds to one class ('happy') and 1 to the other ('sad')
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['accuracy'])
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tmp/facial_expressions/', # This is the source directory for training images
target_size=(300, 300), # All images will be resized to 300x300
batch_size=128,
class_mode='binary')
# Flow validation images in batches of 32 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
        '/tmp/facial_expressions_validation/',  # This is the source directory for validation images
target_size=(300, 300), # All images will be resized to 300x300
batch_size=32,
class_mode='binary')
history = model.fit(
train_generator,
steps_per_epoch=1,
epochs=50,
verbose=1,
validation_data = validation_generator,
validation_steps=1)
uploaded = files.upload()
#print(train_generator.class_indices)
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(300, 300))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=10)
print(classes[0])
if classes[0]>0.5:
print(fn + " is sad")
else:
print(fn + " is happy")
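# Hedged extension (hypothetical, not part of the original run): with a small
# two-class dataset, light augmentation on the training generator usually helps
train_datagen_aug = ImageDataGenerator(
    rescale=1/255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True)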
###Output
_____no_output_____ |
Casos de estudo/Brasil/Covid-19Brazil/statsBrazil.ipynb | ###Markdown
Compare Brazil with the most affected countries
###Code
from datetime import date
import pandas as pd
import matplotlib as mlp
import matplotlib.pyplot as plt
import numpy as np
import csv
import os
from matplotlib.pyplot import pie, axis, show
%matplotlib inline
%load_ext sql
# Connection string for the local PostgreSQL database
# (credentials are hardcoded here; ideally they would come from environment variables)
connection_string = "postgresql://postgres:1234@localhost/Miebiom"
%sql $connection_string
# Quick connectivity check against an existing table in the database
%sql select concelho from "import"
result = %sql SELECT country_region, confirmed FROM worldcases WHERE confirmed>100000
# usando pandas
dataframe = result.DataFrame()
plt.rcParams.update(plt.rcParamsDefault)
plt.style.use('grayscale')
today = date.today()
d1 = today.strftime("%d/%m/%Y")
state = dataframe['country_region']
totalCases = dataframe['confirmed']
fig, ax=plt.subplots()
ax.bar(dataframe.index, dataframe['confirmed'])
ax.set_xticks(np.arange(len(list(state))))
ax.set_xticklabels(dataframe['country_region'],rotation=90,fontsize='12')
ax.set_title('Number of cases at '+ d1, fontsize=22)
ax.set_ylabel('# Cases')
#plt.ylim([0,2000000])
plt.yscale("log")
with open("C:/OSGeo4W64/bin/Andre/Covid-19Brazil/Tables/cases-brazil-total.csv", 'r') as f:
mycsv = csv.reader(f)
mycsv = list(mycsv)
text = mycsv[1][2]
print("Total number of cases are "+text)
plt.savefig('fig/worldnumberCases.jpg', bbox_inches='tight')
plt.show()
###Output
* postgresql://postgres:***@localhost/Miebiom
278 rows affected.
* postgresql://postgres:***@localhost/Miebiom
12 rows affected.
Total number of cases are 469850
###Markdown
Pie chart of world cases - countries with over 100,000 cases.
###Code
import os
import pandas as pd
from matplotlib.pyplot import pie, axis, show
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext sql
# Connection string for the local PostgreSQL database
# (credentials are hardcoded here; ideally they would come from environment variables)
connection_string = "postgresql://postgres:1234@localhost/Miebiom"
%sql $connection_string
# Quick connectivity check against an existing table in the database
%sql select concelho from "import"
result = %sql SELECT country_region, confirmed FROM worldcases WHERE confirmed>100000
# usando pandas
df = result.DataFrame()
#----------------
plt.rcParams.update(plt.rcParamsDefault)
plt.style.use('default')
state = df['country_region']
totalCases = df['confirmed']
x=[]
y=[]
x=list(totalCases)
y=list(state)
patches, texts, autotexts = plt.pie(x, labels=y,
autopct='%.0f%%',
textprops={'size': 'x-large'},
shadow=True, radius=1) #change size
plt.setp(autotexts, size='large')
plt.savefig('fig/pie.jpg', bbox_inches='tight')
plt.show()
###Output
The sql extension is already loaded. To reload it, use:
%reload_ext sql
* postgresql://postgres:***@localhost/Miebiom
278 rows affected.
* postgresql://postgres:***@localhost/Miebiom
12 rows affected.
###Markdown
To build a graph of the number of cases per date, it was necessary to construct a new .csv from the original one. The code below sums every date column and writes another .csv containing the column titles and the world totals only.
###Code
import csv

# Read the time-series file; the first 4 columns are metadata, the daily
# case counts start at index 4
with open('C:/OSGeo4W64/bin/Andre/Covid-19Brazil/Tables/time_series_covid19_confirmed_global.csv', 'rt', encoding='utf-8') as f:
    lines = list(csv.reader(f))

cabeca = lines[0][4:]        # the date labels (header)
lista = [0.0] * len(cabeca)  # world total per date
for row in lines[1:]:
    for j, val in enumerate(row[4:]):
        lista[j] += float(val)

# Write a two-row csv: the date labels and the corresponding world totals
with open('C:/OSGeo4W64/bin/Andre/Covid-19Brazil/Tables/time_total_world.csv', 'w', newline='') as myfile:
    wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
    wr.writerow(cabeca)
    wr.writerow(lista)
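
# Hedged alternative: the same world totals with pandas in one expression
# (assumes the identical file layout, with the date columns from index 4 onward)
world_totals = pd.read_csv(
    'C:/OSGeo4W64/bin/Andre/Covid-19Brazil/Tables/time_series_covid19_confirmed_global.csv'
).iloc[:, 4:].sum()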
from datetime import date
import pandas as pd
import matplotlib as mlp
import matplotlib.pyplot as plt
import numpy as np
import csv
plt.rcParams.update(plt.rcParamsDefault)
plt.style.use('default')
today = date.today()
d1 = today.strftime("%d/%m/%Y")
with open('C:/OSGeo4W64/bin/Andre/Covid-19Brazil/Tables/time_total_world.csv', newline='') as f:
reader = csv.reader(f)
data = list(reader)
print(len(data[0]))
print(len(data[1]))
data[1] = [float(v) for v in data[1]]  # cast the totals back to numbers
df = pd.DataFrame({'Day':data[0], 'Cases':data[1]})
ax =df.plot.bar(x='Day', y='Cases', rot=90,figsize=(30,30))
ax.set_title("Number of cases per day in all world")
plt.savefig('fig/world_number_cases.jpg', bbox_inches='tight')
plt.show()
###Output
128
128
|
09_matrix_factorization/src/py_part_9_kaggle_GLRM_example.ipynb | ###Markdown
License ***Copyright (C) 2017 J. Patrick Hall, [email protected] is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. *** Kaggle House Prices with GLRM Matrix Factorization Example Imports and inits
###Code
import h2o
from h2o.estimators.glrm import H2OGeneralizedLowRankEstimator
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.grid.grid_search import H2OGridSearch
h2o.init(max_mem_size='12G') # give h2o as much memory as possible
h2o.no_progress() # turn off h2o progress bars
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
###Output
Checking whether there is an H2O instance running at http://localhost:54321 ..... not found.
Attempting to start a local H2O server...
Java Version: java version "1.8.0_112"; Java(TM) SE Runtime Environment (build 1.8.0_112-b16); Java HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode)
Starting server from /Users/phall/anaconda3/lib/python3.6/site-packages/h2o/backend/bin/h2o.jar
Ice root: /var/folders/tc/0ss1l73113j3wdyjsxmy1j2r0000gn/T/tmpd4usa_h6
JVM stdout: /var/folders/tc/0ss1l73113j3wdyjsxmy1j2r0000gn/T/tmpd4usa_h6/h2o_phall_started_from_python.out
JVM stderr: /var/folders/tc/0ss1l73113j3wdyjsxmy1j2r0000gn/T/tmpd4usa_h6/h2o_phall_started_from_python.err
Server is running at http://127.0.0.1:54321
Connecting to H2O server at http://127.0.0.1:54321 ... successful.
###Markdown
Helper Functions Determine data types
###Code
def get_type_lists(frame, rejects=['Id', 'SalePrice']):
"""Creates lists of numeric and categorical variables.
:param frame: The frame from which to determine types.
:param rejects: Variable names not to be included in returned lists.
:return: Tuple of lists for numeric and categorical variables in the frame.
"""
nums, cats = [], []
for key, val in frame.types.items():
if key not in rejects:
if val == 'enum':
cats.append(key)
else:
nums.append(key)
print('Numeric =', nums)
print()
print('Categorical =', cats)
return nums, cats
###Output
_____no_output_____
###Markdown
Impute with GLRM
###Code
def glrm_num_impute(role, frame):
""" Helper function for imputing numeric variables using GLRM.
:param role: Role of frame to be imputed.
:param frame: H2OFrame to be imputed.
:return: H2OFrame of imputed numeric features.
"""
# count missing values in training data numeric columns
print(role + ' missing:\n', [cnt for cnt in frame.nacnt() if cnt != 0.0])
# initialize GLRM
matrix_complete_glrm = H2OGeneralizedLowRankEstimator(
k=10, # create 10 features
transform='STANDARDIZE', # <- seems very important
gamma_x=0.001, # regularization on values in X
gamma_y=0.05, # regularization on values in Y
impute_original=True)
# train GLRM
matrix_complete_glrm.train(training_frame=frame, x=original_nums)
# plot iteration history to ensure convergence
matrix_complete_glrm.score_history().plot(x='iterations', y='objective', title='GLRM Score History')
    # impute numeric inputs by multiplying the calculated X and Y factors to fill the missing values in train
num_impute = matrix_complete_glrm.predict(frame)
# count missing values in imputed set
print('imputed ' + role + ' missing:\n', [cnt for cnt in num_impute.nacnt() if cnt != 0.0])
return num_impute
###Output
_____no_output_____
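###Markdown
For intuition: GLRM approximates the data matrix as a product of two low-rank factors and reads imputed cells off that reconstruction. A toy numpy illustration of the idea (not h2o's internals):
###Code
import numpy as np
# Toy rank-2 reconstruction: every cell of A_hat, including positions that were
# missing in the original data, receives a value from the factor product
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
Y = rng.normal(size=(2, 4))
A_hat = X @ Y
print(A_hat.shape)  # (5, 4)
###Output
_____no_output_____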
###Markdown
Embed with GLRM
###Code
def glrm_cat_embed(frame):
""" Helper function for embedding caetgorical variables using GLRM.
:param frame: H2OFrame to be embedded.
:return: H2OFrame of embedded categorical features.
"""
# initialize GLRM
cat_embed_glrm = H2OGeneralizedLowRankEstimator(
k=50,
transform='STANDARDIZE',
loss='Quadratic',
regularization_x='Quadratic',
regularization_y='L1',
gamma_x=0.25,
gamma_y=0.5)
# train GLRM
cat_embed_glrm.train(training_frame=frame, x=cats)
# plot iteration history to ensure convergence
cat_embed_glrm.score_history().plot(x='iterations', y='objective', title='GLRM Score History')
# extracted embedded features
cat_embed = h2o.get_frame(cat_embed_glrm._model_json['output']['representation_name'])
return cat_embed
###Output
_____no_output_____
###Markdown
Import data
###Code
train = h2o.import_file('../../03_regression/data/train.csv')
test = h2o.import_file('../../03_regression/data/test.csv')
# bug fix - from Keston
dummy_col = np.random.rand(test.shape[0])
test = test.cbind(h2o.H2OFrame(dummy_col))
cols = test.columns
cols[-1] = 'SalePrice'
test.columns = cols
print(train.shape)
print(test.shape)
original_nums, cats = get_type_lists(train)
###Output
Numeric = ['MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd', 'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal', 'MoSold', 'YrSold']
Categorical = ['MSZoning', 'Street', 'Alley', 'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType', 'ExterQual', 'ExterCond', 'Foundation', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'Heating', 'HeatingQC', 'CentralAir', 'Electrical', 'KitchenQual', 'Functional', 'FireplaceQu', 'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond', 'PavedDrive', 'PoolQC', 'Fence', 'MiscFeature', 'SaleType', 'SaleCondition']
###Markdown
Split into train and validation (before doing data prep!!!)
###Code
train, valid = train.split_frame([0.7], seed=12345)
print(train.shape)
print(valid.shape)
###Output
(1001, 81)
(459, 81)
###Markdown
Impute numeric missing using GLRM matrix completion Training data
###Code
train_num_impute = glrm_num_impute('training', train)
train_num_impute.head()
###Output
_____no_output_____
###Markdown
Validation data
###Code
valid_num_impute = glrm_num_impute('validation', valid)
###Output
validation missing:
[80.0, 1.0, 33.0]
imputed validation missing:
[]
###Markdown
Test data
###Code
test_num_impute = glrm_num_impute('test', test)
###Output
test missing:
[227.0, 15.0, 1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 78.0, 1.0, 1.0]
imputed test missing:
[]
###Markdown
Embed categorical vars using GLRM Training data
###Code
train_cat_embed = glrm_cat_embed(train)
###Output
_____no_output_____
###Markdown
Validation data
###Code
valid_cat_embed = glrm_cat_embed(valid)
###Output
_____no_output_____
###Markdown
Test data
###Code
test_cat_embed = glrm_cat_embed(test)
###Output
_____no_output_____
###Markdown
Merge imputed and embedded frames
###Code
imputed_embedded_train = train[['Id', 'SalePrice']].cbind(train_num_impute).cbind(train_cat_embed)
imputed_embedded_valid = valid[['Id', 'SalePrice']].cbind(valid_num_impute).cbind(valid_cat_embed)
imputed_embedded_test = test[['Id', 'SalePrice']].cbind(test_num_impute).cbind(test_cat_embed)
###Output
_____no_output_____
###Markdown
Redefine numerics and explore
###Code
imputed_embedded_nums, cats = get_type_lists(imputed_embedded_train)
print('Imputed and encoded numeric training data:')
imputed_embedded_train.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric validation data:')
imputed_embedded_valid.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric test data:')
imputed_embedded_test.describe()
###Output
Imputed and encoded numeric training data:
Rows:1001
Cols:88
###Markdown
Train model on imputed, embedded features
###Code
h2o.show_progress() # turn on progress bars
# Check log transform - looks good
%matplotlib inline
imputed_embedded_train['SalePrice'].log().as_data_frame().hist()
# Execute log transform
imputed_embedded_train['SalePrice'] = imputed_embedded_train['SalePrice'].log()
imputed_embedded_valid['SalePrice'] = imputed_embedded_valid['SalePrice'].log()
print(imputed_embedded_train[0:3, 'SalePrice'])
###Output
_____no_output_____
###Markdown
Train GLM on imputed, embedded inputs
###Code
alpha_opts = [0.01, 0.25, 0.5, 0.99] # always keep some L2
hyper_parameters = {"alpha":alpha_opts}
# initialize grid search
grid = H2OGridSearch(
H2OGeneralizedLinearEstimator(
family="gaussian",
lambda_search=True,
seed=12345),
hyper_params=hyper_parameters)
# train grid
grid.train(y='SalePrice',
x=imputed_embedded_nums,
training_frame=imputed_embedded_train,
validation_frame=imputed_embedded_valid)
# show grid search results
print(grid.show())
best = grid.get_grid()[0]
print(best)
# plot top frame values
yhat_frame = imputed_embedded_valid.cbind(best.predict(imputed_embedded_valid))
print(yhat_frame[0:10, ['SalePrice', 'predict']])
# plot sorted predictions
yhat_frame_df = yhat_frame[['SalePrice', 'predict']].as_data_frame()
yhat_frame_df.sort_values(by='predict', inplace=True)
yhat_frame_df.reset_index(inplace=True, drop=True)
_ = yhat_frame_df.plot(title='Ranked Predictions Plot')
# Shutdown H2O - this will erase all your unsaved frames and models in H2O
h2o.cluster().shutdown(prompt=True)
###Output
Are you sure you want to shutdown the H2O instance running at http://127.0.0.1:54321 (Y/N)? y
H2O session _sid_acfd closed.
###Markdown
License ***Copyright (C) 2017 J. Patrick Hall, [email protected] is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. *** Kaggle House Prices with GLRM Matrix Factorization Example Imports and inits
###Code
import h2o
from h2o.estimators.glrm import H2OGeneralizedLowRankEstimator
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
from h2o.grid.grid_search import H2OGridSearch
h2o.init(max_mem_size='12G') # give h2o as much memory as possible
h2o.no_progress() # turn off h2o progress bars
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
###Output
Checking whether there is an H2O instance running at http://localhost:54321..... not found.
Attempting to start a local H2O server...
Java Version: java version "1.8.0_112"; Java(TM) SE Runtime Environment (build 1.8.0_112-b16); Java HotSpot(TM) 64-Bit Server VM (build 25.112-b16, mixed mode)
Starting server from /Users/phall/anaconda/lib/python3.5/site-packages/h2o/backend/bin/h2o.jar
Ice root: /var/folders/tc/0ss1l73113j3wdyjsxmy1j2r0000gn/T/tmpyvknifvq
JVM stdout: /var/folders/tc/0ss1l73113j3wdyjsxmy1j2r0000gn/T/tmpyvknifvq/h2o_phall_started_from_python.out
JVM stderr: /var/folders/tc/0ss1l73113j3wdyjsxmy1j2r0000gn/T/tmpyvknifvq/h2o_phall_started_from_python.err
Server is running at http://127.0.0.1:54321
Connecting to H2O server at http://127.0.0.1:54321... successful.
###Markdown
Helper Functions Determine data types
###Code
def get_type_lists(frame, rejects=['Id', 'SalePrice']):
"""Creates lists of numeric and categorical variables.
:param frame: The frame from which to determine types.
:param rejects: Variable names not to be included in returned lists.
:return: Tuple of lists for numeric and categorical variables in the frame.
"""
nums, cats = [], []
for key, val in frame.types.items():
if key not in rejects:
if val == 'enum':
cats.append(key)
else:
nums.append(key)
print('Numeric =', nums)
print()
print('Categorical =', cats)
return nums, cats
###Output
_____no_output_____
###Markdown
Impute with GLRM
###Code
def glrm_num_impute(role, frame):
""" Helper function for imputing numeric variables using GLRM.
:param role: Role of frame to be imputed.
:param frame: H2OFrame to be imputed.
:return: H2OFrame of imputed numeric features.
"""
# count missing values in training data numeric columns
print(role + ' missing:\n', [cnt for cnt in frame.nacnt() if cnt != 0.0])
# initialize GLRM
matrix_complete_glrm = H2OGeneralizedLowRankEstimator(
k=10, # create 10 features
transform='STANDARDIZE', # <- seems very important
gamma_x=0.001, # regularization on values in X
gamma_y=0.05) # regularization on values in Y
# train GLRM
matrix_complete_glrm.train(training_frame=frame, x=original_nums)
# plot iteration history to ensure convergence
matrix_complete_glrm.score_history().plot(x='iteration', y='objective', title='GLRM Score History')
    # impute numeric inputs by multiplying the calculated X and Y factors to fill the missing values in train
num_impute = matrix_complete_glrm.predict(frame)
# count missing values in imputed set
print('imputed ' + role + ' missing:\n', [cnt for cnt in num_impute.nacnt() if cnt != 0.0])
return num_impute
###Output
_____no_output_____
###Markdown
Embed with GLRM
###Code
def glrm_cat_embed(frame):
""" Helper function for embedding caetgorical variables using GLRM.
:param frame: H2OFrame to be embedded.
:return: H2OFrame of embedded categorical features.
"""
# initialize GLRM
cat_embed_glrm = H2OGeneralizedLowRankEstimator(
k=50,
transform='STANDARDIZE',
loss='Quadratic',
regularization_x='Quadratic',
regularization_y='L1',
gamma_x=0.25,
gamma_y=0.5)
# train GLRM
cat_embed_glrm.train(training_frame=frame, x=cats)
# plot iteration history to ensure convergence
cat_embed_glrm.score_history().plot(x='iteration', y='objective', title='GLRM Score History')
# extracted embedded features
cat_embed = h2o.get_frame(cat_embed_glrm._model_json['output']['representation_name'])
return cat_embed
###Output
_____no_output_____
###Markdown
Import data
###Code
train = h2o.import_file('../../03_regression/data/train.csv')
test = h2o.import_file('../../03_regression/data/test.csv')
# bug fix - from Keston
dummy_col = np.random.rand(test.shape[0])
test = test.cbind(h2o.H2OFrame(dummy_col))
cols = test.columns
cols[-1] = 'SalePrice'
test.columns = cols
print(train.shape)
print(test.shape)
original_nums, cats = get_type_lists(train)
###Output
Numeric = ['Fireplaces', 'MSSubClass', 'BsmtFinSF2', 'MoSold', 'BsmtFullBath', 'BsmtUnfSF', 'YrSold', 'ScreenPorch', 'MasVnrArea', 'GarageCars', 'TotRmsAbvGrd', 'BedroomAbvGr', '3SsnPorch', 'YearRemodAdd', '1stFlrSF', 'BsmtHalfBath', 'WoodDeckSF', 'BsmtFinSF1', 'PoolArea', 'LotFrontage', '2ndFlrSF', 'OpenPorchSF', 'KitchenAbvGr', 'GrLivArea', 'LotArea', 'YearBuilt', 'TotalBsmtSF', 'HalfBath', 'GarageYrBlt', 'FullBath', 'MiscVal', 'OverallCond', 'EnclosedPorch', 'GarageArea', 'LowQualFinSF', 'OverallQual']
Categorical = ['Exterior1st', 'BldgType', 'BsmtExposure', 'MiscFeature', 'HeatingQC', 'GarageFinish', 'LandSlope', 'RoofMatl', 'Alley', 'Fence', 'Electrical', 'Exterior2nd', 'PoolQC', 'CentralAir', 'FireplaceQu', 'SaleType', 'MasVnrType', 'LandContour', 'ExterCond', 'MSZoning', 'LotConfig', 'Condition1', 'Foundation', 'Condition2', 'BsmtFinType1', 'HouseStyle', 'GarageQual', 'Functional', 'BsmtCond', 'GarageType', 'GarageCond', 'LotShape', 'RoofStyle', 'BsmtFinType2', 'SaleCondition', 'Neighborhood', 'Utilities', 'PavedDrive', 'KitchenQual', 'ExterQual', 'BsmtQual', 'Street', 'Heating']
###Markdown
Split into train and validation (before doing data prep!!!)
###Code
train, valid = train.split_frame([0.7], seed=12345)
print(train.shape)
print(valid.shape)
###Output
(1001, 81)
(459, 81)
###Markdown
Impute numeric missing using GLRM matrix completion Training data
###Code
train_num_impute = glrm_num_impute('training', train)
###Output
training missing:
[179.0, 7.0, 48.0]
imputed training missing:
[]
###Markdown
Validation data
###Code
valid_num_impute = glrm_num_impute('validation', valid)
###Output
validation missing:
[80.0, 1.0, 33.0]
imputed validation missing:
[]
###Markdown
Test data
###Code
test_num_impute = glrm_num_impute('test', test)
###Output
test missing:
[227.0, 15.0, 1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 78.0, 1.0, 1.0]
imputed test missing:
[]
###Markdown
Embed categorical vars using GLRM Training data
###Code
train_cat_embed = glrm_cat_embed(train)
###Output
_____no_output_____
###Markdown
Validation data
###Code
valid_cat_embed = glrm_cat_embed(valid)
###Output
_____no_output_____
###Markdown
Test data
###Code
test_cat_embed = glrm_cat_embed(test)
###Output
_____no_output_____
###Markdown
Merge imputed and embedded frames
###Code
imputed_embedded_train = train[['Id', 'SalePrice']].cbind(train_num_impute).cbind(train_cat_embed)
imputed_embedded_valid = valid[['Id', 'SalePrice']].cbind(valid_num_impute).cbind(valid_cat_embed)
imputed_embedded_test = test[['Id', 'SalePrice']].cbind(test_num_impute).cbind(test_cat_embed)
###Output
_____no_output_____
###Markdown
Redefine numerics and explore
###Code
imputed_embedded_nums, cats = get_type_lists(imputed_embedded_train)
print('Imputed and encoded numeric training data:')
imputed_embedded_train.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric validation data:')
imputed_embedded_valid.describe()
print('--------------------------------------------------------------------------------')
print('Imputed and encoded numeric test data:')
imputed_embedded_test.describe()
###Output
Imputed and encoded numeric training data:
Rows:1001
Cols:88
###Markdown
Train model on imputed, embedded features
###Code
h2o.show_progress() # turn on progress bars
# Check log transform - looks good
%matplotlib inline
imputed_embedded_train['SalePrice'].log().as_data_frame().hist()
# Execute log transform
imputed_embedded_train['SalePrice'] = imputed_embedded_train['SalePrice'].log()
imputed_embedded_valid['SalePrice'] = imputed_embedded_valid['SalePrice'].log()
print(imputed_embedded_train[0:3, 'SalePrice'])
###Output
_____no_output_____
###Markdown
Train GLM on imputed, embedded inputs
###Code
alpha_opts = [0.01, 0.25, 0.5, 0.99] # always keep some L2
hyper_parameters = {"alpha":alpha_opts}
# initialize grid search
grid = H2OGridSearch(
H2OGeneralizedLinearEstimator(
family="gaussian",
lambda_search=True,
seed=12345),
hyper_params=hyper_parameters)
# train grid
grid.train(y='SalePrice',
x=imputed_embedded_nums,
training_frame=imputed_embedded_train,
validation_frame=imputed_embedded_valid)
# show grid search results
print(grid.show())
best = grid.get_grid()[0]
print(best)
# plot top frame values
yhat_frame = imputed_embedded_valid.cbind(best.predict(imputed_embedded_valid))
print(yhat_frame[0:10, ['SalePrice', 'predict']])
# plot sorted predictions
yhat_frame_df = yhat_frame[['SalePrice', 'predict']].as_data_frame()
yhat_frame_df.sort_values(by='predict', inplace=True)
yhat_frame_df.reset_index(inplace=True, drop=True)
_ = yhat_frame_df.plot(title='Ranked Predictions Plot')
# Shutdown H2O - this will erase all your unsaved frames and models in H2O
h2o.cluster().shutdown(prompt=True)
###Output
Are you sure you want to shutdown the H2O instance running at http://127.0.0.1:54321 (Y/N)? y
H2O session _sid_b0c9 closed.
|
The Android App Market on Google Play/Guided/notebook.ipynb | ###Markdown
1. Google Play Store apps and reviewsMobile apps are everywhere. They are easy to create and can be lucrative. Because of these two factors, more and more apps are being developed. In this notebook, we will do a comprehensive analysis of the Android app market by comparing over ten thousand apps in Google Play across different categories. We'll look for insights in the data to devise strategies to drive growth and retention. Let's take a look at the data, which consists of two files: apps.csv contains all the details of the applications on Google Play, with 13 features that describe a given app; user_reviews.csv contains 100 reviews for each app, most helpful first. The text in each review has been pre-processed and attributed with three new features: Sentiment (Positive, Negative or Neutral), Sentiment Polarity and Sentiment Subjectivity.
###Code
# Read in dataset
import pandas as pd
apps_with_duplicates = pd.read_csv('datasets/apps.csv')
# Drop duplicates from apps_with_duplicates
apps = apps_with_duplicates.drop_duplicates()
# Print the total number of apps
print('Total number of apps in the dataset = ', apps.shape[0])
# Print a summary of apps dataframe
print(apps.info())
# Have a look at a random sample of 5 rows
print(apps.head())
###Output
Total number of apps in the dataset = 9659
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9659 entries, 0 to 9658
Data columns (total 14 columns):
Unnamed: 0 9659 non-null int64
App 9659 non-null object
Category 9659 non-null object
Rating 8196 non-null float64
Reviews 9659 non-null int64
Size 8432 non-null float64
Installs 9659 non-null object
Type 9659 non-null object
Price 9659 non-null object
Content Rating 9659 non-null object
Genres 9659 non-null object
Last Updated 9659 non-null object
Current Ver 9651 non-null object
Android Ver 9657 non-null object
dtypes: float64(2), int64(2), object(10)
memory usage: 1.1+ MB
None
Unnamed: 0 App \
0 0 Photo Editor & Candy Camera & Grid & ScrapBook
1 1 Coloring book moana
2 2 U Launcher Lite – FREE Live Cool Themes, Hide ...
3 3 Sketch - Draw & Paint
4 4 Pixel Draw - Number Art Coloring Book
Category Rating Reviews Size Installs Type Price \
0 ART_AND_DESIGN 4.1 159 19.0 10,000+ Free 0
1 ART_AND_DESIGN 3.9 967 14.0 500,000+ Free 0
2 ART_AND_DESIGN 4.7 87510 8.7 5,000,000+ Free 0
3 ART_AND_DESIGN 4.5 215644 25.0 50,000,000+ Free 0
4 ART_AND_DESIGN 4.3 967 2.8 100,000+ Free 0
Content Rating Genres Last Updated \
0 Everyone Art & Design January 7, 2018
1 Everyone Art & Design;Pretend Play January 15, 2018
2 Everyone Art & Design August 1, 2018
3 Teen Art & Design June 8, 2018
4 Everyone Art & Design;Creativity June 20, 2018
Current Ver Android Ver
0 1.0.0 4.0.3 and up
1 2.0.0 4.0.3 and up
2 1.2.4 4.0.3 and up
3 Varies with device 4.2 and up
4 1.1 4.4 and up
###Markdown
2. Data cleaningThe four features that we will be working with most frequently henceforth are Installs, Size, Rating and Price. The info() function (from the previous task) told us that Installs and Price columns are of type object and not int64 or float64 as we would expect. This is because these columns contain characters other than the digits [0-9]. Ideally, we would want these columns to be numeric as their name suggests. Hence, we now proceed to data cleaning and prepare our data to be consumed in our analysis later. Specifically, the presence of special characters (, $ +) in the Installs and Price columns makes their conversion to a numerical data type difficult.
###Code
# List of characters to remove
chars_to_remove = ['+', ',', '$']
# List of column names to clean
cols_to_clean = ['Installs', 'Price']
# Loop for each column in cols_to_clean
for col in cols_to_clean:
# Loop for each char in chars_to_remove
for char in chars_to_remove:
# Replace the character with an empty string
apps[col] = apps[col].apply(lambda x: x.replace(char, ''))
# Convert col to float data type
apps[col] = apps[col].astype(float)
###Output
_____no_output_____
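###Markdown
A hedged aside: the nested loops can also be written with pandas' vectorized string methods. The sketch below runs on a fresh copy, since the original columns have already been converted to float:
###Code
# Vectorized alternative: '[+,$]' is a regex character class matching any of + , $
apps_alt = apps_with_duplicates.drop_duplicates().copy()
for col in cols_to_clean:
    apps_alt[col] = apps_alt[col].astype(str).str.replace('[+,$]', '', regex=True).astype(float)
###Output
_____no_output_____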
###Markdown
3. Exploring app categoriesWith more than 1 billion active users in 190 countries around the world, Google Play continues to be an important distribution platform to build a global audience. For businesses to get their apps in front of users, it's important to make them more quickly and easily discoverable on Google Play. To improve the overall search experience, Google has introduced the concept of grouping apps into categories. This brings us to the following questions: Which category has the highest share of (active) apps in the market? Is any specific category dominating the market? Which categories have the fewest number of apps? We will see that there are 33 unique app categories present in our dataset. Family and Game apps have the highest market prevalence. Interestingly, Tools, Business and Medical apps are also at the top.
###Code
import plotly
plotly.offline.init_notebook_mode(connected=True)
import plotly.graph_objs as go
# Print the total number of unique categories
num_categories = len(apps['Category'].unique())
print('Number of categories = ', num_categories)
# Count the number of apps in each 'Category'. Sort in descending order depending on number of apps in each category
num_apps_in_category = apps['Category'].value_counts().sort_values(ascending=False)
data = [go.Bar(
x = num_apps_in_category.index, # index = category name
y = num_apps_in_category.values, # value = count
)]
plotly.offline.iplot(data)
###Output
_____no_output_____
###Markdown
4. Distribution of app ratingsAfter having witnessed the market share for each category of apps, let's see how all these apps perform on average. App ratings (on a scale of 1 to 5) impact the discoverability and conversion of apps as well as the company's overall brand image. Ratings are a key performance indicator of an app. From our research, we found that the average rating across all app categories is 4.17. The histogram plot is skewed to the left, indicating that the majority of the apps are highly rated, with only a few exceptions among the low-rated apps.
###Code
# Average rating of apps
avg_app_rating = apps['Rating'].mean()
print('Average app rating = ', avg_app_rating)
# Distribution of apps according to their ratings
data = [go.Histogram(
x = apps['Rating']
)]
# Vertical dashed line to indicate the average app rating
layout = {'shapes': [{
'type' :'line',
'x0': avg_app_rating,
'y0': 0,
'x1': avg_app_rating,
'y1': 1000,
'line': { 'dash': 'dashdot'}
}]
}
plotly.offline.iplot({'data': data, 'layout': layout})
###Output
Average app rating = 4.173243045387994
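###Markdown
A hypothetical follow-up: the left skew described above can be checked numerically with pandas.
###Code
# Negative skewness = long left tail, with mass concentrated at high ratings
print(apps['Rating'].skew())
###Output
_____no_output_____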
###Markdown
5. Size and price of an appLet's now examine app size and app price. For size, if the mobile app is too large, it may be difficult and/or expensive for users to download. Lengthy download times could turn users off before they even experience your mobile app. Plus, each user's device has a finite amount of disk space. For price, some users expect their apps to be free or inexpensive. These problems compound if the developing world is part of your target market; especially due to internet speeds, earning power and exchange rates. How can we effectively come up with strategies to size and price our app? Does the size of an app affect its rating? Do users really care about system-heavy apps or do they prefer light-weighted apps? Does the price of an app affect its rating? Do users always prefer free apps over paid apps? We find that the majority of top rated apps (rating over 4) range from 2 MB to 20 MB. We also find that the vast majority of apps price themselves under \$10.
###Code
%matplotlib inline
import seaborn as sns
sns.set_style("darkgrid")
import warnings
warnings.filterwarnings("ignore")
# Select rows where both 'Rating' and 'Size' values are present (ie. the two values are not null)
apps_with_size_and_rating_present = apps[apps['Rating'].notnull() & apps['Size'].notnull()]
# Subset for categories with at least 250 apps
large_categories = apps_with_size_and_rating_present.groupby('Category').filter(lambda x: len(x) >= 250)
# Plot size vs. rating
plt1 = sns.jointplot(x = large_categories['Size'], y = large_categories['Rating'], kind = 'hex')
# Select apps whose 'Type' is 'Paid'
paid_apps = apps_with_size_and_rating_present[apps_with_size_and_rating_present['Type'] == 'Paid']
# Plot price vs. rating
plt2 = sns.jointplot(x = paid_apps['Price'], y = paid_apps['Rating'])
###Output
_____no_output_____
###Markdown
6. Relation between app category and app priceSo now comes the hard part. How are companies and developers supposed to make ends meet? What monetization strategies can companies use to maximize profit? The costs of apps are largely based on features, complexity, and platform.There are many factors to consider when selecting the right pricing strategy for your mobile app. It is important to consider the willingness of your customer to pay for your app. A wrong price could break the deal before the download even happens. Potential customers could be turned off by what they perceive to be a shocking cost, or they might delete an app they’ve downloaded after receiving too many ads or simply not getting their money's worth.Different categories demand different price ranges. Some apps that are simple and used daily, like the calculator app, should probably be kept free. However, it would make sense to charge for a highly-specialized medical app that diagnoses diabetic patients. Below, we see that Medical and Family apps are the most expensive. Some medical apps extend even up to \$80! All game apps are reasonably priced below \$20.
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Select a few popular app categories
popular_app_cats = apps[apps.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY',
'MEDICAL', 'TOOLS', 'FINANCE',
'LIFESTYLE','BUSINESS'])]
# Examine the price trend by plotting Price vs Category
ax = sns.stripplot(x = popular_app_cats['Price'], y = popular_app_cats['Category'], jitter=True, linewidth=1)
ax.set_title('App pricing trend across categories')
# Apps whose Price is greater than 200
apps_above_200 = popular_app_cats[popular_app_cats['Price'] > 200]
apps_above_200[['Category', 'App', 'Price']]
###Output
_____no_output_____
###Markdown
7. Filter out "junk" appsIt looks like a bunch of the really expensive apps are "junk" apps, that is, apps that don't really have a purpose. Some app developers may create an app called I Am Rich Premium or most expensive app (H) just as a joke or to test their app development skills. Some developers even do this with malicious intent and try to make money by hoping people accidentally click purchase on their app in the store.Let's filter out these junk apps and re-do our visualization.
###Code
# Select apps priced below $100
apps_under_100 = popular_app_cats[popular_app_cats['Price'] < 100]
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Examine price vs category with the authentic apps (apps_under_100)
ax = sns.stripplot(x = 'Price', y = 'Category', data = apps_under_100, jitter = True, linewidth = 1)
ax.set_title('App pricing trend across categories after filtering for junk apps')
###Output
_____no_output_____
###Markdown
8. Popularity of paid apps vs free appsFor apps in the Play Store today, there are five types of pricing strategies: free, freemium, paid, paymium, and subscription. Let's focus on free and paid apps only. Some characteristics of free apps are:Free to download.Main source of income often comes from advertisements.Often created by companies that have other products and the app serves as an extension of those products.Can serve as a tool for customer retention, communication, and customer service.Some characteristics of paid apps are:Users are asked to pay once for the app to download and use it.The user can't really get a feel for the app before buying it.Are paid apps installed as much as free apps? It turns out that paid apps have a relatively lower number of installs than free apps, though the difference is not as stark as I would have expected!
###Code
trace0 = go.Box(
# Data for paid apps
y = apps[apps['Type'] == 'Paid']['Installs'],
name = 'Paid'
)
trace1 = go.Box(
# Data for free apps
y = apps[apps['Type'] == 'Free']['Installs'],
name = 'Free'
)
layout = go.Layout(
title = "Number of downloads of paid apps vs. free apps",
yaxis = dict(title = "Log number of downloads",
type = 'log',
autorange = True)
)
# Add trace0 and trace1 to a list for plotting
data = [trace0, trace1]
plotly.offline.iplot({'data': data, 'layout': layout})
###Output
_____no_output_____
###Markdown
9. Sentiment analysis of user reviewsMining user review data to determine how people feel about your product, brand, or service can be done using a technique called sentiment analysis. User reviews for apps can be analyzed to identify if the mood is positive, negative or neutral about that app. For example, positive words in an app review might include words such as 'amazing', 'friendly', 'good', 'great', and 'love'. Negative words might be words like 'malware', 'hate', 'problem', 'refund', and 'incompetent'.By plotting sentiment polarity scores of user reviews for paid and free apps, we observe that free apps receive a lot of harsh comments, as indicated by the outliers on the negative y-axis. Reviews for paid apps never appear to be extremely negative. This may indicate something about app quality, i.e., paid apps being of higher quality than free apps on average. The median polarity score for paid apps is a little higher than for free apps, which is consistent with our previous observation.In this notebook, we analyzed over ten thousand apps from the Google Play Store. We can use our findings to inform our decisions should we ever wish to create an app ourselves.
###Code
# Load user_reviews.csv
reviews_df = pd.read_csv('datasets/user_reviews.csv')
# Join the two dataframes
merged_df = pd.merge(left=apps, right=reviews_df, on='App', how='inner')
# Drop NA values from Sentiment and Review columns
merged_df = merged_df.dropna(subset = ['Sentiment', 'Review'])
sns.set_style('ticks')
fig, ax = plt.subplots()
fig.set_size_inches(11, 8)
# User review sentiment polarity for paid vs. free apps
ax = sns.boxplot(x = 'Type', y = 'Sentiment_Polarity', data = merged_df)
ax.set_title('Sentiment Polarity Distribution')
###Output
_____no_output_____ |
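###Markdown
The `Sentiment_Polarity` column used above ships precomputed with the dataset. As a rough illustration of how such polarity scores can be derived from raw review text, here is a minimal sketch using the TextBlob library; this is an assumption about tooling, not necessarily how the dataset's scores were produced.
###Code
# Minimal sketch: compute sentiment polarity from raw review text with TextBlob.
# Assumes TextBlob is installed (pip install textblob); polarity ranges
# from -1 (most negative) to +1 (most positive).
from textblob import TextBlob

sample_reviews = ["This app is amazing, friendly and great!",
                  "Hate it. Full of malware, I want a refund."]
for review in sample_reviews:
    polarity = TextBlob(review).sentiment.polarity
    print("{:+.2f}  {}".format(polarity, review))
###Output
_____no_output_____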
04_Save_Restore.ipynb | ###Markdown
TensorFlow Tutorial 04 Save & Restoreby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) IntroductionThis tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial 02 for a more detailed description of this network and convolution in general.
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
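###Markdown
Before walking through the TensorFlow implementation, here is a minimal, framework-agnostic sketch of the Early Stopping logic described above. All names are illustrative; the actual implementation used in this tutorial appears in the `optimize()` function further below.
###Code
# Sketch of Early Stopping: keep training until the validation accuracy
# has not improved for `patience` consecutive checks, saving the best state.
def early_stopping_sketch(train_step, validate, save_state, patience=10):
    best_accuracy = 0.0
    checks_since_improvement = 0
    while checks_since_improvement < patience:
        train_step()              # one (or a few) optimization iterations
        accuracy = validate()     # classification accuracy on the validation-set
        if accuracy > best_accuracy:
            best_accuracy = accuracy
            checks_since_improvement = 0
            save_state()          # checkpoint the best-known variables
        else:
            checks_since_improvement += 1
    return best_accuracy
###Output
_____no_output_____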
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional network.* A loss measure that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
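###Markdown
As a brief aside on the graph model described above, the toy sketch below (not part of the tutorial's network) shows TensorFlow deriving a gradient automatically through a composed expression using the chain-rule:
###Code
# Toy graph: z = (3*a)^2, so dz/da = 18*a by the chain-rule.
a = tf.placeholder(tf.float32, name='toy_a')
z = tf.square(3.0 * a)
# tf.gradients walks the graph and applies the chain-rule automatically.
grad_z_wrt_a = tf.gradients(z, a)
with tf.Session() as toy_session:
    # For a = 2.0 we expect the gradient 18 * 2 = 36.
    print(toy_session.run(grad_z_wrt_a, feed_dict={a: 2.0}))
###Output
_____no_output_____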
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial 03.The basic idea is to wrap the input tensor `x_image` in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
###Code
x_pretty = pt.wrap(x_image)
###Output
_____no_output_____
###Markdown
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
###Code
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with `defaults_scope` as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()` which was designed for another purpose: either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
Optimization Method Pretty Tensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Performance MeasuresWe need a few more performance measures to display the progress to the user.First we calculate the predicted class number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
SaverIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the `optimize()`-function.
###Code
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
The saved files are often called checkpoints because they may be written at regular intervals during optimization.This is the directory used for saving and retrieving the data.
###Code
save_dir = 'checkpoints/'
###Output
_____no_output_____
###Markdown
Create the directory if it does not exist.
###Code
if not os.path.exists(save_dir):
os.makedirs(save_dir)
###Output
_____no_output_____
###Markdown
This is the path for the checkpoint-file.
###Code
save_path = os.path.join(save_dir, 'best_validation')
###Output
_____no_output_____
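###Markdown
Note that this tutorial only writes a checkpoint when the validation accuracy improves. For the interval-based checkpointing mentioned above, the same Saver can be called with a step number, which is appended to the filename. A minimal sketch, not used further in this tutorial (it assumes a running session and a step counter):
###Code
# Sketch of interval-based checkpointing with tf.train.Saver.
# Saving e.g. at step 1000 would write 'checkpoints/best_validation-1000'.
# The call is commented out because the session is only created further below.
# saver.save(sess=session, save_path=save_path, global_step=1000)
###Output
_____no_output_____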
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for `weights` and `biases` must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
###Code
def init_variables():
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Execute the function now to initialize the variables.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try lowering this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
The classification accuracy on the validation-set will be calculated every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not improved in 1000 iterations. We need a few variables to keep track of this.
###Code
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
###Output
_____no_output_____
###Markdown
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. Progress is printed every 100 iterations; at those points the validation accuracy is also calculated, and the variables are saved to file if it is an improvement.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-functions for calculating classificationsThis function calculates the predicted classes of images and also returns a boolean array indicating whether the classification of each image is correct.The calculation is done in batches because it might otherwise use too much RAM. If your computer crashes, you can try lowering the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the test-set.
###Code
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the validation-set.
###Code
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
###Output
_____no_output_____
###Markdown
Helper-functions for the classification accuracyThis function calculates the classification accuracy given a boolean array indicating whether each image was correctly classified. E.g. `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`
###Code
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
###Output
_____no_output_____
###Markdown
Calculate the classification accuracy on the validation-set.
###Code
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set; that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 8.5% (849 / 10000)
###Markdown
The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation are shown so we can see whether there is a difference.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.00880, Stdev: 0.28635
###Markdown
Perform 10,000 optimization iterationsWe now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.4% (9842 / 10000)
Example errors:
###Markdown
The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.But try saving the images and comparing them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.The mean and standard deviation have also changed slightly, so the optimized weights must be different.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02895, Stdev: 0.29949
###Markdown
Initialize Variables AgainRe-initialize all the variables of the neural network with random values.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 13.4% (1341 / 10000)
###Markdown
The convolutional weights should now be different from the weights shown above.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.01086, Stdev: 0.28023
###Markdown
Restore Best VariablesRe-load all the variables that were saved to file during optimization.
###Code
saver.restore(sess=session, save_path=save_path)
###Output
_____no_output_____
###Markdown
The classification accuracy is high again when using the variables that were previously saved.Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
###Code
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.3% (9826 / 10000)
Example errors:
###Markdown
The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 more optimization iterations.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02792, Stdev: 0.29822
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 04 Save & Restore (Korean translation)by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) / Translated by 곽병권 OverviewThis tutorial shows how to save and restore the variables of a neural network. During optimization, we save the variables of the neural network whenever its classification accuracy improves on the validation-set. If there is no improvement after 1000 further iterations, optimization is stopped. We then reload the variables that performed best on the validation-set.This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. Overfitting occurs when the neural network is trained for so long that it starts to learn the noise of the training-set, which causes it to mis-classify new images.For the neural network used in this tutorial on the MNIST data-set of hand-written digits, overfitting is not really a problem, but this tutorial demonstrates the idea of Early Stopping.This builds on the previous tutorials, so a basic understanding of TensorFlow and the add-on package Pretty Tensor is required. The source code and text of this tutorial are similar to the previous tutorials and can be read quickly if you have read them recently. Flowchart The following chart roughly shows the data flow in the Convolutional Neural Network implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer used for the final classification of the input images. See Tutorial 02 for a more detailed description of this network and of convolution in general.
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6.2 (Anaconda) and the following TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("크기:")
print("- 훈련 세트:\t\t{}".format(len(data.train.labels)))
print("- 테스트 세트:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means each label is a vector of 10 elements that are all zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once, so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of the 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images A function used to plot 9 images in a 3x3 grid, writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if the data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe purpose of TensorFlow is to construct a so-called computational graph that can be executed much more efficiently than if the same calculations were performed directly in Python. TensorFlow is more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only computes a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions, so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs. Google has built special chips just for TensorFlow called TPUs (Tensor Processing Units), which are even faster than GPUs.A TensorFlow graph consists of the following parts, which are detailed below:* Placeholder variables: allow changing values to be used as input.* Model variables: can be optimized so the model performs better.* The model, which is essentially a mathematical function that computes an output given the placeholder variables and model variables.* A cost measure: used to guide the optimization of the variables.* An optimization method: updates the variables of the model.In addition, the TensorFlow graph may contain various debugging statements, e.g. logging data to be displayed using TensorBoard, which is not covered in this notebook. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which means it is a multi-dimensional vector or matrix. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means the tensor may hold an arbitrary number of images, each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers require `x` to be encoded as a 4-dim tensor, so we have to reshape it. The required shape is `[num_images, img_height, img_width, num_channels]`. The first dimension can be inferred automatically by using -1. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]`, which means it may hold an arbitrary number of labels, and each label is a vector of length `num_classes`, which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also use a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator, so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
WARNING:tensorflow:From <ipython-input-23-4674210f2acc>:1: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
###Markdown
Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow (see Tutorial 03).The basic idea is to wrap the input tensor `x_image` in a Pretty Tensor object which has helper-functions for adding new computational layers, so as to create an entire neural network.Pretty Tensor takes care of the variable allocation, etc.
###Code
x_pretty = pt.wrap(x_image)
###Output
_____no_output_____
###Markdown
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.`pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
###Code
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with the `defaults_scope` described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()`, which was designed for another purpose: either creating a new variable or re-using an existing variable. The easiest way is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are all TensorFlow objects. In order to get the contents of a variable, you must run something like `contents = session.run(weights_conv1)`, as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
Optimization Method Pretty Tensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.Optimization is not performed at this point. In fact, nothing is calculated at all; we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Performance MeasuresWe need a few more performance measures to display the progress to the user.First we calculate the predicted class number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
SaverIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point; that will be done further below in the `optimize()` function.
###Code
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
The saved files are often called checkpoints because they may be written at regular intervals during optimization.The directory below is used for saving and retrieving the data.
###Code
save_dir = 'checkpoints/'
###Output
_____no_output_____
###Markdown
Create the directory if it does not exist.
###Code
if not os.path.exists(save_dir):
os.makedirs(save_dir)
###Output
_____no_output_____
###Markdown
This is the path for the checkpoint-file.
###Code
save_path = os.path.join(save_dir, 'best_validation')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesVariables such as `weights` and `biases` must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it several times.
###Code
def init_variables():
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Execute the function now to initialize the variables.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes quite a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of memory, you may try lowering (or raising) this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
The classification accuracy on the validation-set will be calculated every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not improved in 1000 iterations. We need a few variables to keep track of this.
###Code
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
###Output
_____no_output_____
###Markdown
A function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. Progress is printed every 100 iterations; at those points the validation accuracy is also calculated, and the variables are saved to file if it is an improvement.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors A function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot the confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for calculating classificationsThis function calculates the predicted classes of images and also returns a boolean array indicating whether the classification of each image is correct.The calculation is done in batches because it might otherwise use too much RAM. If the program crashes, try lowering the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
###Markdown
Calculate the predicted classes for the test-set.
###Code
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
###Output
_____no_output_____
###Markdown
Calculate the predicted classes for the validation-set.
###Code
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
###Output
_____no_output_____
###Markdown
Helper-functions for the classification accuracyThis function calculates the classification accuracy given a boolean array indicating whether each image was correctly classified. E.g. `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`
###Code
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
###Output
_____no_output_____
###Markdown
Calculate the classification accuracy on the validation-set.
###Code
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so the model just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 7.8% (780 / 10000)
###Markdown
The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation are shown so we can see whether there is a difference.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.01354, Stdev: 0.29200
###Markdown
Perform 10,000 optimization iterationsWe now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1,000 iterations.An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 99.0% (9898 / 10000)
Example errors:
###Markdown
The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights looked identical before and after optimization.But try saving the images and comparing them side-by-side (you can right-click an image to save it). You will notice very small differences before and after optimization (a numeric comparison is also sketched below).The mean and standard deviation have also changed slightly, so the optimized weights must be different.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.01641, Stdev: 0.30954
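###Markdown
Instead of eyeballing saved images, the change in the weights can also be checked numerically. The following is a minimal sketch (not part of the original tutorial): it assumes the pre-optimization weights were captured earlier, e.g. `w_before = session.run(weights_conv1)` taken just before calling `optimize()`.
###Code
# Minimal sketch: compare weight snapshots numerically.
# Assumes w_before = session.run(weights_conv1) was stored before optimize().
w_after = session.run(weights_conv1)
print("Max abs change: {0:.5f}".format(np.abs(w_after - w_before).max()))
print("Mean abs change: {0:.5f}".format(np.abs(w_after - w_before).mean()))
###Output
_____no_output_____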
###Markdown
Initialize Variables AgainRe-initialize all the variables of the neural network with random values.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is no better than random guessing.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 9.3% (935 / 10000)
###Markdown
The convolutional weights should now be different from the weights shown above.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.01086, Stdev: 0.28023
###Markdown
Restore Best VariablesRe-load all the variables that were saved to file during optimization.
###Code
saver.restore(sess=session, save_path=save_path)
###Output
_____no_output_____
###Markdown
The classification accuracy is high again when using the previously saved variables.Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, while the optimization actually continued for another 1,000 iterations after those variables were saved, so we are seeing results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
###Code
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.3% (9826 / 10000)
Example errors:
###Markdown
The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1,000 more optimization iterations.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02792, Stdev: 0.29822
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional network.* A loss measure that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
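###Markdown
As a minimal illustration of feeding (a sketch that is separate from the tutorial's own graph): any placeholder gets its value supplied through a feed-dict when the graph is executed.
###Code
# Illustrative only: a tiny placeholder fed at run-time.
demo = tf.placeholder(tf.float32, shape=[None], name='demo')
demo_sum = tf.reduce_sum(demo)
with tf.Session() as demo_session:
    # The value for the placeholder is supplied in the feed-dict.
    print(demo_session.run(demo_sum, feed_dict={demo: [1.0, 2.0, 3.0]}))
###Output
_____no_output_____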
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial 03.The basic idea is to wrap the input tensor `x_image` in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
###Code
x_pretty = pt.wrap(x_image)
###Output
_____no_output_____
###Markdown
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
###Code
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with `defaults_scope` as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
Optimization Method Pretty Tensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Performance MeasuresWe need a few more performance measures to display the progress to the user.First we calculate the predicted class number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
SaverIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the `optimize()`-function.
###Code
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
The saved files are often called checkpoints because they may be written at regular intervals during optimization.This is the directory used for saving and retrieving the data.
###Code
save_dir = 'checkpoints/'
###Output
_____no_output_____
###Markdown
Create the directory if it does not exist.
###Code
if not os.path.exists(save_dir):
os.makedirs(save_dir)
###Output
_____no_output_____
###Markdown
This is the path for the checkpoint-file.
###Code
save_path = os.path.join(save_dir, 'best_validation')
###Output
_____no_output_____
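###Markdown
Note that a single checkpoint is stored as several files sharing this path as a prefix (typically `.index` and `.data-*` files, usually a `.meta` file, plus a `checkpoint` state file in the directory). A small sketch for inspecting them, which only shows something after `saver.save()` has been called at least once:
###Code
import glob

# List the files written for this checkpoint prefix (empty before any save).
for filename in sorted(glob.glob(save_path + '*')):
    print(filename)
###Output
_____no_output_____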
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for `weights` and `biases` must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
###Code
def init_variables():
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Execute the function now to initialize the variables.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
###Code
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
###Output
_____no_output_____
###Markdown
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = np.logical_not(correct)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-functions for calculating classificationsThis function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the test-set.
###Code
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the validation-set.
###Code
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
###Output
_____no_output_____
###Markdown
Helper-functions for the classification accuracyThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`
###Code
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
###Output
_____no_output_____
###Markdown
Calculate the classification accuracy on the validation-set.
###Code
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
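###Markdown
The `input_channel` argument selects which of the layer's input channels to visualize. An illustrative call (not in the original tutorial) for the second convolutional layer, whose filters have 16 input channels because the first layer outputs 16 channels:
###Code
# Illustrative usage: filters of the second conv-layer,
# seen from input channel 0 (valid channels are 0-15).
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____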
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 8.5% (849 / 10000)
###Markdown
The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation are shown so we can see whether there is a difference.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.00880, Stdev: 0.28635
###Markdown
Perform 10,000 optimization iterationsWe now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.4% (9842 / 10000)
Example errors:
###Markdown
The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights looked identical before and after optimization.But try saving the images and comparing them side-by-side (you can right-click an image to save it). You will notice very small differences before and after optimization.The mean and standard deviation have also changed slightly, so the optimized weights must be different.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02895, Stdev: 0.29949
###Markdown
Initialize Variables AgainRe-initialize all the variables of the neural network with random values.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 13.4% (1341 / 10000)
###Markdown
The convolutional weights should now be different from the weights shown above.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.01086, Stdev: 0.28023
###Markdown
Restore Best VariablesRe-load all the variables that were saved to file during optimization.
###Code
saver.restore(sess=session, save_path=save_path)
###Output
_____no_output_____
###Markdown
The classification accuracy is high again when using the variables that were previously saved.Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
###Code
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.3% (9826 / 10000)
Example errors:
###Markdown
The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1,000 more optimization iterations.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02792, Stdev: 0.29822
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 04 Save & Restoreby [Abbas Malekpour](https://github.com/abbasmalekpour)/ [GitHub](https://github.com/abbasmalekpour/TensorFlow-Deeplearning) / [ ](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING!**This tutorial does not work with TensorFlow v. 1.9 due to the PrettyTensor builder API apparently no longer being updated and supported by the Google Developers. It is recommended that you use the _Keras API_ instead, which also makes it much easier to save and load a model, see Tutorial 03-C.** IntroductionThis tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial 02 for a more detailed description of this network and convolution in general.
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 04 Save & Restoreby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING!**This tutorial does not work with TensorFlow v. 1.9 due to the PrettyTensor builder API apparently no longer being updated and supported by the Google Developers. It is recommended that you use the _Keras API_ instead, which also makes it much easier to save and load a model, see Tutorial 03-C.** IntroductionThis tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial 02 for a more detailed description of this network and convolution in general.
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
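###Markdown
Before walking through the implementation, here is a minimal, self-contained sketch of the Early Stopping idea described above (plain Python, independent of the tutorial code; the accuracy values are made up for illustration):
###Code
# Minimal sketch of Early Stopping: track the best validation accuracy
# and stop once it has not improved for `patience` steps.
def early_stopping_demo(validation_accuracies, patience=3):
    best_acc = 0.0
    last_improvement = 0
    for i, acc in enumerate(validation_accuracies):
        if acc > best_acc:
            best_acc = acc          # a checkpoint would also be saved here
            last_improvement = i
        elif i - last_improvement > patience:
            print("Stopping early at step", i)
            break
    return best_acc

early_stopping_demo([0.60, 0.72, 0.80, 0.81, 0.81, 0.80, 0.79, 0.78, 0.77])
###Output
_____no_output_____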
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
WARNING:tensorflow:From <ipython-input-3-37adf088ce13>:2: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /home/matthew/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /home/matthew/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting data/MNIST/train-images-idx3-ubyte.gz
WARNING:tensorflow:From /home/matthew/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting data/MNIST/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From /home/matthew/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /home/matthew/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional network.* A loss measure that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
WARNING:tensorflow:From <ipython-input-12-71ccadb4572d>:1: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
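###Markdown
As the warning states, the `dimension` argument is deprecated. In newer TensorFlow 1.x releases the equivalent, warning-free call uses `axis` instead:
###Code
# Equivalent modern form of the call above (commented out so the
# graph keeps the node created by the original call).
# y_true_cls = tf.argmax(y_true, axis=1)
###Output
_____no_output_____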
###Markdown
Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial 03.The basic idea is to wrap the input tensor `x_image` in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
###Code
x_pretty = pt.wrap(x_image)
###Output
_____no_output_____
###Markdown
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
###Code
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with `defaults_scope` as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
Optimization Method Pretty Tensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Performance MeasuresWe need a few more performance measures to display the progress to the user.First we calculate the predicted class number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
SaverIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the `optimize()`-function.
###Code
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
The saved files are often called checkpoints because they may be written at regular intervals during optimization.This is the directory used for saving and retrieving the data.
###Code
save_dir = 'checkpoints/'
###Output
_____no_output_____
###Markdown
Create the directory if it does not exist.
###Code
if not os.path.exists(save_dir):
os.makedirs(save_dir)
###Output
_____no_output_____
###Markdown
This is the path for the checkpoint-file.
###Code
save_path = os.path.join(save_dir, 'best_validation')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for `weights` and `biases` must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
###Code
def init_variables():
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Execute the function now to initialize the variables.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
###Code
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
###Output
_____no_output_____
###Markdown
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors: Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
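###Markdown
As a tiny illustration of what sklearn's `confusion_matrix` returns (rows are true classes, columns are predicted classes), using made-up labels:
###Code
from sklearn.metrics import confusion_matrix
# One true class 1 is predicted as class 2, everything else is correct.
print(confusion_matrix(y_true=[0, 1, 2, 2], y_pred=[0, 2, 2, 2]))
###Output
_____no_output_____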
###Markdown
Helper-functions for calculating classifications: This function calculates the predicted classes of images and also returns a boolean array indicating whether the classification of each image is correct. The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try lowering the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_images, dtype=int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
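###Markdown
The `i`/`j` batching pattern used above can be seen in isolation with made-up sizes; this sketch only prints which index ranges would be fed to the network:
###Code
# Illustrative sizes, not the real data-set.
demo_num_images = 10
demo_batch_size = 4
i = 0
while i < demo_num_images:
    j = min(i + demo_batch_size, demo_num_images)
    print("batch covers indices {} to {}".format(i, j - 1))
    i = j
###Output
_____no_output_____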
###Markdown
Calculate the predicted class for the test-set.
###Code
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the validation-set.
###Code
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
###Output
_____no_output_____
###Markdown
Helper-functions for the classification accuracy: This function calculates the classification accuracy given a boolean array indicating whether each image was correctly classified. E.g. `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`
###Code
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
###Output
_____no_output_____
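###Markdown
As a quick check of the example above, we can call `cls_accuracy()` on a small NumPy array of booleans:
###Code
# Expect (0.4, 2): two correct classifications out of five.
print(cls_accuracy(np.array([True, True, False, False, False])))
###Output
_____no_output_____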
###Markdown
Calculate the classification accuracy on the validation-set.
###Code
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance: Function for printing the classification accuracy on the test-set. It takes a while to compute the classifications for all the images in the test-set, which is why the results are re-used: the functions above are called directly from this function, so the classifications don't have to be recalculated by each of them.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Performance before any optimization: The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so the model just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 8.5% (849 / 10000)
###Markdown
The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation are shown so we can see whether there is a difference.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.00880, Stdev: 0.28635
###Markdown
Perform 10,000 optimization iterations: We now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations. An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.4% (9842 / 10000)
Example errors:
###Markdown
The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights looked identical before and after optimization. But try saving the images and comparing them side-by-side (you can just right-click an image to save it). You will notice very small differences before and after optimization. The mean and standard deviation have also changed slightly, so the optimized weights must be different.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02895, Stdev: 0.29949
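###Markdown
Instead of eyeballing the images, the weight arrays could also be compared numerically. This is a sketch only: it assumes a copy such as `w_random = session.run(weights_conv1)` had been taken before running the optimization, which was not actually done above.
###Code
# Hypothetical: w_random = session.run(weights_conv1)  # taken before optimize()
w_optimized = session.run(weights_conv1)
# With such a copy, the largest elementwise change could then be printed:
# print(np.abs(w_optimized - w_random).max())
print(w_optimized.shape)  # should be (5, 5, 1, 16) for the first conv-layer
###Output
_____no_output_____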
###Markdown
Initialize Variables Again: Re-initialize all the variables of the neural network with random values.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
This means the neural network again classifies the images completely randomly, so the classification accuracy is very poor; it is essentially random guessing.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 13.4% (1341 / 10000)
###Markdown
The convolutional weights should now be different from the weights shown above.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.01086, Stdev: 0.28023
###Markdown
Restore Best Variables: Re-load all the variables that were saved to file during optimization.
###Code
saver.restore(sess=session, save_path=save_path)
###Output
_____no_output_____
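###Markdown
If you do not want to hard-code the checkpoint name, TensorFlow can also look up the most recent checkpoint written to a directory; this is an optional alternative to the call above:
###Code
# Returns the path-prefix of the newest checkpoint, or None if there is none.
latest = tf.train.latest_checkpoint(save_dir)
print(latest)
# saver.restore(sess=session, save_path=latest) would then be equivalent here.
###Output
_____no_output_____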
###Markdown
The classification accuracy is high again when using the variables that were previously saved. Note that the classification accuracy may be slightly higher or lower than that reported above: the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
###Code
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.3% (9826 / 10000)
Example errors:
###Markdown
The convolutional weights should be nearly identical to those shown above, although not completely identical, because the weights shown above had 1000 more optimization iterations.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02792, Stdev: 0.29822
###Markdown
Close TensorFlow Session: We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 04 Save & Restoreby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING!**This tutorial does not work with TensorFlow v. 1.9 due to the PrettyTensor builder API apparently no longer being updated and supported by the Google Developers. It is recommended that you use the _Keras API_ instead, which also makes it much easier to save and load a model, see Tutorial 03-C.** IntroductionThis tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial 02 for a more detailed description of this network and convolution in general.
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional network.* A loss measure that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial 03.The basic idea is to wrap the input tensor `x_image` in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
###Code
x_pretty = pt.wrap(x_image)
###Output
_____no_output_____
###Markdown
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
###Code
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with `defaults_scope` as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
Optimization Method Pretty Tensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Performance MeasuresWe need a few more performance measures to display the progress to the user.First we calculate the predicted class number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
SaverIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the `optimize()`-function.
###Code
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
The saved files are often called checkpoints because they may be written at regular intervals during optimization.This is the directory used for saving and retrieving the data.
###Code
save_dir = 'checkpoints/'
###Output
_____no_output_____
###Markdown
Create the directory if it does not exist.
###Code
if not os.path.exists(save_dir):
os.makedirs(save_dir)
###Output
_____no_output_____
###Markdown
This is the path for the checkpoint-file.
###Code
save_path = os.path.join(save_dir, 'best_validation')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for `weights` and `biases` must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
###Code
def init_variables():
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Execute the function now to initialize the variables.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
###Code
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
###Output
_____no_output_____
###Markdown
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-functions for calculating classificationsThis function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the test-set.
###Code
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the validation-set.
###Code
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
###Output
_____no_output_____
###Markdown
Helper-functions for the classification accuracyThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`
###Code
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
###Output
_____no_output_____
###Markdown
Calculate the classification accuracy on the validation-set.
###Code
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 8.5% (849 / 10000)
###Markdown
The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.00880, Stdev: 0.28635
###Markdown
Perform 10,000 optimization iterationsWe now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.4% (9842 / 10000)
Example errors:
###Markdown
The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.But try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.The mean and standard deviation has also changed slightly, so the optimized weights must be different.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02895, Stdev: 0.29949
###Markdown
Initialize Variables AgainRe-initialize all the variables of the neural network with random values.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 13.4% (1341 / 10000)
###Markdown
The convolutional weights should now be different from the weights shown above.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.01086, Stdev: 0.28023
###Markdown
Restore Best VariablesRe-load all the variables that were saved to file during optimization.
###Code
saver.restore(sess=session, save_path=save_path)
###Output
_____no_output_____
###Markdown
The classification accuracy is high again when using the variables that were previously saved.Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
###Code
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.3% (9826 / 10000)
Example errors:
###Markdown
The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02792, Stdev: 0.29822
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 04 Save & Restoreby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING!**This tutorial does not work with TensorFlow v. 1.9 due to the PrettyTensor builder API apparently no longer being updated and supported by the Google Developers. It is recommended that you use the _Keras API_ instead, which also makes it much easier to save and load a model, see Tutorial 03-C.** IntroductionThis tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial 02 for a more detailed description of this network and convolution in general.
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
###Markdown
Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
###Output
_____no_output_____
###Markdown
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
PrettyTensor version:
###Code
pt.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional network.* A loss measure that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial 03.The basic idea is to wrap the input tensor `x_image` in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
###Code
x_pretty = pt.wrap(x_image)
###Output
_____no_output_____
###Markdown
Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.Note that `pt.defaults_scope(activation_fn=tf.nn.relu)` makes `activation_fn=tf.nn.relu` an argument for each of the layers constructed inside the `with`-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The `defaults_scope` makes it easy to change arguments for all of the layers.
###Code
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.We used the names `layer_conv1` and `layer_conv2` for the two convolutional layers. These are also called variable scopes (not to be confused with `defaults_scope` as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.The implementation is somewhat awkward because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
###Output
_____no_output_____
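###Markdown
Another way to inspect what Pretty Tensor has created is to list all the trainable variables in the graph; the names should include the scope-names chosen above, e.g. something like `layer_conv1/weights:0` (a quick sketch for inspection only, assuming the graph has been built as above):
###Code
# List every trainable variable that has been added to the graph so far.
for var in tf.trainable_variables():
    print(var.name, var.get_shape())
###Output
_____no_output_____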
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
Optimization Method Pretty Tensor gave us the predicted class-label (`y_pred`) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the `AdamOptimizer` to minimize the loss.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Performance MeasuresWe need a few more performance measures to display the progress to the user.First we calculate the predicted class number from the output of the neural network `y_pred`, which is a vector with 10 elements. The class number is the index of the largest element.
###Code
y_pred_cls = tf.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
SaverIn order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the `optimize()`-function.
###Code
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
The saved files are often called checkpoints because they may be written at regular intervals during optimization.This is the directory used for saving and retrieving the data.
###Code
save_dir = 'checkpoints/'
###Output
_____no_output_____
###Markdown
Create the directory if it does not exist.
###Code
if not os.path.exists(save_dir):
os.makedirs(save_dir)
###Output
_____no_output_____
###Markdown
This is the path for the checkpoint-file.
###Code
save_path = os.path.join(save_dir, 'best_validation')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for `weights` and `biases` must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
###Code
def init_variables():
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Execute the function now to initialize the variables.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
###Code
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
###Output
_____no_output_____
###Markdown
Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-functions for calculating classificationsThis function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
###Code
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_images, dtype=int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the test-set.
###Code
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
###Output
_____no_output_____
###Markdown
Calculate the predicted class for the validation-set.
###Code
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
###Output
_____no_output_____
###Markdown
Helper-functions for the classification accuracyThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. `cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4`
###Code
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
###Output
_____no_output_____
###Markdown
Calculate the classification accuracy on the validation-set.
###Code
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Function for printing the classification accuracy on the test-set.It takes a while to compute the classifications for all the images in the test-set, so the results are re-used: the functions above are called directly from this function, and the classifications don't have to be recalculated by each function.
###Code
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 8.5% (849 / 10000)
###Markdown
The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation are shown so we can see whether there is a difference.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.00880, Stdev: 0.28635
###Markdown
Perform 10,000 optimization iterationsWe now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
###Code
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.4% (9842 / 10000)
Example errors:
###Markdown
The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.But try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.The mean and standard deviation have also changed slightly, so the optimized weights must be different.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02895, Stdev: 0.29949
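###Markdown
Instead of comparing the plots by eye, the weight arrays could be compared numerically. This is only a sketch, because it assumes the pre-optimization weights were captured in a hypothetical variable `w_before` by calling `session.run(weights_conv1)` before running `optimize()`:
###Code
# Hypothetical: w_before must have been captured before optimization.
# w_before = session.run(weights_conv1)   # run this before optimize()
# w_after = session.run(weights_conv1)    # run this after optimize()
# print("Max absolute change:", np.abs(w_after - w_before).max())
###Output
_____no_output_____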
###Markdown
Initialize Variables AgainRe-initialize all the variables of the neural network with random values.
###Code
init_variables()
###Output
_____no_output_____
###Markdown
This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor, as it amounts to random guessing.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 13.4% (1341 / 10000)
###Markdown
The convolutional weights should now be different from the weights shown above.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: -0.01086, Stdev: 0.28023
###Markdown
Restore Best VariablesRe-load all the variables that were saved to file during optimization.
###Code
saver.restore(sess=session, save_path=save_path)
###Output
_____no_output_____
###Markdown
The classification accuracy is high again when using the variables that were previously saved.Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
###Code
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.3% (9826 / 10000)
Example errors:
###Markdown
The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
Mean: 0.02792, Stdev: 0.29822
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____ |
progress_files/experimentation.ipynb | ###Markdown
ASL to TextAn application to convert sign language to text. 1. Problem DefinitionNot everyone can understand sign language. To help everyone read sign language, an AI-based solution that converts sign language to text would be ideal. 2. DataUsing the [ASL Alphabet dataset](https://www.kaggle.com/grassknoted/asl-alphabet) from [Kaggle.com](https://www.kaggle.com); the data consists of images (unstructured data).The data structure is:```bash+- asl_alphabet_test | +- asl_alphabet_test| | +- images.jpg|+- asl_alphabet_train| +- asl_alphabet_train| | +- | | | +- images.jpg``` 3. EvaluationAs our problem is multi-class classification, we will use evaluation methods for multi-class classification. 4. Features* Unstructured data* 29 classes/different signs* about 87K images in the train folder (total)* 28 images for the test set
###Code
# imports
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
download the datset
###Code
!pip install kaggle
!mkdir /root/.kaggle/
!touch /root/.kaggle/kaggle.json
!echo '{"username":"yashpawarp","key":"bf231966c472597c9b7dfd0c64730dc9"}' > /root/.kaggle/kaggle.json
!chmod 600 /root/.kaggle/kaggle.json
!kaggle datasets download -d grassknoted/asl-alphabet --unzip
###Output
mkdir: cannot create directory ‘/root/.kaggle/’: File exists
Downloading asl-alphabet.zip to /content
99% 1.02G/1.03G [00:08<00:00, 139MB/s]
100% 1.03G/1.03G [00:09<00:00, 122MB/s]
###Markdown
Unzip the data
###Code
from zipfile import ZipFile
with ZipFile('/content/asl-alphabet.zip', 'r') as zipObj:
# Extract all the contents of zip file in current directory
zipObj.extractall()
###Output
_____no_output_____
###Markdown
Load the data
###Code
def unbatchify(batch):
'''
    Takes a batched dataset of (image, label) Tensors and returns separate arrays of images and labels
'''
images = []
labels = []
# Loop through batched data
for image, label in batch.unbatch().as_numpy_iterator():
images.append(image)
labels.append(label)
return np.array(images), np.array(labels)
data = tf.keras.preprocessing.image_dataset_from_directory(
'/content/asl_alphabet_train/asl_alphabet_train', # data directory
labels="inferred",
label_mode="categorical",
class_names=None,
color_mode="rgb",
batch_size=32,
image_size=(224, 224),
shuffle=True,
seed=42,
validation_split=None,
subset=None,
interpolation="bilinear"
)
from sklearn.model_selection import StratifiedShuffleSplit
def stratify_split(X, y, test_size):
"""
Split the data using StratifiedShuffleSplit
"""
split = StratifiedShuffleSplit(n_splits=1, test_size=test_size, random_state=42)
for train_index, test_index in split.split(X, y):
X_train, y_train = X[train_index], y[train_index]
X_test, y_test = X[test_index], y[test_index]
    return X_train, y_train, X_test, y_test
# unbatchify crashes the session
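# A sketch of how these helpers were meant to fit together, assuming
# enough RAM to hold a subset of the dataset in memory (materializing
# all ~87K images is what crashed the session above). data.take(100)
# keeps only the first 100 batches (~3,200 images). Note that
# StratifiedShuffleSplit accepts the one-hot labels as a
# (n_samples, n_labels) array.
# small_subset = data.take(100)
# images, labels = unbatchify(small_subset)
# X_train, y_train, X_test, y_test = stratify_split(images, labels, test_size=0.2)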
###Output
_____no_output_____ |
05_resilience/continuous_eval.ipynb | ###Markdown
Continuous EvaluationThis notebook demonstrates how to use Cloud AI Platform to execute continuous evaluation of a deployed machine learning model. You'll need to have a project set up with Google Cloud Platform. Set upStart by creating environment variables for your Google Cloud project and bucket. Also, import the libraries we'll need for this notebook.
###Code
# change these to try this notebook out
PROJECT = '<YOUR-GCP-PROJECT>'
BUCKET = '<YOUR-GCS-BUCKET>'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['TFVERSION'] = '2.1'
import shutil
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense, Input, Lambda
from tensorflow.keras.models import Model
print(tf.__version__)
%matplotlib inline
###Output
2.1.1
###Markdown
Train and deploy the modelFor this notebook, we'll build a text classification model using the Hacker News dataset. Each training example consists of an article title and the article source. The model will be trained to classify a given article title as belonging to either `nytimes`, `github` or `techcrunch`. Load the data
###Code
DATASET_NAME = "titles_full.csv"
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(DATASET_NAME, header=None, names=COLUMNS)
titles_df.head()
###Output
_____no_output_____
###Markdown
We one-hot encode the label...
###Code
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
###Output
_____no_output_____
###Markdown
...and create a train/test split.
###Code
N_TRAIN = int(len(titles_df) * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
###Output
_____no_output_____
###Markdown
Swivel ModelWe'll build a simple text classification model using a TensorFlow Hub embedding module derived from Swivel. [Swivel](https://arxiv.org/abs/1602.02215) is an algorithm that essentially factorizes word co-occurrence matrices to create the word embeddings. TF-Hub hosts the pretrained [gnews-swivel-20dim-with-oov](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1) 20-dimensional Swivel module.
###Code
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True)
###Output
_____no_output_____
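###Markdown
Before wiring the module into a model, we can sanity-check it by calling the layer directly on a couple of titles; each string should map to a 20-dimensional embedding (a quick sketch, exact values will vary):
###Code
sample_embeddings = swivel_module(tf.constant([
    "House Passes Sweeping Policing Bill",
    "A native Mac app wrapper for WhatsApp Web"]))
print(sample_embeddings.shape)  # expected: (2, 20)
###Output
_____no_output_____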
###Markdown
The `build_model` function is written so that the TF Hub module can easily be exchanged with another module.
###Code
def build_model(hub_module, model_name):
inputs = Input(shape=[], dtype=tf.string, name="text")
module = hub_module(inputs)
h1 = Dense(16, activation='relu', name="h1")(module)
outputs = Dense(N_CLASSES, activation='softmax', name='outputs')(h1)
model = Model(inputs=inputs, outputs=[outputs], name=model_name)
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
tf.random.set_seed(33)
X_train, Y_train = train_data
history = model.fit(
X_train, Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping()],
)
return history
txtcls_model = build_model(swivel_module, model_name='txtcls_swivel')
txtcls_model.summary()
###Output
Model: "txtcls_swivel"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text (InputLayer) [(None,)] 0
_________________________________________________________________
keras_layer_3 (KerasLayer) (None, 20) 389380
_________________________________________________________________
h1 (Dense) (None, 16) 336
_________________________________________________________________
outputs (Dense) (None, 3) 51
=================================================================
Total params: 389,767
Trainable params: 389,767
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train and evaluate the modelWith the model defined and data set up, next we'll train and evaluate the model.
###Code
# set up train and validation data
train_data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
###Output
_____no_output_____
###Markdown
For training we'll call `train_and_evaluate` on `txtcls_model`.
###Code
txtcls_history = train_and_evaluate(train_data, val_data, txtcls_model)
history = txtcls_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
###Output
_____no_output_____
###Markdown
Calling predict on the model produces the output of the final dense layer. This final layer is used to compute the categorical cross-entropy during training.
###Code
txtcls_model.predict(x=["YouTube introduces Video Chapters to make it easier to navigate longer videos"])
###Output
_____no_output_____
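###Markdown
The raw output is just a 3-vector of softmax probabilities, so reading off the predicted source means mapping the argmax index back through the `CLASSES` dict; a minimal sketch (reusing the `CLASSES` mapping defined earlier):
###Code
import numpy as np

probs = txtcls_model.predict(
    x=["YouTube introduces Video Chapters to make it easier to navigate longer videos"])
# Invert the CLASSES mapping to go from index -> source name.
idx_to_source = {idx: source for source, idx in CLASSES.items()}
print(idx_to_source[np.argmax(probs[0])], np.max(probs[0]))
###Output
_____no_output_____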
###Markdown
We can save the model artifacts in the local directory called `./txtcls_swivel`.
###Code
tf.saved_model.save(txtcls_model, './txtcls_swivel/')
###Output
WARNING:tensorflow:From /home/jupyter/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
###Markdown
...and examine the model's serving default signature. As expected, the model takes as input a text string (e.g. an article title) and returns a 3-dimensional vector of floats (i.e. the softmax output layer).
###Code
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir ./txtcls_swivel/
###Output
2020-06-26 02:27:42.046042: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-06-26 02:27:42.049327: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
The given SavedModel SignatureDef contains the following input(s):
inputs['text'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: serving_default_text:0
The given SavedModel SignatureDef contains the following output(s):
outputs['outputs'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 3)
name: StatefulPartitionedCall_2:0
Method name is: tensorflow/serving/predict
###Markdown
To simplify the returned predictions, we'll modify the model signature so that the model outputs the predicted article source (either `nytimes`, `techcrunch`, or `github`) rather than the final softmax layer. We'll also return the 'confidence' of the model's prediction. This will be the softmax value corresponding to the predicted article source.
###Code
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string)])
def source_name(text):
    # NOTE: the order here must match the CLASSES mapping used for training
    # (github=0, nytimes=1, techcrunch=2).
    labels = tf.constant(['github', 'nytimes', 'techcrunch'], dtype=tf.string)
probs = txtcls_model(text, training=False)
indices = tf.argmax(probs, axis=1)
pred_source = tf.gather(params=labels, indices=indices)
pred_confidence = tf.reduce_max(probs, axis=1)
return {'source': pred_source,
'confidence': pred_confidence}
###Output
_____no_output_____
###Markdown
Now, we'll re-save the new Swivel model that has this updated model signature by referencing the `source_name` function for the model's `serving_default`.
###Code
shutil.rmtree('./txtcls_swivel', ignore_errors=True)
txtcls_model.save('./txtcls_swivel', signatures={'serving_default': source_name})
###Output
INFO:tensorflow:Assets written to: ./txtcls_swivel/assets
###Markdown
Examine the model signature to confirm the changes:
###Code
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
    --dir ./txtcls_swivel/
###Output
2020-06-26 02:32:11.529183: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-06-26 02:32:11.531565: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
The given SavedModel SignatureDef contains the following input(s):
inputs['text'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: serving_default_text:0
The given SavedModel SignatureDef contains the following output(s):
outputs['confidence'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: StatefulPartitionedCall_2:0
outputs['source'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: StatefulPartitionedCall_2:1
Method name is: tensorflow/serving/predict
###Markdown
Now when we call predictions using the updated serving input function, the model will return the predicted article source as a readable string, and the model's confidence for that prediction.
###Code
title1 = "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"
title2 = "YouTube introduces Video Chapters to make it easier to navigate longer videos"
title3 = "A native Mac app wrapper for WhatsApp Web"
restored = tf.keras.models.load_model('./txtcls_swivel')
infer = restored.signatures['serving_default']
outputs = infer(text=tf.constant([title1, title2, title3]))
print(outputs['source'].numpy())
print(outputs['confidence'].numpy())
###Output
[b'nytimes' b'techcrunch' b'techcrunch']
[0.52479076 0.5127273 0.48214597]
###Markdown
Deploy the model for online servingOnce the model is trained and the assets saved, deploying the model to GCP is straightforward. After some time you should be able to see your deployed model and its version on the [model page of GCP console](https://console.cloud.google.com/ai-platform/models).
###Code
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="swivel"
MODEL_LOCATION="./txtcls_swivel/"
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--staging-bucket gs://${BUCKET} \
--runtime-version=2.1
###Output
Creating version (this might take a few minutes)......
......................................................................................................................................................................................................................................................................................................done.
###Markdown
Set up the Evaluation job on CAIPNow that the model is deployed, go to [Cloud AI Platform](https://console.cloud.google.com/ai-platform/models/txtcls/versions) to see the model version you've deployed and [set up an evaluation job](https://console.cloud.google.com/ai-platform/models/txtcls/versions/swivel/evaluation) by clicking on the button called "Create Evaluation Job". You will be asked to provide some relevant information: - Job description: txtcls_swivel_eval - Model objective: text classification - Classification type: single-label classification - Prediction label file path for the annotation specification set: When you create an evaluation job on CAIP, you must specify a CSV file that defines your annotation specification set. This file must have one row for every possible label your model outputs during prediction. Each row should be a comma-separated pair containing the label and a description of the label: label-name,description - Daily sample percentage: We'll set this to 100% so that all online predictions are captured for evaluation. - BigQuery table to house online prediction requests: We'll use the BQ dataset and table `txtcls_eval.swivel`. If you enter a BigQuery table that doesn't exist, one with that name will be created with the correct schema. - Prediction input - Data key: this is the key for the raw prediction data. From examining our deployed model signature, the input data key is `text`. - Data reference key: this is for image models, so we can ignore it - Prediction output - Prediction labels key: This is the prediction key which contains the predicted label (i.e. the article source). For our model, the label key is `source`. - Prediction score key: This is the prediction key which contains the predicted scores (i.e. the model confidence). For our model, the score key is `confidence`. - Ground-truth method: Check the box that indicates we will provide our own labels, and not use a Human data labeling service. Once the evaluation job is set up, the table will be made in BigQuery to capture the online prediction requests.
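###Markdown
For example, the annotation specification set for our three sources could be a CSV like the following (the file name here is hypothetical; upload it to GCS and point the job form at its path):
###Code
%%writefile annotation_spec.csv
github,Article from GitHub
nytimes,Article from The New York Times
techcrunch,Article from TechCrunch
###Output
_____no_output_____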
###Code
%load_ext google.cloud.bigquery
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
###Output
_____no_output_____
###Markdown
Now, every time this model version receives an online prediction request, this information will be captured and stored in the BQ table. Note, this happens every time because we set the sampling proportion to 100%. Send prediction requests to your model Here are some article titles and their groundtruth sources that we can test with prediction.| title | groundtruth ||---|---|| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch || A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes || A native Mac app wrapper for WhatsApp Web | github || Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes || House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes || Scrollability | github || iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch |
###Code
%%writefile input.json
{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "A native Mac app wrapper for WhatsApp Web"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "Astronauts Dock With Space Station After Historic SpaceX Launch"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "Scrollability"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
###Output
CONFIDENCE SOURCE
0.484371 nytimes
###Markdown
Summarizing the results from our model:| title | groundtruth | predicted|---|---|---|| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch | techcrunch || A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes | techcrunch || A native Mac app wrapper for WhatsApp Web | github | techcrunch || Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes | techcrunch || House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes | nytimes || Scrollability | github | techcrunch || iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch | nytimes |
###Code
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
###Output
_____no_output_____
###Markdown
Provide the ground truth for the raw prediction inputNotice the groundtruth is missing. We'll update the evaluation table to contain the ground truth.
###Code
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "techcrunch"}]}'
WHERE
raw_data = '{"instances": [{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "github"}]}'
WHERE
raw_data = '{"instances": [{"text": "A native Mac app wrapper for WhatsApp Web"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "Astronauts Dock With Space Station After Historic SpaceX Launch"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "github"}]}'
WHERE
raw_data = '{"instances": [{"text": "Scrollability"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "techcrunch"}]}'
WHERE
raw_data = '{"instances": [{"text": "iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks"}]}';
###Output
_____no_output_____
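###Markdown
These repetitive UPDATE statements could also be issued from Python with parameterized queries. A sketch using the `google.cloud.bigquery` client imported above (the title/label pairs are the same ground truth as in the cells above, truncated here for brevity):
###Code
groundtruth_labels = [
    ("YouTube introduces Video Chapters to make it easier to navigate longer videos", "techcrunch"),
    ("A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison", "nytimes"),
    ("A native Mac app wrapper for WhatsApp Web", "github"),
]

client = bigquery.Client(project=PROJECT)

update_sql = """
UPDATE `txtcls_eval.swivel`
SET groundtruth = @groundtruth
WHERE raw_data = @raw_data
"""

for title, source in groundtruth_labels:
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter(
                "groundtruth", "STRING",
                '{"predictions": [{"source": "%s"}]}' % source),
            bigquery.ScalarQueryParameter(
                "raw_data", "STRING",
                '{"instances": [{"text": "%s"}]}' % title),
        ])
    client.query(update_sql, job_config=job_config).result()
###Output
_____no_output_____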
###Markdown
We can confirm that the ground truth has been properly added to the table.
###Code
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
###Output
_____no_output_____
###Markdown
Compute evaluation metricsWith the raw prediction input, the model output and the groundtruth in one place, we can evaluate how our model performs, and how it performs across various aspects (e.g. over time, across model versions, across labels, etc.)
###Code
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
Using regex, we can extract the model predictions into an easier-to-read format:
###Code
%%bigquery --project $PROJECT
SELECT
model,
model_version,
time,
REGEXP_EXTRACT(raw_data, r'.*"text": "(.*)"') AS text,
REGEXP_EXTRACT(raw_prediction, r'.*"source": "(.*?)"') AS prediction,
REGEXP_EXTRACT(raw_prediction, r'.*"confidence": (0.\d{2}).*') AS confidence,
REGEXP_EXTRACT(groundtruth, r'.*"source": "(.*?)"') AS groundtruth,
FROM
`txtcls_eval.swivel`
query = '''
SELECT
model,
model_version,
time,
REGEXP_EXTRACT(raw_data, r'.*"text": "(.*)"') AS text,
REGEXP_EXTRACT(raw_prediction, r'.*"source": "(.*?)"') AS prediction,
REGEXP_EXTRACT(raw_prediction, r'.*"confidence": (0.\d{2}).*') AS confidence,
REGEXP_EXTRACT(groundtruth, r'.*"source": "(.*?)"') AS groundtruth,
FROM
`txtcls_eval.swivel`
'''
client = bigquery.Client()
df_results = client.query(query).to_dataframe()
df_results.head(20)
prediction = list(df_results.prediction)
groundtruth = list(df_results.groundtruth)
precision, recall, fscore, support = score(groundtruth, prediction)
from tabulate import tabulate
sources = list(CLASSES.keys())
results = list(zip(sources, precision, recall, fscore, support))
print(tabulate(results, headers = ['source', 'precision', 'recall', 'fscore', 'support'],
tablefmt='orgtbl'))
###Output
| source | precision | recall | fscore | support |
|------------+-------------+----------+----------+-----------|
| github | 0 | 0 | 0 | 2 |
| nytimes | 0.5 | 0.333333 | 0.4 | 3 |
| techcrunch | 0.2 | 0.5 | 0.285714 | 2 |
###Markdown
Or a full classification report from the sklearn library:
###Code
print(classification_report(y_true=groundtruth, y_pred=prediction))
###Output
precision recall f1-score support
github 0.00 0.00 0.00 2
nytimes 0.50 0.33 0.40 3
techcrunch 0.20 0.50 0.29 2
accuracy 0.29 7
macro avg 0.23 0.28 0.23 7
weighted avg 0.27 0.29 0.25 7
###Markdown
Can also examine a confusion matrix:
###Code
cm = confusion_matrix(groundtruth, prediction, labels=sources)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap="Blues")
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(sources)
ax.yaxis.set_ticklabels(sources)
plt.savefig("./txtcls_cm.png")
###Output
_____no_output_____
###Markdown
Examine eval metrics by model version or timestamp By specifying the same evaluation table, two different model versions can be evaluated. Also, since the timestamp is captured, it is straightforward to evaluate model performance over time.
###Code
now = pd.Timestamp.now(tz='UTC')
one_week_ago = now - pd.DateOffset(weeks=1)
one_month_ago = now - pd.DateOffset(months=1)
df_prev_week = df_results[df_results.time > one_week_ago]
df_prev_month = df_results[df_results.time > one_month_ago]
df_prev_month
###Output
_____no_output_____
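###Markdown
With the timestamped dataframes above, per-window accuracy is straightforward; a sketch comparing the full history against the recent windows:
###Code
for name, frame in [("all time", df_results),
                    ("last month", df_prev_month),
                    ("last week", df_prev_week)]:
    accuracy = (frame.prediction == frame.groundtruth).mean()
    print("{:>10}: accuracy = {:.2f} over {} requests".format(
        name, accuracy, len(frame)))
###Output
_____no_output_____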
###Markdown
Continuous EvaluationThis notebook demonstrates how to use Cloud AI Platform to execute continuous evaluation of a deployed machine learning model. You'll need to have a project set up with Google Cloud Platform. Set upStart by creating environment variables for your Google Cloud project and bucket. Also, import the libraries we'll need for this notebook.
###Code
# change these to try this notebook out
PROJECT = 'munn-sandbox'
BUCKET = 'munn-sandbox'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['TFVERSION'] = '2.1'
import shutil
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense, Input, Lambda
from tensorflow.keras.models import Model
print(tf.__version__)
%matplotlib inline
###Output
2.2.0
###Markdown
Train and deploy the modelFor this notebook, we'll build a text classification model using the Hacker News dataset. Each training example consists of an article title and the article source. The model will be trained to classify a given article title as belonging to either `nytimes`, `github` or `techcrunch`. Load the data
###Code
DATASET_NAME = "titles_full.csv"
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(DATASET_NAME, header=None, names=COLUMNS)
titles_df.head()
###Output
_____no_output_____
###Markdown
We one-hot encode the label...
###Code
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
###Output
_____no_output_____
###Markdown
...and create a train/test split.
###Code
N_TRAIN = int(len(titles_df) * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
###Output
_____no_output_____
###Markdown
Swivel ModelWe'll build a simple text classification model using a TensorFlow Hub embedding module derived from Swivel. [Swivel](https://arxiv.org/abs/1602.02215) is an algorithm that essentially factorizes word co-occurrence matrices to create the word embeddings. TF-Hub hosts the pretrained [gnews-swivel-20dim-with-oov](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1) 20-dimensional Swivel module.
###Code
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True)
###Output
_____no_output_____
###Markdown
The `build_model` function is written so that the TF Hub module can easily be exchanged with another module.
###Code
def build_model(hub_module, model_name):
inputs = Input(shape=[], dtype=tf.string, name="text")
module = hub_module(inputs)
h1 = Dense(16, activation='relu', name="h1")(module)
outputs = Dense(N_CLASSES, activation='softmax', name='outputs')(h1)
model = Model(inputs=inputs, outputs=[outputs], name=model_name)
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
tf.random.set_seed(33)
X_train, Y_train = train_data
history = model.fit(
X_train, Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping()],
)
return history
txtcls_model = build_model(swivel_module, model_name='txtcls_swivel')
txtcls_model.summary()
###Output
Model: "txtcls_swivel"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
text (InputLayer) [(None,)] 0
_________________________________________________________________
keras_layer (KerasLayer) (None, 20) 389380
_________________________________________________________________
h1 (Dense) (None, 16) 336
_________________________________________________________________
outputs (Dense) (None, 3) 51
=================================================================
Total params: 389,767
Trainable params: 389,767
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train and evaluate the modelWith the model defined and data set up, next we'll train and evaluate the model.
###Code
# set up train and validation data
train_data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
###Output
_____no_output_____
###Markdown
For training we'll call `train_and_evaluate` on `txtcls_model`.
###Code
txtcls_history = train_and_evaluate(train_data, val_data, txtcls_model)
history = txtcls_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
###Output
_____no_output_____
###Markdown
Calling predict on the model produces the output of the final dense layer. This final layer is used to compute the categorical cross-entropy during training.
###Code
txtcls_model.predict(x=["YouTube introduces Video Chapters to make it easier to navigate longer videos"])
###Output
_____no_output_____
###Markdown
We can save the model artifacts in the local directory called `./txtcls_swivel`.
###Code
tf.saved_model.save(txtcls_model, './txtcls_swivel/')
###Output
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
###Markdown
...and examine the model's serving default signature. As expected, the model takes as input a text string (e.g. an article title) and returns a 3-dimensional vector of floats (i.e. the softmax output layer).
###Code
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir ./txtcls_swivel/
###Output
The given SavedModel SignatureDef contains the following input(s):
inputs['text'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: serving_default_text:0
The given SavedModel SignatureDef contains the following output(s):
outputs['outputs'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 3)
name: StatefulPartitionedCall_2:0
Method name is: tensorflow/serving/predict
###Markdown
To simplify the returned predictions, we'll modify the model signature so that the model outputs the predicted article source (either `nytimes`, `techcrunch`, or `github`) rather than the final softmax layer. We'll also return the 'confidence' of the model's prediction. This will be the softmax value corresponding to the predicted article source.
###Code
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string)])
def source_name(text):
labels = tf.constant(['github', 'nytimes', 'techcrunch'], dtype=tf.string)
probs = txtcls_model(text, training=False)
indices = tf.argmax(probs, axis=1)
pred_source = tf.gather(params=labels, indices=indices)
pred_confidence = tf.reduce_max(probs, axis=1)
return {'source': pred_source,
'confidence': pred_confidence}
###Output
_____no_output_____
###Markdown
Now, we'll re-save the new Swivel model that has this updated model signature by referencing the `source_name` function for the model's `serving_default`.
###Code
shutil.rmtree('./txtcls_swivel', ignore_errors=True)
txtcls_model.save('./txtcls_swivel', signatures={'serving_default': source_name})
###Output
INFO:tensorflow:Assets written to: ./txtcls_swivel/assets
###Markdown
Examine the model signature to confirm the changes:
###Code
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir ./txtcls_swivel/
###Output
The given SavedModel SignatureDef contains the following input(s):
inputs['text'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: serving_default_text:0
The given SavedModel SignatureDef contains the following output(s):
outputs['confidence'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: StatefulPartitionedCall_2:0
outputs['source'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: StatefulPartitionedCall_2:1
Method name is: tensorflow/serving/predict
###Markdown
Now when we call predictions using the updated serving input function, the model will return the predicted article source as a readable string, and the model's confidence for that prediction.
###Code
title1 = "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"
title2 = "YouTube introduces Video Chapters to make it easier to navigate longer videos"
title3 = "As facebook turns 10 zuckerberg wants to change how tech industry works"
restored = tf.keras.models.load_model('./txtcls_swivel')
infer = restored.signatures['serving_default']
outputs = infer(text=tf.constant([title1, title2, title3]))
print(outputs['source'].numpy())
print(outputs['confidence'].numpy())
###Output
[b'github' b'github' b'techcrunch']
[0.9840866 0.8172988 0.9124565]
###Markdown
Deploy the model for online servingOnce the model is trained and the assets saved, deploying the model to GCP is straightforward. After some time you should be able to see your deployed model and its version on the [model page of GCP console](https://console.cloud.google.com/ai-platform/models).
###Code
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="swivel"
MODEL_LOCATION="./txtcls_swivel/"
gcloud ai-platform models create ${MODEL_NAME}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--staging-bucket gs://${BUCKET} \
--runtime-version=2.1
###Output
Using endpoint [https://ml.googleapis.com/]
WARNING: Please explicitly specify a region. Using [us-central1] by default on https://ml.googleapis.com. Please note that your model will be inaccessible from https://us-central1-ml.googelapis.com
Learn more about regional endpoints and see a list of available regions: https://cloud.google.com/ai-platform/prediction/docs/regional-endpoints
Created ml engine model [projects/munn-sandbox/models/txtcls_mm].
Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......
...................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.
###Markdown
Set up the Evaluation job on CAIPNow that the model is deployed, go to [Cloud AI Platform](https://console.cloud.google.com/ai-platform/models/txtcls/versions) to see the model version you've deployed and [set up an evaluation job](https://console.cloud.google.com/ai-platform/models/txtcls/versions/swivel/evaluation) by clicking on the button called "Create Evaluation Job". You will be asked to provide some relevant information: - Job description: txtcls_swivel_eval - Model objective: text classification - Classification type: single-label classification - Prediction label file path for the annotation specification set: When you create an evaluation job on CAIP, you must specify a CSV file that defines your annotation specification set. This file must have one row for every possible label your model outputs during prediction. Each row should be a comma-separated pair containing the label and a description of the label: label-name,description - Daily sample percentage: We'll set this to 100% so that all online predictions are captured for evaluation. - BigQuery table to house online prediction requests: We'll use the BQ dataset and table `txtcls_eval.swivel`. If you enter a BigQuery table that doesn't exist, one with that name will be created with the correct schema. - Prediction input - Data key: this is the key for the raw prediction data. From examining our deployed model signature, the input data key is `text`. - Data reference key: this is for image models, so we can ignore it - Prediction output - Prediction labels key: This is the prediction key which contains the predicted label (i.e. the article source). For our model, the label key is `source`. - Prediction score key: This is the prediction key which contains the predicted scores (i.e. the model confidence). For our model, the score key is `confidence`. - Ground-truth method: Check the box that indicates we will provide our own labels, and not use a Human data labeling service. Once the evaluation job is set up, the table will be made in BigQuery to capture the online prediction requests.
###Code
%load_ext google.cloud.bigquery
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
###Output
_____no_output_____
###Markdown
Now, every time this model version receives an online prediction request, this information will be captured and stored in the BQ table. Note, this happens every time because we set the sampling proportion to 100%. Send prediction requests to your model Here are some article titles and their ground-truth sources that we can use to test prediction.| title | groundtruth ||---|---|| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch || A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes || A native Mac app wrapper for WhatsApp Web | github || Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes || House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes || Scrollability | github || iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch |
###Code
%%writefile input.json
{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "A native Mac app wrapper for WhatsApp Web"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "Astronauts Dock With Space Station After Historic SpaceX Launch"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "Scrollability"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
###Output
CONFIDENCE SOURCE
0.484371 nytimes
###Markdown
Summarizing the results from our model:| title | groundtruth | predicted ||---|---|---|| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch | techcrunch || A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes | techcrunch || A native Mac app wrapper for WhatsApp Web | github | techcrunch || Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes | techcrunch || House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes | nytimes || Scrollability | github | techcrunch || iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch | nytimes |
###Code
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
###Output
_____no_output_____
###Markdown
Provide the ground truth for the raw prediction inputNotice the groundtruth is missing. We'll update the evaluation table to contain the ground truth.
###Code
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "techcrunch"}]}'
WHERE
raw_data = '{"instances": [{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "github"}]}'
WHERE
raw_data = '{"instances": [{"text": "A native Mac app wrapper for WhatsApp Web"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "Astronauts Dock With Space Station After Historic SpaceX Launch"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "github"}]}'
WHERE
raw_data = '{"instances": [{"text": "Scrollability"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "techcrunch"}]}'
WHERE
raw_data = '{"instances": [{"text": "iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks"}]}';
###Output
_____no_output_____
###Markdown
We can confirm that the ground truth has been properly added to the table.
###Code
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
###Output
_____no_output_____
###Markdown
Compute evaluation metricsWith the raw prediction input, the model output and the groundtruth in one place, we can evaluate how our model performs, and how it performs across various aspects (e.g. over time, across model versions, across labels).
###Code
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
Using regex, we can extract the model predictions into an easier-to-read format:
###Code
%%bigquery --project $PROJECT
SELECT
model,
model_version,
time,
REGEXP_EXTRACT(raw_data, r'.*"text": "(.*)"') AS text,
REGEXP_EXTRACT(raw_prediction, r'.*"source": "(.*?)"') AS prediction,
REGEXP_EXTRACT(raw_prediction, r'.*"confidence": (0.\d{2}).*') AS confidence,
REGEXP_EXTRACT(groundtruth, r'.*"source": "(.*?)"') AS groundtruth,
FROM
`txtcls_eval.swivel`
query = '''
SELECT
model,
model_version,
time,
REGEXP_EXTRACT(raw_data, r'.*"text": "(.*)"') AS text,
REGEXP_EXTRACT(raw_prediction, r'.*"source": "(.*?)"') AS prediction,
REGEXP_EXTRACT(raw_prediction, r'.*"confidence": (0.\d{2}).*') AS confidence,
REGEXP_EXTRACT(groundtruth, r'.*"source": "(.*?)"') AS groundtruth,
FROM
`txtcls_eval.swivel`
'''
from google.cloud import bigquery  # needed for the Python client below
client = bigquery.Client()
df_results = client.query(query).to_dataframe()
df_results.head(20)
prediction = list(df_results.prediction)
groundtruth = list(df_results.groundtruth)
precision, recall, fscore, support = score(groundtruth, prediction)
from tabulate import tabulate
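# CLASSES is assumed to be the label mapping defined earlier in this notebook;
# its keys are the source names (github, nytimes, techcrunch).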
sources = list(CLASSES.keys())
results = list(zip(sources, precision, recall, fscore, support))
print(tabulate(results, headers = ['source', 'precision', 'recall', 'fscore', 'support'],
tablefmt='orgtbl'))
###Output
| source | precision | recall | fscore | support |
|------------+-------------+----------+----------+-----------|
| github | 0 | 0 | 0 | 2 |
| nytimes | 0.5 | 0.333333 | 0.4 | 3 |
| techcrunch | 0.2 | 0.5 | 0.285714 | 2 |
###Markdown
Or a full classification report from the sklearn library:
###Code
print(classification_report(y_true=groundtruth, y_pred=prediction))
###Output
precision recall f1-score support
github 0.00 0.00 0.00 2
nytimes 0.50 0.33 0.40 3
techcrunch 0.20 0.50 0.29 2
accuracy 0.29 7
macro avg 0.23 0.28 0.23 7
weighted avg 0.27 0.29 0.25 7
###Markdown
We can also examine a confusion matrix:
###Code
cm = confusion_matrix(groundtruth, prediction, labels=sources)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap="Blues")
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(sources)
ax.yaxis.set_ticklabels(sources)
plt.savefig("./txtcls_cm.png")
###Output
_____no_output_____
###Markdown
Examine eval metrics by model version or timestamp By specifying the same evaluation table, two different model versions can be evaluated. Also, since the timestamp is captured, it is straightforward to evaluate model performance over time.
###Code
now = pd.Timestamp.now(tz='UTC')
one_week_ago = now - pd.DateOffset(weeks=1)
one_month_ago = now - pd.DateOffset(months=1)
df_prev_week = df_results[df_results.time > one_week_ago]
df_prev_month = df_results[df_results.time > one_month_ago]
df_prev_month
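# A hypothetical per-version accuracy breakdown (illustrative; it only uses
# the df_results columns extracted above):
df_results.assign(correct=(df_results.prediction == df_results.groundtruth)) \
          .groupby('model_version').correct.mean()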
###Output
_____no_output_____ |
src/Online_Patient_Conversation_Classifier.ipynb | ###Markdown
###Code
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# import all libraries/dependencies
from tensorflow.contrib import learn
import tensorflow as tf
import numpy as np
import pandas as pd
import datetime
import time
import csv
import re
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
def generate_dataset():
train_data_df = pd.read_csv("train.csv", encoding="ISO-8859-1")
test_data_df = pd.read_csv("test.csv", encoding="ISO-8859-1")
positive_train_data_df = train_data_df[train_data_df['Patient_Tag'] == 1]
positive_train_data = positive_train_data_df[['TRANS_CONV_TEXT']]
negative_train_data_df = train_data_df[train_data_df['Patient_Tag'] == 0]
negative_train_data = negative_train_data_df[['TRANS_CONV_TEXT']]
test_data = test_data_df[['TRANS_CONV_TEXT']]
    # Open each output file once and write one conversation per line.
    with open("patient_conversation-positive.txt", 'w', encoding='utf-8') as file_handler1:
        for index, row in positive_train_data.iterrows():
            file_handler1.write(str(row[0]) + "\n")
    with open("patient_conversation-negative.txt", 'w', encoding='utf-8') as file_handler2:
        for index, row in negative_train_data.iterrows():
            file_handler2.write(str(row[0]) + "\n")
    with open("patient_conversations-test.txt", 'w', encoding='utf-8') as file_handler3:
        for index, row in test_data.iterrows():
            file_handler3.write(str(row[0]) + "\n")
class TextCNN(object):
#A CNN for text classification. Uses an embedding layer, followed by a convolutional, max-pooling and softmax layer.
def __init__(
self, sequence_length, num_classes, vocab_size,
embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0):
# Placeholders for input, output and dropout
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")
# Keeping track of l2 regularization loss (optional)
l2_loss = tf.constant(0.0)
# Embedding layer
with tf.device('/gpu:0'), tf.name_scope("embedding"):
self.W = tf.Variable(
tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
name="W")
self.embedded_chars = tf.nn.embedding_lookup(self.W, self.input_x)
self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)
# Create a convolution + maxpool layer for each filter size
pooled_outputs = []
for i, filter_size in enumerate(filter_sizes):
with tf.name_scope("conv-maxpool-%s" % filter_size):
# Convolution Layer
filter_shape = [filter_size, embedding_size, 1, num_filters]
W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
conv = tf.nn.conv2d(
self.embedded_chars_expanded,
W,
strides=[1, 1, 1, 1],
padding="VALID",
name="conv")
# Apply nonlinearity
h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
# Maxpooling over the outputs
pooled = tf.nn.max_pool(
h,
ksize=[1, sequence_length - filter_size + 1, 1, 1],
strides=[1, 1, 1, 1],
padding='VALID',
name="pool")
pooled_outputs.append(pooled)
# Combine all the pooled features
num_filters_total = num_filters * len(filter_sizes)
self.h_pool = tf.concat(pooled_outputs, 3)
self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])
# Add dropout
with tf.name_scope("dropout"):
self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)
# Final (unnormalized) scores and predictions
with tf.name_scope("output"):
W = tf.get_variable(
"W",
shape=[num_filters_total, num_classes],
initializer=tf.contrib.layers.xavier_initializer())
b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
self.predictions = tf.argmax(self.scores, 1, name="predictions")
# Calculate mean cross-entropy loss
with tf.name_scope("loss"):
losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss
# Accuracy
with tf.name_scope("accuracy"):
correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
def clean_str(string):
# Tokenization/string cleaning for all datasets.
string = re.sub(r"[^A-Za-z0-9(),!?\'\`]", " ", string)
string = re.sub(r"\'s", " \'s", string)
string = re.sub(r"\'ve", " \'ve", string)
string = re.sub(r"n\'t", " n\'t", string)
string = re.sub(r"\'re", " \'re", string)
string = re.sub(r"\'d", " \'d", string)
string = re.sub(r"\'ll", " \'ll", string)
string = re.sub(r",", " , ", string)
string = re.sub(r"!", " ! ", string)
string = re.sub(r"\(", " \\( ", string)
string = re.sub(r"\)", " \\) ", string)
string = re.sub(r"\?", " \\? ", string)
string = re.sub(r"\s{2,}", " ", string)
return string.strip().lower()
def load_data_and_labels(positive_data_file, negative_data_file):
# Loads data from files, splits the data into words and generates labels. Returns split sentences and labels.
# Load data from files
positive_examples = list(open(positive_data_file, "r", encoding="utf8").readlines())
positive_examples = [s.strip() for s in positive_examples]
negative_examples = list(open(negative_data_file, "r", encoding="utf8").readlines())
negative_examples = [s.strip() for s in negative_examples]
# Split by words
x_text = positive_examples + negative_examples
x_text = [clean_str(sent) for sent in x_text]
# Generate labels
positive_labels = [[0, 1] for _ in positive_examples]
negative_labels = [[1, 0] for _ in negative_examples]
y = np.concatenate([positive_labels, negative_labels], 0)
return [x_text, y]
def batch_iter(data, batch_size, num_epochs, shuffle=True):
# Generates a batch iterator for a dataset.
data = np.array(data)
data_size = len(data)
num_batches_per_epoch = int((len(data)-1)/batch_size) + 1
for epoch in range(num_epochs):
# Shuffle the data at each epoch
if shuffle:
shuffle_indices = np.random.permutation(np.arange(data_size))
shuffled_data = data[shuffle_indices]
else:
shuffled_data = data
for batch_num in range(num_batches_per_epoch):
start_index = batch_num * batch_size
end_index = min((batch_num + 1) * batch_size, data_size)
yield shuffled_data[start_index:end_index]
def del_all_flags(FLAGS):
flags_dict = FLAGS._flags()
keys_list = [keys for keys in flags_dict]
print("\nDeleting Following keys...")
for keys in keys_list:
print(keys)
FLAGS.__delattr__(keys)
def print_all_flags(FLAGS):
flags_dict = FLAGS._flags()
keys_list = [keys for keys in flags_dict]
print("\nCreated Following keys...")
for keys in keys_list:
print(keys)
# Parameters
# ==================================================
# Delete all flags before declaring
del_all_flags(tf.flags.FLAGS)
# Data loading params
tf.flags.DEFINE_float("dev_sample_percentage", .1, "Percentage of the training data to use for validation")
tf.flags.DEFINE_string("positive_data_file", "patient_conversation-positive.txt", "Data source for the positive data.")
tf.flags.DEFINE_string("negative_data_file", "patient_conversation-negative.txt", "Data source for the negative data.")
# Model Hyperparameters
tf.flags.DEFINE_integer("embedding_dim", 128, "Dimensionality of character embedding (default: 128)")
tf.flags.DEFINE_string("filter_sizes", "3,4,5", "Comma-separated filter sizes (default: '3,4,5')")
tf.flags.DEFINE_integer("num_filters", 128, "Number of filters per filter size (default: 128)")
tf.flags.DEFINE_float("dropout_keep_prob", 0.5, "Dropout keep probability (default: 0.5)")
tf.flags.DEFINE_float("l2_reg_lambda", 0.0, "L2 regularization lambda (default: 0.0)")
# Training parameters
tf.flags.DEFINE_integer("batch_size", 64, "Batch Size (default: 64)")
tf.flags.DEFINE_integer("num_epochs", 200, "Number of training epochs (default: 200)")
tf.flags.DEFINE_integer("evaluate_every", 100, "Evaluate model on dev set after this many steps (default: 100)")
tf.flags.DEFINE_integer("checkpoint_every", 100, "Save model after this many steps (default: 100)")
tf.flags.DEFINE_integer("num_checkpoints", 5, "Number of checkpoints to store (default: 5)")
# Misc Parameters
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
tf.flags.DEFINE_string("model_path", "", "Path where the model is saved.")
FLAGS = tf.flags.FLAGS
# Print all flags after declaring
print_all_flags(FLAGS)
def preprocess():
# Data Preparation
# ==================================================
# Load data
print("Loading data...")
x_text, y = load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
# Build vocabulary
max_document_length = max([len(x.split(" ")) for x in x_text])
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
x = np.array(list(vocab_processor.fit_transform(x_text)))
# Randomly shuffle data
np.random.seed(10)
shuffle_indices = np.random.permutation(np.arange(len(y)))
x_shuffled = x[shuffle_indices]
y_shuffled = y[shuffle_indices]
# Split train/test set
# TODO: This is very crude, should use cross-validation
dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
del x, y, x_shuffled, y_shuffled
print("Vocabulary Size: {:d}".format(len(vocab_processor.vocabulary_)))
print("Train/Dev split: {:d}/{:d}".format(len(y_train), len(y_dev)))
return x_train, y_train, vocab_processor, x_dev, y_dev
def restore(sess_var, model_path):
if model_path is not None:
if os.path.exists("{}.index".format(model_path)):
saver = tf.train.Saver(var_list=tf.trainable_variables())
saver.restore(sess_var, model_path)
print("Model at %s restored" % model_path)
else:
print("Model path does not exist, skipping...")
else:
print("Model path is None - Nothing to restore")
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
# Training
# ==================================================
with tf.Graph().as_default():
session_conf = tf.ConfigProto(
allow_soft_placement=FLAGS.allow_soft_placement,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
cnn = TextCNN(
sequence_length=x_train.shape[1],
num_classes=y_train.shape[1],
vocab_size=len(vocab_processor.vocabulary_),
embedding_size=FLAGS.embedding_dim,
filter_sizes=list(map(int, FLAGS.filter_sizes.split(","))),
num_filters=FLAGS.num_filters,
l2_reg_lambda=FLAGS.l2_reg_lambda)
# Define Training procedure
global_step = tf.Variable(0, name="global_step", trainable=False)
optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(cnn.loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
# Keep track of gradient values and sparsity (optional)
grad_summaries = []
for g, v in grads_and_vars:
if g is not None:
grad_hist_summary = tf.summary.histogram("{}/grad/hist".format((v.name).replace(":","_")), g)
sparsity_summary = tf.summary.scalar("{}/grad/sparsity".format((v.name).replace(":","_")), tf.nn.zero_fraction(g))
grad_summaries.append(grad_hist_summary)
grad_summaries.append(sparsity_summary)
grad_summaries_merged = tf.summary.merge(grad_summaries)
# Output directory for models and summaries
timestamp = str(int(time.time()))
out_dir = os.path.abspath(os.path.join(os.path.curdir, "runs", timestamp))
print("Writing to {}\n".format(out_dir))
# Summaries for loss and accuracy
loss_summary = tf.summary.scalar("loss", cnn.loss)
acc_summary = tf.summary.scalar("accuracy", cnn.accuracy)
# Train Summaries
train_summary_op = tf.summary.merge([loss_summary, acc_summary, grad_summaries_merged])
train_summary_dir = os.path.join(out_dir, "summaries", "train")
train_summary_writer = tf.summary.FileWriter(train_summary_dir, sess.graph)
# Dev summaries
dev_summary_op = tf.summary.merge([loss_summary, acc_summary])
dev_summary_dir = os.path.join(out_dir, "summaries", "dev")
dev_summary_writer = tf.summary.FileWriter(dev_summary_dir, sess.graph)
# Checkpoint directory. Tensorflow assumes this directory already exists so we need to create it
checkpoint_dir = os.path.abspath(os.path.join(out_dir, "checkpoints"))
checkpoint_prefix = os.path.join(checkpoint_dir, "model")
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
# Write vocabulary
vocab_processor.save(os.path.join(out_dir, "vocab"))
# Initialize all variables
sess.run(tf.global_variables_initializer())
def train_step(x_batch, y_batch):
"""
A single training step
"""
feed_dict = {
cnn.input_x: x_batch,
cnn.input_y: y_batch,
cnn.dropout_keep_prob: FLAGS.dropout_keep_prob
}
_, step, summaries, loss, accuracy = sess.run(
[train_op, global_step, train_summary_op, cnn.loss, cnn.accuracy],
feed_dict)
time_str = datetime.datetime.now().isoformat()
print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
train_summary_writer.add_summary(summaries, step)
def dev_step(x_batch, y_batch, writer=None):
"""
Evaluates model on a dev set
"""
feed_dict = {
cnn.input_x: x_batch,
cnn.input_y: y_batch,
cnn.dropout_keep_prob: 1.0
}
step, summaries, loss, accuracy = sess.run(
[global_step, dev_summary_op, cnn.loss, cnn.accuracy],
feed_dict)
time_str = datetime.datetime.now().isoformat()
print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
if writer:
writer.add_summary(summaries, step)
# Generate batches
batches = batch_iter(
list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
# Training loop. For each batch...
for batch in batches:
x_batch, y_batch = zip(*batch)
train_step(x_batch, y_batch)
current_step = tf.train.global_step(sess, global_step)
if current_step % FLAGS.evaluate_every == 0:
print("\nEvaluation:")
dev_step(x_dev, y_dev, writer=dev_summary_writer)
print("")
if current_step % FLAGS.checkpoint_every == 0:
path = saver.save(sess, checkpoint_prefix, global_step=current_step)
print("Saved model checkpoint to {}\n".format(path))
FLAGS.model_path = path
#restore(sess, FLAGS.model_path)
def test(model_path):
# Delete all flags before declaring
del_all_flags(tf.flags.FLAGS)
tf.flags.DEFINE_string("test_data_file", "patient_conversations-test.txt", "Data source for the test data.")
tf.flags.DEFINE_string("checkpoint_dir", model_path, "Checkpoint directory from training run")
tf.flags.DEFINE_boolean("eval_train", True, "Evaluate on all training data")
tf.flags.DEFINE_integer("batch_size", 2, "Batch Size (default: 64)")
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
FLAGS = tf.flags.FLAGS
# Print all flags after declaring
print_all_flags(FLAGS)
if FLAGS.eval_train:
# Load data from files
test_examples = list(open(FLAGS.test_data_file, "r", encoding="utf8").readlines())
test_examples = [s.strip() for s in test_examples]
# Split by words
x_raw = [clean_str(sent) for sent in test_examples]
else:
x_raw = ["I think I am suffering from cold and flu", "I am really loving this problem"]
# Map data into vocabulary
vocab_path = os.path.join(FLAGS.checkpoint_dir, "./../../", "vocab")
vocab_processor = learn.preprocessing.VocabularyProcessor.restore(vocab_path)
x_test = np.array(list(vocab_processor.transform(x_raw)))
print("\nEvaluating...\n")
# Evaluation
# ==================================================
checkpoint_file = tf.train.latest_checkpoint(os.path.join(FLAGS.checkpoint_dir, "./../"))
graph = tf.Graph()
with graph.as_default():
session_conf = tf.ConfigProto(allow_soft_placement=FLAGS.allow_soft_placement, log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
# Load the saved meta graph and restore variables
saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
saver.restore(sess, checkpoint_file)
# Get the placeholders from the graph by name
input_x = graph.get_operation_by_name("input_x").outputs[0]
# input_y = graph.get_operation_by_name("input_y").outputs[0]
dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]
# Tensors we want to evaluate
predictions = graph.get_operation_by_name("output/predictions").outputs[0]
# Generate batches for one epoch
batches = batch_iter(list(x_test), FLAGS.batch_size, 1, shuffle=False)
# Collect the predictions here
patient_tag = ["Patient_Tag"]
for x_test_batch in batches:
batch_predictions = sess.run(predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
patient_tag = np.concatenate([patient_tag, batch_predictions])
patient_index = ["Index"]
patient_index = np.concatenate([patient_index, np.arange(1,len(x_raw)+1,1)])
conversation_data = ["Conversation_Data"]
conversation_data = np.concatenate([conversation_data, np.array(x_raw)])
# Save the evaluation to a csv
predictions = np.column_stack((patient_index, patient_tag))
predictions_description = np.column_stack((conversation_data, patient_tag))
submission_file = os.path.join(FLAGS.checkpoint_dir, "../../../../", "prediction.csv")
predictions_description_file = os.path.join(FLAGS.checkpoint_dir, "../../../../", "prediction-description.csv")
out_path = os.path.abspath(submission_file)
print("Saving evaluation to {0}".format(out_path))
with open(out_path, 'w+', encoding="utf8", newline='') as f:
csv.writer(f).writerows(predictions)
out_path = os.path.abspath(predictions_description_file)
print("Saving prediction descriptions to {0}".format(out_path))
with open(out_path, 'w+', encoding="utf8", newline='') as f:
csv.writer(f).writerows(predictions_description)
def main(argv=None):
generate_dataset()
x_train, y_train, vocab_processor, x_dev, y_dev = preprocess()
train(x_train, y_train, vocab_processor, x_dev, y_dev)
test(FLAGS.model_path)
if __name__ == '__main__':
tf.app.run()
os._exit(1)
# Download prediction.csv file
files.download("prediction.csv")
files.download("prediction-description.csv")
###Output
_____no_output_____
###Markdown
###Code
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
#Install required dependencies
!pip install tensorflow==1.12.0
# import all libraries/dependencies
from tensorflow.contrib import learn
import tensorflow as tf
import numpy as np
import pandas as pd
import datetime
import time
import csv
import re
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
def generate_dataset():
train_data_df = pd.read_csv("train.csv", encoding="ISO-8859-1")
test_data_df = pd.read_csv("test.csv", encoding="ISO-8859-1")
positive_train_data_df = train_data_df[train_data_df['Patient_Tag'] == 1]
positive_train_data = positive_train_data_df[['TRANS_CONV_TEXT']]
negative_train_data_df = train_data_df[train_data_df['Patient_Tag'] == 0]
negative_train_data = negative_train_data_df[['TRANS_CONV_TEXT']]
test_data = test_data_df[['TRANS_CONV_TEXT']]
    # Open each output file once (re-opening with mode 'w' inside the loop
    # would truncate the file on every iteration) and write one conversation
    # per line.
    with open("patient_conversation-positive.txt", 'w', encoding='utf-8') as file_handler1:
        for index, row in positive_train_data.iterrows():
            file_handler1.write(str(row[0]) + "\n")
    with open("patient_conversation-negative.txt", 'w', encoding='utf-8') as file_handler2:
        for index, row in negative_train_data.iterrows():
            file_handler2.write(str(row[0]) + "\n")
    with open("patient_conversations-test.txt", 'w', encoding='utf-8') as file_handler3:
        for index, row in test_data.iterrows():
            file_handler3.write(str(row[0]) + "\n")
class TextCNN(object):
#A CNN for text classification. Uses an embedding layer, followed by a convolutional, max-pooling and softmax layer.
def __init__(
self, sequence_length, num_classes, vocab_size,
embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0):
# Placeholders for input, output and dropout
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")
# Keeping track of l2 regularization loss (optional)
l2_loss = tf.constant(0.0)
# Embedding layer
with tf.device('/gpu:0'), tf.name_scope("embedding"):
self.W = tf.Variable(
tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
name="W")
self.embedded_chars = tf.nn.embedding_lookup(self.W, self.input_x)
self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)
# Create a convolution + maxpool layer for each filter size
pooled_outputs = []
for i, filter_size in enumerate(filter_sizes):
with tf.name_scope("conv-maxpool-%s" % filter_size):
# Convolution Layer
filter_shape = [filter_size, embedding_size, 1, num_filters]
W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
conv = tf.nn.conv2d(
self.embedded_chars_expanded,
W,
strides=[1, 1, 1, 1],
padding="VALID",
name="conv")
# Apply nonlinearity
h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
# Maxpooling over the outputs
pooled = tf.nn.max_pool(
h,
ksize=[1, sequence_length - filter_size + 1, 1, 1],
strides=[1, 1, 1, 1],
padding='VALID',
name="pool")
pooled_outputs.append(pooled)
# Combine all the pooled features
num_filters_total = num_filters * len(filter_sizes)
self.h_pool = tf.concat(pooled_outputs, 3)
self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])
# Add dropout
with tf.name_scope("dropout"):
self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)
# Final (unnormalized) scores and predictions
with tf.name_scope("output"):
W = tf.get_variable(
"W",
shape=[num_filters_total, num_classes],
initializer=tf.contrib.layers.xavier_initializer())
b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
l2_loss += tf.nn.l2_loss(W)
l2_loss += tf.nn.l2_loss(b)
self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
self.predictions = tf.argmax(self.scores, 1, name="predictions")
# Calculate mean cross-entropy loss
with tf.name_scope("loss"):
losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss
# Accuracy
with tf.name_scope("accuracy"):
correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
def clean_str(string):
# Tokenization/string cleaning for all datasets.
string = re.sub(r"[^A-Za-z0-9(),!?\'\`]", " ", string)
string = re.sub(r"\'s", " \'s", string)
string = re.sub(r"\'ve", " \'ve", string)
string = re.sub(r"n\'t", " n\'t", string)
string = re.sub(r"\'re", " \'re", string)
string = re.sub(r"\'d", " \'d", string)
string = re.sub(r"\'ll", " \'ll", string)
string = re.sub(r",", " , ", string)
string = re.sub(r"!", " ! ", string)
string = re.sub(r"\(", " \\( ", string)
string = re.sub(r"\)", " \\) ", string)
string = re.sub(r"\?", " \\? ", string)
string = re.sub(r"\s{2,}", " ", string)
return string.strip().lower()
def load_data_and_labels(positive_data_file, negative_data_file):
# Loads data from files, splits the data into words and generates labels. Returns split sentences and labels.
# Load data from files
positive_examples = list(open(positive_data_file, "r", encoding="utf8").readlines())
positive_examples = [s.strip() for s in positive_examples]
negative_examples = list(open(negative_data_file, "r", encoding="utf8").readlines())
negative_examples = [s.strip() for s in negative_examples]
# Split by words
x_text = positive_examples + negative_examples
x_text = [clean_str(sent) for sent in x_text]
# Generate labels
positive_labels = [[0, 1] for _ in positive_examples]
negative_labels = [[1, 0] for _ in negative_examples]
y = np.concatenate([positive_labels, negative_labels], 0)
return [x_text, y]
def batch_iter(data, batch_size, num_epochs, shuffle=True):
# Generates a batch iterator for a dataset.
data = np.array(data)
data_size = len(data)
num_batches_per_epoch = int((len(data)-1)/batch_size) + 1
for epoch in range(num_epochs):
# Shuffle the data at each epoch
if shuffle:
shuffle_indices = np.random.permutation(np.arange(data_size))
shuffled_data = data[shuffle_indices]
else:
shuffled_data = data
for batch_num in range(num_batches_per_epoch):
start_index = batch_num * batch_size
end_index = min((batch_num + 1) * batch_size, data_size)
yield shuffled_data[start_index:end_index]
def del_all_flags(FLAGS):
flags_dict = FLAGS._flags()
keys_list = [keys for keys in flags_dict]
print("\nDeleting Following keys...")
for keys in keys_list:
print(keys)
FLAGS.__delattr__(keys)
def print_all_flags(FLAGS):
flags_dict = FLAGS._flags()
keys_list = [keys for keys in flags_dict]
print("\nCreated Following keys...")
for keys in keys_list:
print(keys)
# Parameters
# ==================================================
# Delete all flags before declaring
del_all_flags(tf.flags.FLAGS)
# Data loading params
tf.flags.DEFINE_float("dev_sample_percentage", .1, "Percentage of the training data to use for validation")
tf.flags.DEFINE_string("positive_data_file", "patient_conversation-positive.txt", "Data source for the positive data.")
tf.flags.DEFINE_string("negative_data_file", "patient_conversation-negative.txt", "Data source for the negative data.")
# Model Hyperparameters
tf.flags.DEFINE_integer("embedding_dim", 128, "Dimensionality of character embedding (default: 128)")
tf.flags.DEFINE_string("filter_sizes", "3,4,5", "Comma-separated filter sizes (default: '3,4,5')")
tf.flags.DEFINE_integer("num_filters", 128, "Number of filters per filter size (default: 128)")
tf.flags.DEFINE_float("dropout_keep_prob", 0.5, "Dropout keep probability (default: 0.5)")
tf.flags.DEFINE_float("l2_reg_lambda", 0.0, "L2 regularization lambda (default: 0.0)")
# Training parameters
tf.flags.DEFINE_integer("batch_size", 64, "Batch Size (default: 64)")
tf.flags.DEFINE_integer("num_epochs", 200, "Number of training epochs (default: 200)")
tf.flags.DEFINE_integer("evaluate_every", 100, "Evaluate model on dev set after this many steps (default: 100)")
tf.flags.DEFINE_integer("checkpoint_every", 100, "Save model after this many steps (default: 100)")
tf.flags.DEFINE_integer("num_checkpoints", 5, "Number of checkpoints to store (default: 5)")
# Misc Parameters
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
tf.flags.DEFINE_string("model_path", "", "Path where the model is saved.")
FLAGS = tf.flags.FLAGS
# Print all flags after declaring
print_all_flags(FLAGS)
def preprocess():
# Data Preparation
# ==================================================
# Load data
print("Loading data...")
x_text, y = load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
# Build vocabulary
max_document_length = max([len(x.split(" ")) for x in x_text])
vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
x = np.array(list(vocab_processor.fit_transform(x_text)))
# Randomly shuffle data
np.random.seed(10)
shuffle_indices = np.random.permutation(np.arange(len(y)))
x_shuffled = x[shuffle_indices]
y_shuffled = y[shuffle_indices]
# Split train/test set
# TODO: This is very crude, should use cross-validation
dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
del x, y, x_shuffled, y_shuffled
print("Vocabulary Size: {:d}".format(len(vocab_processor.vocabulary_)))
print("Train/Dev split: {:d}/{:d}".format(len(y_train), len(y_dev)))
return x_train, y_train, vocab_processor, x_dev, y_dev
def restore(sess_var, model_path):
if model_path is not None:
if os.path.exists("{}.index".format(model_path)):
saver = tf.train.Saver(var_list=tf.trainable_variables())
saver.restore(sess_var, model_path)
print("Model at %s restored" % model_path)
else:
print("Model path does not exist, skipping...")
else:
print("Model path is None - Nothing to restore")
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
# Training
# ==================================================
with tf.Graph().as_default():
session_conf = tf.ConfigProto(
allow_soft_placement=FLAGS.allow_soft_placement,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
cnn = TextCNN(
sequence_length=x_train.shape[1],
num_classes=y_train.shape[1],
vocab_size=len(vocab_processor.vocabulary_),
embedding_size=FLAGS.embedding_dim,
filter_sizes=list(map(int, FLAGS.filter_sizes.split(","))),
num_filters=FLAGS.num_filters,
l2_reg_lambda=FLAGS.l2_reg_lambda)
# Define Training procedure
global_step = tf.Variable(0, name="global_step", trainable=False)
optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(cnn.loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
# Keep track of gradient values and sparsity (optional)
grad_summaries = []
for g, v in grads_and_vars:
if g is not None:
grad_hist_summary = tf.summary.histogram("{}/grad/hist".format((v.name).replace(":","_")), g)
sparsity_summary = tf.summary.scalar("{}/grad/sparsity".format((v.name).replace(":","_")), tf.nn.zero_fraction(g))
grad_summaries.append(grad_hist_summary)
grad_summaries.append(sparsity_summary)
grad_summaries_merged = tf.summary.merge(grad_summaries)
# Output directory for models and summaries
timestamp = str(int(time.time()))
out_dir = os.path.abspath(os.path.join(os.path.curdir, "runs", timestamp))
print("Writing to {}\n".format(out_dir))
# Summaries for loss and accuracy
loss_summary = tf.summary.scalar("loss", cnn.loss)
acc_summary = tf.summary.scalar("accuracy", cnn.accuracy)
# Train Summaries
train_summary_op = tf.summary.merge([loss_summary, acc_summary, grad_summaries_merged])
train_summary_dir = os.path.join(out_dir, "summaries", "train")
train_summary_writer = tf.summary.FileWriter(train_summary_dir, sess.graph)
# Dev summaries
dev_summary_op = tf.summary.merge([loss_summary, acc_summary])
dev_summary_dir = os.path.join(out_dir, "summaries", "dev")
dev_summary_writer = tf.summary.FileWriter(dev_summary_dir, sess.graph)
# Checkpoint directory. Tensorflow assumes this directory already exists so we need to create it
checkpoint_dir = os.path.abspath(os.path.join(out_dir, "checkpoints"))
checkpoint_prefix = os.path.join(checkpoint_dir, "model")
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
# Write vocabulary
vocab_processor.save(os.path.join(out_dir, "vocab"))
# Initialize all variables
sess.run(tf.global_variables_initializer())
def train_step(x_batch, y_batch):
"""
A single training step
"""
feed_dict = {
cnn.input_x: x_batch,
cnn.input_y: y_batch,
cnn.dropout_keep_prob: FLAGS.dropout_keep_prob
}
_, step, summaries, loss, accuracy = sess.run(
[train_op, global_step, train_summary_op, cnn.loss, cnn.accuracy],
feed_dict)
time_str = datetime.datetime.now().isoformat()
print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
train_summary_writer.add_summary(summaries, step)
def dev_step(x_batch, y_batch, writer=None):
"""
Evaluates model on a dev set
"""
feed_dict = {
cnn.input_x: x_batch,
cnn.input_y: y_batch,
cnn.dropout_keep_prob: 1.0
}
step, summaries, loss, accuracy = sess.run(
[global_step, dev_summary_op, cnn.loss, cnn.accuracy],
feed_dict)
time_str = datetime.datetime.now().isoformat()
print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
if writer:
writer.add_summary(summaries, step)
# Generate batches
batches = batch_iter(
list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
# Training loop. For each batch...
for batch in batches:
x_batch, y_batch = zip(*batch)
train_step(x_batch, y_batch)
current_step = tf.train.global_step(sess, global_step)
if current_step % FLAGS.evaluate_every == 0:
print("\nEvaluation:")
dev_step(x_dev, y_dev, writer=dev_summary_writer)
print("")
if current_step % FLAGS.checkpoint_every == 0:
path = saver.save(sess, checkpoint_prefix, global_step=current_step)
print("Saved model checkpoint to {}\n".format(path))
FLAGS.model_path = path
#restore(sess, FLAGS.model_path)
def test(model_path):
# Delete all flags before declaring
del_all_flags(tf.flags.FLAGS)
tf.flags.DEFINE_string("test_data_file", "patient_conversations-test.txt", "Data source for the test data.")
tf.flags.DEFINE_string("checkpoint_dir", model_path, "Checkpoint directory from training run")
tf.flags.DEFINE_boolean("eval_train", True, "Evaluate on all training data")
tf.flags.DEFINE_integer("batch_size", 2, "Batch Size (default: 64)")
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
FLAGS = tf.flags.FLAGS
# Print all flags after declaring
print_all_flags(FLAGS)
if FLAGS.eval_train:
# Load data from files
test_examples = list(open(FLAGS.test_data_file, "r", encoding="utf8").readlines())
test_examples = [s.strip() for s in test_examples]
# Split by words
x_raw = [clean_str(sent) for sent in test_examples]
else:
x_raw = ["I think I am suffering from cold and flu", "I am really loving this problem"]
# Map data into vocabulary
vocab_path = os.path.join(FLAGS.checkpoint_dir, "./../../", "vocab")
vocab_processor = learn.preprocessing.VocabularyProcessor.restore(vocab_path)
x_test = np.array(list(vocab_processor.transform(x_raw)))
print("\nEvaluating...\n")
# Evaluation
# ==================================================
checkpoint_file = tf.train.latest_checkpoint(os.path.join(FLAGS.checkpoint_dir, "./../"))
graph = tf.Graph()
with graph.as_default():
session_conf = tf.ConfigProto(allow_soft_placement=FLAGS.allow_soft_placement, log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
# Load the saved meta graph and restore variables
saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
saver.restore(sess, checkpoint_file)
# Get the placeholders from the graph by name
input_x = graph.get_operation_by_name("input_x").outputs[0]
# input_y = graph.get_operation_by_name("input_y").outputs[0]
dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]
# Tensors we want to evaluate
predictions = graph.get_operation_by_name("output/predictions").outputs[0]
# Generate batches for one epoch
batches = batch_iter(list(x_test), FLAGS.batch_size, 1, shuffle=False)
# Collect the predictions here
patient_tag = ["Patient_Tag"]
for x_test_batch in batches:
batch_predictions = sess.run(predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
patient_tag = np.concatenate([patient_tag, batch_predictions])
patient_index = ["Index"]
patient_index = np.concatenate([patient_index, np.arange(1,len(x_raw)+1,1)])
conversation_data = ["Conversation_Data"]
conversation_data = np.concatenate([conversation_data, np.array(x_raw)])
# Save the evaluation to a csv
predictions = np.column_stack((patient_index, patient_tag))
predictions_description = np.column_stack((conversation_data, patient_tag))
submission_file = os.path.join(FLAGS.checkpoint_dir, "../../../../", "prediction.csv")
predictions_description_file = os.path.join(FLAGS.checkpoint_dir, "../../../../", "prediction-description.csv")
out_path = os.path.abspath(submission_file)
print("Saving evaluation to {0}".format(out_path))
with open(out_path, 'w+', encoding="utf8", newline='') as f:
csv.writer(f).writerows(predictions)
out_path = os.path.abspath(predictions_description_file)
print("Saving prediction descriptions to {0}".format(out_path))
with open(out_path, 'w+', encoding="utf8", newline='') as f:
csv.writer(f).writerows(predictions_description)
def main(argv=None):
generate_dataset()
x_train, y_train, vocab_processor, x_dev, y_dev = preprocess()
train(x_train, y_train, vocab_processor, x_dev, y_dev)
test(FLAGS.model_path)
if __name__ == '__main__':
tf.app.run()
os._exit(1)
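# Note: os._exit(1) terminates the Python process immediately (in Colab this
# restarts the runtime); the CSV files written above persist on disk, so the
# download cell below can still be run afterwards.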
# Download prediction.csv file
files.download("prediction.csv")
files.download("prediction-description.csv")
###Output
_____no_output_____ |
PY_Basics/TMWP_Basics_BuiltIn_DataStructures.ipynb | ###Markdown
Python 3.6 [conda env: PY36] Working With Basic Built In Python Data StructuresThis document gives a quick demo of methods for lists, sets, etc. Dictionaries are handled in a separate document.TOC:- [Working with Lists](#lst)- [Working with Sets](#sets) Working with Lists
###Code
tstLst = [] # build an empty list
print(tstLst)
tstLst.append(9)
tstLst.append("Dog")
print(tstLst)
# adding lists together
tstLst += ["cat", 10, 18, "animal", "mouse"]
tstLst = tstLst + ["snow", "999", 998]
print(tstLst)
tstLst.remove("cat") # throws an error if not found
print("tstLst after .remove('cat'): %s" %tstLst)
print("Poping this value: %s" %tstLst.pop()) # pops the last value off the top
print("Popping the 3rd value: %s" %tstLst.pop(2))
print(tstLst)
# to sort it, the list must all be the same type:
tstLst = [str(i) for i in tstLst]
print(tstLst) # unsorted
tstLst.sort()
print(tstLst) # note: .sort() mutates and returns None
# if this cell throws an error, re-run previous cells before re-running it
# sorted returns a value, this example combines with list comprehensions and string functions
# previous cells converted all values within the list to strings
sorted([i.lower() for i in tstLst if not i.isnumeric()]) # note: upper case sorts ahead of lower case, hence .lower()
tstLst.insert(1, "13") # inserts after index 1 i original list
print(tstLst)
tstLst.reverse() # mutates original and returns None
print(tstLst)
###Output
['snow', 'mouse', 'animal', 'Dog', '999', '9', '13', '18']
###Markdown
Working with Sets
###Code
var1 = {}
def test_var(var):
print("Type %s: \nContent: %s" %(type(var), var))
test_var(var1)
testSet = set() # build an empty set, note that var = {} builds an empty dictionary
test_var(testSet)
try:
    testSet.remove(9)   # .remove() raises a KeyError when the element is missing
except KeyError:
    print("remove(9) raised KeyError - 9 is not in the set")
testSet.discard(9)      # .discard() is silent when the element is missing
testSet.add(8)
testSet.add(13)
testSet = testSet.union({7,14,15})
print(testSet)
testSet.intersection({1,2,3,4,5,6,7,8,9})
# for future development ...
# demonstrate difference, symmetric_difference, etc ...
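# Illustrative examples of those operations (values follow from testSet built
# above, which is {7, 8, 13, 14, 15} at this point):
print(testSet.difference({7, 8}))                 # {13, 14, 15} - elements in testSet only
print(testSet.symmetric_difference({7, 99}))      # {8, 13, 14, 15, 99} - elements in exactly one set
print(testSet.issubset({7, 8, 13, 14, 15, 99}))   # True - every element is in the other set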
###Output
_____no_output_____ |
DAY 401 ~ 500/DAY479_[BaekJoon] 크로스워드 만들기 (Python).ipynb | ###Markdown
Friday, September 10, 2021 BaekJoon - Making a Crossword (Python) Problem : https://www.acmicpc.net/problem/2804 Blog : https://somjang.tistory.com/entry/BaekJoon-2804%EB%B2%88-%ED%81%AC%EB%A1%9C%EC%8A%A4%EC%9B%8C%EB%93%9C-%EB%A7%8C%EB%93%A4%EA%B8%B0-Python Solution
###Code
def cross_word(string):
string1, string2 = string.split()
N, M = len(string1), len(string2)
cross_row, cross_col = 0, 0
for idx, char in enumerate(string1):
if char in string2:
cross_row = string2.index(char)
cross_col = idx
break
temp_string = ["." * N for _ in range(M)]
for idx in range(len(temp_string)):
if idx == cross_row:
temp_string[idx] = string1
else:
temp_string[idx] = list(temp_string[idx])
temp_string[idx][cross_col] = string2[idx]
temp_string[idx] = "".join(temp_string[idx])
for temp in temp_string:
print(temp)
if __name__ == "__main__":
string = input()
cross_word(string)
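# Worked example (follows directly from the code above): the input "ABBA CCB"
# crosses at string1's first letter that also occurs in string2 ('B'), printing:
# .C..
# .C..
# ABBA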
###Output
_____no_output_____ |
project3.ipynb | ###Markdown
Project 3 Step 1. Data Preparation
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from statsmodels.graphics.gofplots import qqplot_2samples
df = pd.read_csv("ad.csv")
df.head(20)
df.shape
X_train,X_test,y_train,y_test = train_test_split(df.drop("Sales", axis = 1), df.Sales, test_size=0.5, random_state=0)
###Output
_____no_output_____
###Markdown
Step 2. Linear Regression of Original Data Fit the linear regression and get training errors
###Code
m1 = LinearRegression().fit(X_train,y_train)
# Training data errors
train_err = y_train - m1.predict(X_train)
train_err.head(20)
###Output
_____no_output_____
###Markdown
The distribution of training errors by histogram and QQ plot
###Code
plt.hist(train_err)
plt.title("Training Error Empirical Distribution")
plt.show()
#sm.qqplot(train_err, line = '45')
#plt.title("QQ Plot of Training Error")
#plt.show()
###Output
_____no_output_____
###Markdown
Validation Errors
###Code
val_err = y_test - m1.predict(X_test)
val_err.head(20)
###Output
_____no_output_____
###Markdown
The distribution of validation errors by histogram and QQ plot
###Code
plt.hist(val_err)
plt.title("Validation Error Empirical Distribution")
plt.show()
#sm.qqplot(val_err, line = '45')
#plt.title("QQ Plot of Validation Error")
#plt.show()
plt.scatter(np.sort(val_err), np.sort(train_err))
ypoints = xpoints = plt.ylim()
plt.plot(xpoints, ypoints, color='k', lw=2, scalex=False, scaley=False)
plt.title("QQ Plot of Training and Validation Errors")
plt.show()
###Output
_____no_output_____
###Markdown
Step 3. Remove Outliers Remove outliers in training dataset
###Code
train_err_std = np.std(train_err)
train_err_std
train_err_mean = np.mean(train_err)
train_err_mean
train_outlier_index = []
for i in train_err.index.tolist():
if abs(train_err[i] - train_err_mean)/train_err_std >= 2:
train_outlier_index.append(i)
train_outlier_index
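# An equivalent vectorized form (illustrative):
# train_outlier_index = train_err[abs(train_err - train_err_mean)
#                                 >= 2 * train_err_std].index.tolist()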
X_train_new = X_train.drop(train_outlier_index)
X_train_new.head(20)
y_train_new = y_train.drop(train_outlier_index)
y_train_new.head(20)
###Output
_____no_output_____
###Markdown
Remove outliers in validation dataset
###Code
val_err_std = np.std(val_err)
val_err_std
val_err_mean = np.mean(val_err)
val_err_mean
val_outlier_index = []
for i in val_err.index.tolist():
if abs(val_err[i] - val_err_mean)/val_err_std >= 2.1: #1.5
val_outlier_index.append(i)
val_outlier_index
X_test_new = X_test.drop(val_outlier_index)
X_test_new.head(20)
y_test_new = y_test.drop(val_outlier_index)
y_test_new.head(20)
###Output
_____no_output_____
###Markdown
Step 4. Refitting LR to Get New Errors New Training Errors
###Code
m2 = LinearRegression().fit(X_train_new, y_train_new)
train_err_new = y_train_new - m2.predict(X_train_new)
train_err_new.head(20)
len(train_err_new) # 4 Outliers were removed from training data
###Output
_____no_output_____
###Markdown
New Validation Errors
###Code
val_err_new = y_test_new - m2.predict(X_test_new)
val_err_new.head(20)
len(val_err_new) # 4 Outliers were removed from validation data
###Output
_____no_output_____
###Markdown
Distribution of new errors
###Code
plt.hist(train_err_new)
plt.title("New Training Error Empirical Distribution")
plt.show()
# Very close to normal distribution
plt.hist(val_err_new)
plt.title("New Validation Error Empirical Distribution")
plt.show()
# Closer to normally distributed than before (which was left skewed)
plt.scatter(np.sort(val_err_new), np.sort(train_err_new))
ypoints = xpoints = plt.xlim()
plt.plot(xpoints, ypoints, color='k', lw=2, scalex=False, scaley=False)
plt.title("QQ Plot of New Training and Validation Errors")
plt.show()
###Output
_____no_output_____
###Markdown
Linear Regression Coefficients
###Code
b1, b2 = m2.coef_
b0 = m2.intercept_
b0
b1
b2
###Output
_____no_output_____
###Markdown
Step 5. Training Models of SAA and Deterministic SAA Model
###Code
from pyomo.environ import *
import pyomo.environ as pyo
import pyomo.gdp as gdp
from pyomo.opt import SolverFactory
np.random.seed(123)
# Lower & Upper Bound for x1 and x2
l1 = min(X_train_new['TV'])
u1 = max(X_train_new['TV'])
l2 = min(X_train_new['Radio'])
u2 = max(X_train_new['Radio'])
def SAA_model(t_err_sample):
m = pyo.ConcreteModel()
m.N = pyo.Set(initialize=range(1, N+1))
# Decision variables
# first-stage variables
m.x1 = Var(within=NonNegativeReals, bounds=(l1,u1))
m.x2 = Var(within=NonNegativeReals, bounds=(l2,u2))
# second-stage variables
m.yA = Var(m.N, within=NonNegativeReals)
m.yB = Var(m.N, within=NonNegativeReals)
# Objective function
m.obj = Objective(expr = - 0.1 * m.x1 - 0.5 * m.x2 +\
(1/N) * sum(3 * m.yA[n] + 5 * m.yB[n] for n in m.N), sense=pyo.maximize)
# Constraints
m.s1 = Constraint(expr = m.x1 + m.x2 <= 200)
m.s2 = Constraint(expr = m.x1 - 0.5 * m.x2 >= 0)
m.s3 = ConstraintList()
m.s4 = ConstraintList()
m.s5 = ConstraintList()
m.s6 = ConstraintList()
for n in m.N:
m.s3.add(m.yA[n] <= 8)
m.s4.add(m.yB[n] <= 12)
m.s5.add(3 * m.yA[n] + 2 * m.yB[n] <= 36)
m.s6.add(m.yA[n] + m.yB[n] <= b0 + m.x1 * b1 + m.x2 * b2 + t_err_sample[n-1])
return m
N = 200
t_err_sample = np.random.choice(train_err_new, N)
m_SAA = SAA_model(t_err_sample)
pyo.SolverFactory('glpk').solve(m_SAA)
MPO_SAA = m_SAA.obj()
MPO_SAA
x1_hat_SAA = m_SAA.x1()
x1_hat_SAA
x2_hat_SAA = m_SAA.x2()
x2_hat_SAA
###Output
_____no_output_____
###Markdown
Deterministic Model
###Code
def det_model():
m = pyo.ConcreteModel()
# Decision variables
# first-stage variables
m.x1 = Var(within=NonNegativeReals, bounds=(l1,u1))
m.x2 = Var(within=NonNegativeReals, bounds=(l2,u2))
# second-stage variables
m.yA = Var(within=NonNegativeReals)
m.yB = Var(within=NonNegativeReals)
# Objective function
m.obj = Objective(expr = - 0.1 * m.x1 - 0.5 * m.x2 +\
3 * m.yA + 5 * m.yB, sense=pyo.maximize)
# Constraints
m.s1 = Constraint(expr = m.x1 + m.x2 <= 200)
m.s2 = Constraint(expr = m.x1 - 0.5 * m.x2 >= 0)
m.s3 = Constraint(expr = m.yA <= 8)
m.s4 = Constraint(expr = m.yB <= 12)
m.s5 = Constraint(expr = 3 * m.yA + 2 * m.yB <= 36)
m.s6 = Constraint(expr = m.yA + m.yB <= b0 + m.x1 * b1 + m.x2 * b2)
return m
m_det = det_model()
pyo.SolverFactory('glpk').solve(m_det)
MPO_det = m_det.obj()
MPO_det
x1_hat_det = m_det.x1()
x1_hat_det
x2_hat_det = m_det.x2()
x2_hat_det
###Output
_____no_output_____
###Markdown
Step 6. Validation
###Code
# Caveat: scipy.stats.chisquare expects non-negative frequency counts, while
# residuals can be negative; a two-sample test such as scipy.stats.ks_2samp
# would be the safer way to compare the two error distributions.
scipy.stats.chisquare(val_err, train_err).pvalue
###Output
_____no_output_____
###Markdown
SAA Model Model Validation Sample Average Estimate (MVSAE)
###Code
def val_model(x1_hat, x2_hat, error):
m = pyo.ConcreteModel()
# Decision variables
# second-stage variables
m.yA = Var(within=NonNegativeReals)
m.yB = Var(within=NonNegativeReals)
# Objective function
m.obj = Objective(expr = - 0.1 * x1_hat - 0.5 * x2_hat +\
3 * m.yA + 5 * m.yB, sense=pyo.maximize)
# Constraints
m.s3 = Constraint(expr = m.yA <= 8)
m.s4 = Constraint(expr = m.yB <= 12)
m.s5 = Constraint(expr = 3 * m.yA + 2 * m.yB <= 36)
m.s6 = Constraint(expr = m.yA + m.yB <= b0 + x1_hat * b1 + x2_hat * b2 + error)
return m
SAA_val_objs = []
M = 1000
for i in range(0, M):
err = np.random.choice(val_err_new)
m_SAA_val = val_model(x1_hat_SAA, x2_hat_SAA, err)
pyo.SolverFactory('glpk').solve(m_SAA_val)
SAA_val_objs.append(m_SAA_val.obj())
SAA_mean_obj = np.mean(SAA_val_objs)
SAA_se = np.std(SAA_val_objs)/sqrt(M)
print('Model Predicted Objective:', MPO_SAA)
print('95% CI Model Validation Sample Average Estimate:', [SAA_mean_obj - 1.96*SAA_se, SAA_mean_obj + 1.96*SAA_se])
###Output
Model Predicted Objective: 40.340002953123786
95% CI Model Validation Sample Average Estimate: [40.252318389720976, 40.72336787426406]
###Markdown
Deterministic Model Model Validation Sample Average Estimate (MVSAE)
###Code
det_val_objs = []
M = 1000
for i in range(0, M):
err = np.random.choice(val_err_new)
m_det_val = val_model(x1_hat_det, x2_hat_det, err)
pyo.SolverFactory('glpk').solve(m_det_val)
det_val_objs.append(m_det_val.obj())
det_mean_obj = np.mean(det_val_objs)
det_se = np.std(det_val_objs)/sqrt(M)
print('Model Predicted Objective:', MPO_det)
print('95% CI Model Validation Sample Average Estimate:', [det_mean_obj - 1.96*det_se, det_mean_obj + 1.96*det_se])
SAA_CI = [round(SAA_mean_obj - 1.96*SAA_se, 3), round(SAA_mean_obj + 1.96*SAA_se, 3)]
det_CI = [round(det_mean_obj - 1.96*det_se, 3), round(det_mean_obj + 1.96*det_se, 3)]
compare = pd.DataFrame({'Methodology':['Deterministic LP', 'SLP with SAA'],
'x1':[x1_hat_det, x1_hat_SAA],
'x2':[x2_hat_det, x2_hat_SAA],
'MPO':[MPO_det, MPO_SAA],
'MVSAE':[det_CI, SAA_CI]
})
compare
###Output
_____no_output_____
###Markdown
This project was originally intended to be developed in PyCharm, but due to technical difficulties and the need for fast GPU training, its development was moved to Google Colaboratory. The cell below asks you to mount your Google Drive so the outputs can be saved as downloadable files.
###Code
from google.colab import drive
drive.mount('/content/gdrive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/gdrive
###Markdown
In the cell below, all the packages and classes needed are imported.
###Code
import torch
import torch.nn as net
import torch.nn.functional as F
import torch.optim as optim
from torch.utils import data
from torchvision import transforms as tf, datasets as ds
from torch.optim.lr_scheduler import StepLR
###Output
_____no_output_____
###Markdown
In the cell below, the network class is implemented. It extends torch.nn.Module, the base class every PyTorch neural network must inherit from in order to function as desired. An overview of the network is given in the comments; further details are provided in a PDF file.
###Code
class Brain(net.Module): #inherits nn.Module
def __init__(self):
super(Brain, self).__init__()
        #Gets one image (acting as one feature map) as input, outputs 64 feature maps using a 3x3 convolution with stride 1.
        self.conv1 = net.Conv2d(1, 64, 3, 1)
        self.conv2 = net.Conv2d(64, 128, 3, 1) #Gets the 64 feature maps as input, outputs 128 feature maps using a 3x3 convolution with stride 1.
        #Dropout, with probability 0.25 and 0.5 of each node being dropped during training, to make the network
        #less prone to overfitting.
        self.dropout1 = net.Dropout2d(0.25)
        self.dropout2 = net.Dropout2d(0.5)
        #Two fully connected layers for classification. 18432 is the flattened feature count after the second convolution and max pooling (128 * 12 * 12).
        self.fc1 = net.Linear(18432, 256)
        self.fc2 = net.Linear(256, 10)
    def forward(self, x): #Apply convolutions and activations one by one.
        x = self.conv1(x) #First convolution
        x = F.relu(x) #Activation function
        x = self.conv2(x) #Second convolution
        x = F.relu(x) #Activation function
        x = F.max_pool2d(x, 2) #Max pooling with kernel size 2 and stride 2.
        x = self.dropout1(x) #Dropout for better generalization
        x = x.view(x.size(0), -1) #Fully connected layers expect a one-dimensional input per sample, so we flatten the feature maps.
        x = self.fc1(x) #Apply the first one for classification
        x = F.relu(x) #Activation function for classification
        x = self.dropout2(x) #Dropout for better generalization
        x = self.fc2(x) #Final classification, 10 outputs (one per digit)
        output = F.log_softmax(x, 1) #LogSoftmax along dim 1, for better numerical stability.
        return output
###Output
_____no_output_____
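###Markdown
A quick sanity check of where the 18432 in `fc1` comes from (assuming 28x28 MNIST inputs): conv1 takes 28 -> 26, conv2 takes 26 -> 24, and the 2x2 max pool takes 24 -> 12, so the flattened tensor has 128 * 12 * 12 = 18432 values.
###Code
import torch
import torch.nn as net
import torch.nn.functional as F

conv1 = net.Conv2d(1, 64, 3, 1)
conv2 = net.Conv2d(64, 128, 3, 1)
x = torch.zeros(1, 1, 28, 28)                         # one dummy MNIST-sized image
x = F.max_pool2d(F.relu(conv2(F.relu(conv1(x)))), 2)  # same spatial ops as Brain.forward
print(x.view(x.size(0), -1).shape)                    # torch.Size([1, 18432])
###Output
_____no_output_____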
###Markdown
In the cell below, we load the dataset. This function downloads the dataset, transforms it to tensors, normalizes it (the normalization mean and std are the global mean and std of the MNIST dataset) and shuffles it. Train and test phases are also separated.

Inputs:
1. root - where the data should be stored
2. batch_size - size of each mini-batch for SGD
3. phase - indicates whether to load the training or the test data
4. dataset_name - name of the dataset to be loaded. Default: MNIST
5. shuffle - whether or not to shuffle the dataset. Used to avoid possible overfitting.

Output:
* The loaded data.

Raises an exception if the dataset is not defined for loading, or if the phase is anything other than train or test.
###Code
def load_data(root, batch_size, phase, dataset_name='MNIST', shuffle=True):
if dataset_name == 'MNIST':
if phase == 'train':
            transform = tf.Compose([tf.ToTensor(), tf.Normalize((0.1307,), (0.3081,))]) #Transform pipeline: to tensor, then normalize with the global MNIST mean/std.
loaded_data = data.DataLoader(ds.MNIST(root=root, train=True, transform=transform, download=True), batch_size=batch_size, shuffle=shuffle)
return loaded_data
elif phase == 'test':
transform = tf.Compose([tf.ToTensor(), tf.Normalize((0.1307,), (0.3081,))])
loaded_data = data.DataLoader(ds.MNIST(root=root, train=False, transform=transform, download=True), batch_size=batch_size, shuffle=shuffle)
return loaded_data
else:
raise Exception('You can only train me, or test me based on my training!')
else:
raise Exception('Sorry, I have not received training on other datasets so far.')
n_epochs = 50 #Epoch number.
train_batch_size = 200 #Train batch size
test_batch_size = 1000 #Test batch size
learning_rate = 0.05
momentum = 0.6
log_interval = 10 #Intervals between printing the steps in training.
random_seed = 1 #random seed for manual seed
use_cuda = True #Using cuda GPU. Do not forget to set the environment to GPU (Runtime -> Change runtime type -> Hardware accelerator -> GPU)
torch.manual_seed(random_seed) #Seeding RNG (Random Number Generator) for CPU and GPU as pytorch does not guarantee reproducible result across its releases.
train_batch = load_data('train/', train_batch_size, 'train') #Train data
test_batch = load_data('test/', test_batch_size, 'test') #Test data
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to train/MNIST/raw/train-images-idx3-ubyte.gz
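###Markdown
In case you want to verify the hard-coded normalization constants (0.1307, 0.3081): they are the global mean and standard deviation of the MNIST training pixels, and can be recomputed with a sketch like the one below (optional; it stacks the full training set in memory).
###Code
import torch
from torchvision import transforms as tf, datasets as ds

raw = ds.MNIST(root='train/', train=True, transform=tf.ToTensor(), download=True)
pixels = torch.stack([img for img, _ in raw])     # shape (60000, 1, 28, 28)
print(pixels.mean().item(), pixels.std().item())  # approx. 0.1307, 0.3081
###Output
_____no_output_____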
###Markdown
The cell below initializes the lists that hold the training/testing accuracy counts and the corresponding example counters, used later for plotting. They are updated in each iteration.
###Code
train_correct = []
train_count = []
test_correct = []
test_count = [i * len(train_batch.dataset) for i in range(1, n_epochs + 1)] #x-positions for the 50 test evaluations, one per epoch.
###Output
_____no_output_____
###Markdown
Train and test functions. Details in comments.
###Code
def train(epoch):
correct_detection = 0
network.train() #this train comes from the super class
for batch_index, (data, labels) in enumerate(train_batch): #enumerating train batch to use all the things it contains.
if use_cuda and torch.cuda.is_available(): #Passing the data and labels to GPU if possible
data = data.cuda()
labels = labels.cuda()
        optimizer.zero_grad() #Zero the gradients in each iteration so they do not accumulate across batches
output = network(data) #Getting training output
loss = F.nll_loss(output, labels) #NLL loss
loss.backward() #Backpropagation to learn the parameters and get close to the global minimum
optimizer.step() #Updating weights
if batch_index % log_interval == 0: #Print where we stand
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_index * len(data), len(train_batch.dataset),
100. * batch_index / len(train_batch), loss.item()))
prediction = output.data.max(1, keepdim=True)[1] #The maximum value is the right classification
correct_detection = correct_detection + prediction.eq(labels.view_as(prediction)).sum().item() #Add the number of correct detections
train_correct.append(correct_detection)
train_count.append((batch_index * train_batch_size) + (epoch - 1) * (len(train_batch.dataset)))
def test():
network.eval() #Evaluation function comes from the superclass, to indicate we are in testing phase.
correct_detection = 0 #counting correct detections
test_loss = 0 #Average test loss after each epoch
    with torch.no_grad(): #Disable gradient tracking during evaluation to save memory and computation
for batch_index, (data, labels) in enumerate(test_batch):
if use_cuda and torch.cuda.is_available(): #Pass to GPU
data = data.cuda()
labels = labels.cuda()
output = network(data) #Classify
test_loss = test_loss + F.nll_loss(output, labels, reduction='sum').item() #Calculate sum of losses in each iteration
prediction = output.data.max(1, keepdim=True)[1] #The maximum value is the right classification
correct_detection = correct_detection + prediction.eq(labels.view_as(prediction)).sum().item() #Add the number of correct detections
test_loss = test_loss / len(test_batch.dataset)
test_correct.append(correct_detection/len(test_batch.dataset))
print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct_detection, len(test_batch.dataset),
100. * correct_detection / len(test_batch.dataset)))
network = Brain() #Create a new instance of the network
optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum) #Set optimizer to do stochastic gradient descent
scheduler = StepLR(optimizer, step_size = 1, gamma=0.7) #Scheduler decays the learning rate by a factor of gamma=0.7 after every epoch.
if use_cuda and torch.cuda.is_available(): #Use GPU if existent
network.cuda()
for i in range(1, n_epochs + 1): #Train/test loop
train(i)
test()
    scheduler.step() #Decay the learning rate for the next epoch
torch.save(network.state_dict(), '/content/gdrive/My Drive/model.pth') #Save the model
torch.save(optimizer.state_dict(), '/content/gdrive/My Drive/optimizer.pth') #save optimizer
print("Model's state_dict:")
for param_tensor in network.state_dict():
print(param_tensor, "\t", network.state_dict()[param_tensor].size())
import matplotlib.pyplot as plt
###Output
_____no_output_____
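###Markdown
A short sketch for reloading the checkpoints saved above in a later session (this assumes the same `Brain` class definition and the same Drive paths used above):
###Code
import torch

model = Brain()                                   # Brain as defined earlier in this notebook
model.load_state_dict(torch.load('/content/gdrive/My Drive/model.pth'))
model.eval()                                      # switch to evaluation mode
###Output
_____no_output_____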
###Markdown
Plotting the training accuracy curve
###Code
fig_train = plt.figure(figsize = (16,16))
plt.plot(train_count, train_correct, color = 'blue')
plt.legend(['Train Accuracy'], loc='upper right')
plt.xlabel('number of training example seen', fontsize=18)
plt.ylabel('training accuracy', fontsize=18)
plt.savefig('/content/gdrive/My Drive/train_plot.jpg')
fig_test = plt.figure(figsize = (16,16))
plt.plot(test_count, test_correct, color = 'red')
plt.legend(['Test Accuracy'], loc='upper right')
plt.xlabel('number of training example seen', fontsize=18)
plt.ylabel('test accuracy', fontsize=18)
plt.savefig('/content/gdrive/My Drive/test_plot.jpg')
###Output
_____no_output_____
###Markdown
Project 3 Instructions

In this project, you will solve a two-dimensional, two-phase flow (i.e. oil and water) reservoir simulation in a heterogeneous reservoir with multiple wells. Essentially, all of the functionality needed to do this was already implemented in [Homework Assignment 20](https://github.com/PGE323M-Students/assignment20). We will use real data from the Nechelik reservoir that we have looked at several times throughout the semester.

For this project, you should implement the class `Project3()` below, which inherits from `TwoPhaseFlow` (which inherits from `TwoDimReservoir` and `BuckleyLevertt`). You will need to implement some functionality to read the porosity and permeability information from a file as you did in [Project2](https://github.com/PGE323M-Students/project2). You will notice in [inputs.yml](inputs.yml) that these values take the filenames [`Nechelik_perm.dat`](Nechelik_perm.dat) and [`Nechelik_poro.dat`](Nechelik_poro.dat). These files have the permeability and porosity data, respectively, for each grid block. (You have probably already updated your [Homework Assignment 17](https://github.com/PGE323M-Students/assignment17) files to include this functionality and do not need to do this again.)

You will also need to use your solution from [Project 1](https://github.com/PGE323M-Students/project1) to create a file called `Nechelik_depth.dat` that can be read with a structure similar to the permeability and porosity files above and contains the depth of the reservoir at each grid block. Other than reading the data from a file, you may not need to write any additional code for your simulation to work. However, it might be a good idea to write a few plotting routines to produce some plots like this one to help you determine if your code is working correctly.

As you know, the actual Nechelik field has an irregular geometry as shown in the figure, with maximum $d = 100$ ft, $h = 5753$ ft and maximum $L = 7060.5$ ft. There are $N = 1188$ values in the data files corresponding to $N_x$ = 54 and $N_y$ = 22 grids to be used in the reservoir. The reservoir has constant properties $\mu_w = \mu_o = 1$ cp, $B_w = B_o = 1$, $c_o = c_w = 1 \times 10^{-5}$ psi$^{-1}$, an initial reservoir pressure of $p_{\mbox{initial}} = 3700$ psi, and an initial water saturation $S_{wi} = 0.2$. The Corey-Brooks properties are included in the [input.yml](input.yml) file.

The reservoir has the following wells

|**Well**|**Location** (ft, ft)|**Well type**|**Operating conditions** (ft$^3$/day or psi)|
|:-:|:-:|:-:|:-:|
|1| 5536, 3500| Constant BHP | 2000 |
|2| 5474, 4708| Constant BHP | 2000 |
|3| 3600, 4937| Constant BHP | 2000 |
|4| 2400, 3322| Constant BHP | 2000 |
|5| 2500, 4050| Constant rate water injector | 1000 |

All wells have a radius of $r_w = 0.25$ ft and negligible skin factor.

Suggestion

* Because the file [`Nechelik_poro.dat`](Nechelik_poro.dat) has zero porosity values for those grids outside the true reservoir domain, you may get a "divide by 0" error when updating the saturations with the explicit equation. Please take a look at this [StackOverflow answer](https://stackoverflow.com/questions/26248654/numpy-return-0-with-divide-by-zero) for a hint at how to avoid this. You may also need this functionality elsewhere in your code to avoid divide-by-zero errors.

Testing

There are no locally available tests for this project, but if your `TwoPhaseFlow` class passed all tests from [Homework Assignment 20](https://github.com/PGE323M-Students/assignment20) you can be reasonably assured it will work correctly.

Tests will be run on Github and you will receive feedback on whether they are passing or not upon submission. You can continue to resubmit until the deadline.

I encourage you to come up with your own tests as well. One thing you might do is change the Corey-Brooks parameters to mimic single-phase behavior, set the initial water saturation to $S_{wi} = 1.0$, and compare your results to the results of [Project2](https://github.com/PGE323M-Students/project2). While I have not worked a complete tutorial for this project in CMG, I did record a tutorial for solving the Buckley-Leverett problem in CMG here: http://youtu.be/zuCHYYxsFQg. If you combine what you learn in this tutorial with your work from [Homework Assignment 18](https://github.com/PGE323M-Students/assignment18), you should be able to solve this project in CMG.

**Please Note:** Unlike the single-phase examples we've compared previously, there may be small differences between what your project code produces and the results of CMG. This is because we have implemented an IMPES formulation whereas CMG uses a fully implicit solution scheme. However, they should be very close, especially for early times.

Backup Plan

If you cannot get this project to work, you may work the project in CMG for 1/2 credit. If you choose to do this, please add your CMG input file, named `project3.dat`, to this repository. These will be graded manually.
###Code
import matplotlib.pyplot as plt
import numpy as np
import scipy
from assignment20 import TwoPhaseFlow
class Project3(TwoPhaseFlow):
def __init__(self, inputs):
super().__init__(inputs)
###Output
_____no_output_____
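###Markdown
A hedged sketch of the file-reading functionality the instructions describe: the helper below is hypothetical (not the assignment's required API) and assumes the `.dat` files hold whitespace-delimited values, one per grid block, with $N_x = 54$ and $N_y = 22$.
###Code
import numpy as np

def read_grid_property(filename, nx=54, ny=22):
    # Hypothetical helper: read N = nx * ny = 1188 values and reshape to the grid
    values = np.loadtxt(filename).flatten()
    assert values.size == nx * ny
    return values.reshape(ny, nx)

# e.g. poro = read_grid_property('Nechelik_poro.dat')
###Output
_____no_output_____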
###Markdown
Project 3 CS5293 SP20 Matthew Huson

Note: as this was done for a class, the flow (define function, test function, with actual execution at the end) is to meet the requirements of the prompt.

Libraries
###Code
import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
###Output
_____no_output_____
###Markdown
Functions

Read In Json
###Code
def read_in(filename='yummly.json'):
'''
takes json filename as argument
returns dataframe of recipe ids, cuisines and ingredients
'''
df = pd.read_json(filename) #read_json turns all of the json dictionaries into a dataframe
return df
###Output
_____no_output_____
###Markdown
TEST
###Code
test_df = read_in() #get test dataframe
print(test_df.shape) #dataframe should have 3 columns and 39774 rows
test_df.head() #check contents of dataframe
###Output
(39774, 3)
###Markdown
De-Tokenize Ingredients
###Code
def de_tokenize(df):
'''
takes a dataframe as argument
returns same dataframe with additional column for joined ingredient lists
'''
l = [' '.join(recipe) for recipe in df['ingredients']] #get list of joined ingredient lists
df['ing_join'] = l #create new column in dataframe to hold them
return df
###Output
_____no_output_____
###Markdown
TEST
###Code
test_df = de_tokenize(test_df) #get de-tokenized dataframe
print(test_df.shape) #data frame should have 4 columns and 39774 rows
test_df.head() #check that new column is a string, not a list
###Output
(39774, 4)
###Markdown
Vectorize
###Code
def vectorize(df):
'''
accepts a dataframe as argument
returns a matrix of vectorized ingredients and array of cuisines
'''
vectorizer = TfidfVectorizer(min_df=2) #vectorize; setting min_df to 2 eliminates typos and one-offs
X = vectorizer.fit_transform(df['ing_join']) #fit_transform the joined ingredients lists
y = np.array(df['cuisine']) #get array of cuisine labels
return X,y #return matrix and labels
###Output
_____no_output_____
###Markdown
TEST
###Code
test_X,test_y = vectorize(test_df) #get matrix and array
print(test_X.shape) #matrix should have 39774 rows and some number of columns
print(test_y.shape) #array should have 39774 rows and no columns
###Output
(39774, 2459)
(39774,)
###Markdown
Train Classifier
###Code
def train_classifier(X,y):
'''
takes vector matrix and array of cuisine labels as arguments
fits kNN classifier to the data and returns the classifier
'''
classifier = KNeighborsClassifier(n_neighbors=11) #initialize classifier instance
classifier.fit(X,y) #fit to data
return classifier #return classifier
###Output
_____no_output_____
###Markdown
TEST
###Code
test_classifier = train_classifier(test_X,test_y) #get classifier
print(test_classifier) #check that classifier is KNeighborsClassifier with n_neighbors=11
###Output
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=None, n_neighbors=11, p=2,
weights='uniform')
###Markdown
Parse User Input
###Code
def parse_user_input(input_ingredients, df):
'''
takes user-input list of ingredients and the original dataframe as arguments
returns a dataframe with the user ingredients as the last row
'''
temp_df = df.copy() #create copy of original dataframe
joined_input = ' '.join(input_ingredients).strip() #join the list of ingredients into a string
new_row = [None,None,input_ingredients, joined_input] #create new row to add to df
temp_df.loc[len(temp_df['id'])] = new_row #add row to bottom of df
return temp_df #return new df
###Output
_____no_output_____
###Markdown
TEST
###Code
test_input = ['tomatoes', 'garlic', 'spaghetti'] #list of user input ingredients
test_t_df = parse_user_input(test_input, test_df) #get new dataframe
test_t_df.tail() #check that ingredients list has been added to the end
###Output
_____no_output_____
###Markdown
Classify Cuisine for Ingredients
###Code
def classify_ingredients(classifier,X):
'''
takes trained classifier and user input vector as arguments
returns string of cuisine prediction and probability
'''
cuisine = classifier.predict(X)[0] #get cuisine label prediction
    cuis_prob = classifier.predict_proba(X)[0].max() #get the % of nearest neighbors that share the cuisine label
out_string = f'Cuisine: {cuisine.capitalize()}\nProbability: {round(cuis_prob,2)}\n' #create return string
return out_string #return string
###Output
_____no_output_____
###Markdown
TEST
###Code
test_user_vec,y_null = vectorize(test_t_df) #vectorize the user input from before
test_user_vec = test_user_vec[-1]
print(classify_ingredients(test_classifier, test_user_vec)) #check that the predicted cuisine is Italian
###Output
Cuisine: Italian
Probability: 0.91
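###Markdown
With uniform weights and `n_neighbors=11`, `predict_proba` is just the fraction of the 11 nearest recipes sharing each label, so the 0.91 above corresponds to 10 of the 11 neighbors being Italian (10/11 ≈ 0.909, shown rounded).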
###Markdown
Get Closest Recipes
###Code
def closest_recipes(classifier,X,df,count=7):
'''
takes trained classifier, user input vector, original dataframe and count number as arguments
returns string of n=count closest recipes and their euclidean distances
'''
recipes = classifier.kneighbors(X, count) #get tuple of closest recipes and their distances
    out_string = f'Closest {count} recipes (Euclidean dist.): ' #initialize string to return
for distance, recipe in zip(recipes[0][0],recipes[1][0]): #zip recipe index to recipe distance
rid = df['id'][recipe] #find recipe id in original dataframe
out_string += f'{rid} ({round(distance,2)}), ' #concatenate string
return out_string #return string
###Output
_____no_output_____
###Markdown
TEST
###Code
print(closest_recipes(test_classifier,test_user_vec,test_df)) #check that the correct number of recipes are printed
###Output
Closest 7 recipes (Euclidean dist.): 41633 (0.77), 34842 (0.9), 32897 (0.91), 42475 (0.91), 28639 (0.93), 3050 (0.94), 24276 (0.94),
###Markdown
Main Function
###Code
def main(user_input):
df=read_in() #get yummly dataframe
df=de_tokenize(df) #add de-tokenized column to dataframe
t_df = parse_user_input(user_input,df) #get new dataframe with user data as final row
X,y = vectorize(t_df) #get training matrix and cuisine labels
X_train = X[:-1] #get training ingredients
X_test = X[-1] #get testing ingredients
y = y[:-1] #get training cuisines
classifier = train_classifier(X_train,y) #train classifier
cuisine = classify_ingredients(classifier,X_test) #get predicted cuisine
cuisine += closest_recipes(classifier,X_test,df) #concatenate nearest recipes
return cuisine #return string
###Output
_____no_output_____
###Markdown
Execution

Try it with a few recipes, and see how long it takes to classify them (time is worth investigating since the model is trained on each run).

Attempt for random ingredients
###Code
us_in = ['biryani masala', 'tortilla', 'cheese'] #get user input
%time print(main(us_in)) #print
###Output
Cuisine: Indian
Probability: 0.91
Closest 7 recipes (Euclidean dist.): 11272 (1.08), 46971 (1.1), 15491 (1.12), 33920 (1.15), 40306 (1.15), 43221 (1.16), 30678 (1.16),
Wall time: 1.65 s
###Markdown
Attempt for Chicken Marsala
###Code
us_in = ['chicken','flour','olive oil','butter','mushrooms','shallot','garlic','marsala wine','cream','thyme','parsley']
%time print(main(us_in))
###Output
Cuisine: Italian
Probability: 1.0
Closest 7 recipes (Euclidean dist.): 35703 (0.81), 20107 (0.86), 663 (0.87), 32785 (0.88), 6783 (0.88), 27118 (0.89), 14344 (0.92),
Wall time: 1.72 s
###Markdown
Attempt for Fried Green Tomatoes
###Code
us_in = ['eggs','flour','cornmeal','salt','pepper','green tomatoes']
%time print(main(us_in))
###Output
Cuisine: Southern_us
Probability: 1.0
Closest 7 recipes (Euclidean dist.): 12290 (0.51), 15266 (0.61), 35185 (0.68), 48273 (0.73), 38766 (0.73), 45470 (0.77), 40197 (0.78),
Wall time: 1.84 s
###Markdown
Background

Imports and Installs
###Code
!pip install netcdf4
!pip install pydap
!pip install wget
%pylab inline
import xarray as xr
import wget
import glob
from bs4 import BeautifulSoup
import requests
import pandas as pd
import datetime
import warnings
warnings.filterwarnings(action = "ignore", message = "^internal gelsd")
###Output
Collecting netcdf4
  Downloading https://files.pythonhosted.org/packages/35/4f/d49fe0c65dea4d2ebfdc602d3e3d2a45a172255c151f4497c43f6d94a5f6/netCDF4-1.5.3-cp36-cp36m-manylinux1_x86_64.whl (4.1MB)
     |████████████████████████████████| 4.1MB 2.8MB/s
Requirement already satisfied: numpy>=1.7 in /usr/local/lib/python3.6/dist-packages (from netcdf4) (1.17.5)
Collecting cftime
  Downloading https://files.pythonhosted.org/packages/53/35/e2fc52247871c51590d6660e684fdc619a93a29f40e3b64894bd4f8c9041/cftime-1.1.0-cp36-cp36m-manylinux1_x86_64.whl (316kB)
     |████████████████████████████████| 317kB 45.2MB/s
Installing collected packages: cftime, netcdf4
Successfully installed cftime-1.1.0 netcdf4-1.5.3
Collecting pydap
  Downloading https://files.pythonhosted.org/packages/9e/ad/01367f79b24015e223dd7679e4c9b16a6792fe5a9772e45e5f81b2c4a021/Pydap-3.2.2-py3-none-any.whl (2.3MB)
     |████████████████████████████████| 2.3MB 2.8MB/s
Requirement already satisfied: docopt in /usr/local/lib/python3.6/dist-packages (from pydap) (0.6.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pydap) (1.17.5)
Requirement already satisfied: six>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from pydap) (1.12.0)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (from pydap) (4.6.3)
Collecting Webob
  Downloading https://files.pythonhosted.org/packages/18/3c/de37900faff3c95c7d55dd557aa71bd77477950048983dcd4b53f96fde40/WebOb-1.8.6-py2.py3-none-any.whl (114kB)
     |████████████████████████████████| 122kB 46.0MB/s
Requirement already satisfied: Jinja2 in /usr/local/lib/python3.6/dist-packages (from pydap) (2.11.1)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from Jinja2->pydap) (1.1.1)
Installing collected packages: Webob, pydap
Successfully installed Webob-1.8.6 pydap-3.2.2
Collecting wget
  Downloading https://files.pythonhosted.org/packages/47/6a/62e288da7bcda82b935ff0c6cfe542970f04e29c756b0e147251b2fb251f/wget-3.2.zip
Building wheels for collected packages: wget
  Building wheel for wget (setup.py) ... done
  Created wheel for wget: filename=wget-3.2-cp36-none-any.whl size=9682 sha256=be9d697ed29620021e9a8f2f6f543a618a70a33e8b20ad94bcbbd2ef8353b871
  Stored in directory: /root/.cache/pip/wheels/40/15/30/7d8f7cea2902b4db79e3fea550d7d7b85ecb27ef992b618f3f
Successfully built wget
Installing collected packages: wget
Successfully installed wget-3.2
Populating the interactive namespace from numpy and matplotlib
###Markdown
Mount Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
Define Functions
###Code
# Create a function to find months JJA
def is_jja(month):
return (month >= 6) & (month <= 8)
###Output
_____no_output_____
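###Markdown
A quick illustrative check of `is_jja`, using pandas month numbers:
###Code
import pandas as pd

months = pd.date_range('2000-01-01', periods=12, freq='MS').month
print(is_jja(months))   # True only for June, July and August
###Output
_____no_output_____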
###Markdown
Enter the directory where you will save and load files.
###Code
YOUR_DIRECTORY = '/content/drive/My Drive/Colab Notebooks/ATMS597/'
###Output
_____no_output_____
###Markdown
Part 1: Aggregate daily rainfall data from the Global Precipitation Climatology Project 1-degree daily precipitation dataset over the period 1996 - 2019 into a single file from the daily files available here: [https://www.ncei.noaa.gov/data/global-precipitation-climatology-project-gpcp-daily/access/].
###Code
url = 'https://www.ncei.noaa.gov/data/global-precipitation-climatology-project-gpcp-daily/access/'
ext = 'nc'
def get_url_paths(url, ext='', params={}):
response = requests.get(url, params=params)
if response.ok:
response_text = response.text
else:
return response.raise_for_status()
soup = BeautifulSoup(response_text, 'html.parser')
parent = [url + node.get('href') for node in soup.find_all('a') if node.get('href').endswith(ext)]
return parent
# Loop through all years and grab all of the datasets
years = np.arange(1996,2020)
datasets = []
for i in years:
result = get_url_paths(url+'{i}/'.format(i=i),ext)
print('working on {i} '.format(i=i))
for j in range(len(result)):
wget.download(result[j])
files = glob.glob('gpcp*.nc')
f = xr.open_mfdataset(files,concat_dim='time')
var = xr.DataArray(f.precip.values,dims=['time','lat','lon'],
coords={'time':f.time.values,
'lat':f.latitude.values,
'lon':f.longitude.values})
datasets.append(var)
!rm gpcp*.nc
#break
# Concatenate the datasets along time dimension
combined = xr.concat(datasets,dim='time')
# Convert to xarray dataset
combined_data = combined.to_dataset(name='precip')
# Convert to netCDF and save
combined_data.to_netcdf(YOUR_DIRECTORY + '/GPCP_aggregate.nc', format='NETCDF4')
###Output
_____no_output_____
###Markdown
Part 2: Determine the 95th-percentile value of daily precipitation during a selected 3-month period (given in the table below by group) over the grid box closest to the city you are examining. Plot a cumulative distribution function of all daily precipitation values and illustrate the 95th-percentile value of daily precipitation in millimeters.
###Code
# Open the combined dataset
combined_data = xr.open_dataset(YOUR_DIRECTORY + 'GPCP_aggregate.nc')
# Slice data for June, July, and August only
jja_data = combined_data.sel(time=is_jja(combined_data['time.month']))
# Find data point: Shanghai, China lat, lon 31.2304° N, 121.4737° E
slat = 31.2304
slon = 121.4737
shanghai_jja = jja_data.sel(lon=slice(slon-.5,slon+.5),lat=slice(slat-.5,slat+.5))
# Plot the Shanghai Precip values
plt.pcolormesh(shanghai_jja.precip.values[0,:,:])
plt.colorbar()
plt.show()
# Find valid values (remove obvious incorrect values)
valid_ind = np.where((shanghai_jja.precip.values>=0.)&(shanghai_jja.precip.values<=1000.))
# Extract valid values
precip_shanghai = shanghai_jja.precip.values[valid_ind]
# Calculate 95 percentile
perc_95 = np.percentile(precip_shanghai,95)
# Plot
# Plotting parameters
mpl.rcParams['xtick.major.size'] = 14
mpl.rcParams['xtick.major.width'] = 1
mpl.rcParams['xtick.minor.size'] = 14
mpl.rcParams['xtick.minor.width'] = 1
mpl.rcParams['ytick.major.size'] = 14
mpl.rcParams['ytick.major.width'] = 1
mpl.rcParams['ytick.minor.size'] = 14
mpl.rcParams['ytick.minor.width'] = 1
mpl.rcParams['font.family'] = 'sans-serif'
mpl.rcParams['font.serif'] = ['Helvetica']
mpl.rc('xtick',labelsize=16) # Formatting the x ticks
mpl.rc('ytick',labelsize=16)
# Plotting
counts, bin_edges = np.histogram(precip_shanghai, bins=50, density=True)
cdf = np.cumsum(counts)
plt.figure(figsize=(15,10))
plt.plot(bin_edges[1:], cdf/cdf[-1],'r',label='CDF')
plt.axvline(perc_95,c='b',ls='--',label='$95^{th}$ Percentile')
plt.xlabel('GPCP Daily Precipitation (mm)',fontsize=20)
plt.ylabel('Likelihood of Occurrence',fontsize=20)
plt.legend(loc='upper left',fontsize=16)
plt.show()
# Create Dataset for Shanghai significant points
shanghai_95th = shanghai_jja.where(shanghai_jja.precip>=perc_95,drop=True)
###Output
_____no_output_____
###Markdown
Part 3: Using output from the NCEP Reanalysis [https://journals.ametsoc.org/doi/pdf/10.1175/1520-0477(1996)077%3C0437%3ATNYRP%3E2.0.CO%3B2] (Kalnay et al. 1996), compute the global mean fields and seasonal anomaly fields for days meeting and exceeding the threshold of precipitation calculated in the previous step (using 1981-2010 as the base period for anomalies) of:

* 250 hPa wind vectors and wind speed,
* 500 hPa winds and geopotential height,
* 850 hPa temperature, specific humidity, and winds,
* skin temperature and surface winds,
* total atmospheric column water vapor.

Create functions to grab the data for all respective variables.

The first two functions grab the long-term mean data (frequency: monthly) from 1981-2010. This is already derived in the NCEP Reanalysis datasets. These functions take the arguments: variable (name), and level for the upper levels.

The second two functions grab the daily averaged data for a selected year. These functions take the arguments: variable (name), year of interest, and level for the upper levels.

Possible Variables

Surface:
* air_sfc
* uwnd_sfc
* vwnd_sfc
* pr_wtr

Upper level:
* uwnd
* vwnd
* wspd
* hgt
* air
* shum

These variables must be strings.
###Code
# Season of JJA: indices 5, 6 and 7 select June, July and August from the 0-indexed monthly climatology
JJA = [5, 6, 7]
# File strings for functions
filepath = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis'
fileperiod = '.1981-2010.ltm.nc'
def grab_LTM_data_sfc(variable):
# Surface
if variable == 'air_sfc':
var = xr.open_dataset(filepath + '.derived/surface/air.sig995.mon' + str(fileperiod), engine='netcdf4').isel(nbnds=0)
elif variable == 'uwnd_sfc':
var = xr.open_dataset(filepath + '.derived/surface/uwnd.sig995.mon' + str(fileperiod), engine='netcdf4').isel(nbnds=0)
elif variable == 'vwnd_sfc':
var = xr.open_dataset(filepath + '.derived/surface/vwnd.sig995.mon' + str(fileperiod), engine='netcdf4').isel(nbnds=0)
elif variable == 'pr_wtr':
var = xr.open_dataset(filepath + '.derived/surface/pr_wtr.eatm.mon' + str(fileperiod), engine='netcdf4').isel(nbnds=0)
else:
print("Error. Please recheck argument inputs.")
# Returns 2D Field
return var
def grab_LTM_data(variable, lvl):
var = xr.open_dataset(filepath + '.derived/pressure/' + str(variable) + '.mon' + str(fileperiod), engine='netcdf4').isel(nbnds=0)
x = var.sel(level=[lvl])
# Returns 2D Field
return x
def grab_yearly_data_sfc(variable, year):
# Surface
if variable == 'air_sfc':
var = xr.open_dataset(filepath + '.dailyavgs/surface/air.sig995.' + str(year) + '.nc', engine='netcdf4')
elif variable == 'uwnd_sfc':
var = xr.open_dataset(filepath + '.dailyavgs/surface/uwnd.sig995.' + str(year) + '.nc', engine='netcdf4')
elif variable == 'vwnd_sfc':
var = xr.open_dataset(filepath + '.dailyavgs/surface/vwnd.sig995.' + str(year) + '.nc', engine='netcdf4')
elif variable == 'pr_wtr':
var = xr.open_dataset(filepath + '.dailyavgs/surface/pr_wtr.eatm.' + str(year) + '.nc', engine='netcdf4')
else:
print("Error. Please recheck argument inputs.")
# Returns 2D Field
return var
def grab_yearly_data(variable, lvl, year):
var = xr.open_dataset(filepath + '.dailyavgs/pressure/' + str(variable) + '.' + str(year) + '.nc', engine='netcdf4')
x = var.sel(level=[lvl])
# Returns 2D Field
return x
###Output
_____no_output_____
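###Markdown
Example calls for the loaders above (these open remote datasets over OPeNDAP, so they need network access to the ESRL THREDDS server):
###Code
u250_ltm = grab_LTM_data('uwnd', 250)            # monthly 1981-2010 climatology at 250 hPa
u250_1998 = grab_yearly_data('uwnd', 250, 1998)  # daily averages for 1998 at 250 hPa
print(u250_ltm)
###Output
_____no_output_____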
###Markdown
Compute means

* Long-term seasonal mean
* Extreme precip days mean
* Seasonal anomalies mean
###Code
def calculate_means(variable, lvl, season):
dataset = []
dataset1 = []
dataset2 = []
t = pd.to_datetime(shanghai_95th.time.values)
for i in range(len(shanghai_95th.time.values)):
# Grab data for a specific year
field = grab_yearly_data(variable, lvl, shanghai_95th.time.dt.year[i].values)
# Select extreme precip day
extreme = field.sel(time = field.time.dt.dayofyear == t[i].timetuple().tm_yday).squeeze()
# Select seasonal long term mean
ltm = grab_LTM_data(variable, lvl).isel(time=season).mean(dim='time').squeeze()
# if variable == 'air': # Change from Celsius to Kelvin
# ####
# elif variable == 'shum': # Change specific humidity from g/kg to kg/kg
# ####
# Calculate anomaly
anomaly = extreme[variable] - ltm[variable]
# Append
dataset.append(anomaly)
dataset1.append(extreme[variable])
dataset2.append(ltm[variable])
anomalies = xr.concat(dataset, dim='index').mean(dim="index")
extremes = xr.concat(dataset1, dim='index').mean(dim="index")
ltms = xr.concat(dataset2, dim='index').mean(dim="index")
return anomalies, extremes, ltms
def calculate_means_sfc(variable, season):
dataset = []
dataset1 = []
dataset2 = []
t = pd.to_datetime(shanghai_95th.time.values)
for i in range(len(shanghai_95th.time.values)):
# Grab data for a specific year
field = grab_yearly_data_sfc(variable, shanghai_95th.time.dt.year[i].values)
# Select extreme precip day
extreme = field.sel(time = field.time.dt.dayofyear == t[i].timetuple().tm_yday).squeeze()
# Select seasonal long term mean
ltm = grab_LTM_data_sfc(variable).isel(time=season).mean(dim='time').squeeze()
# if variable == 'air': # Change from Celsius to Kelvin
# ####
# Calculate anomaly
anomaly = extreme[variable] - ltm[variable]
# Append
dataset.append(anomaly)
dataset1.append(extreme[variable])
dataset2.append(ltm[variable])
anomalies = xr.concat(dataset, dim='index').mean(dim="index")
extremes = xr.concat(dataset1, dim='index').mean(dim="index")
ltms = xr.concat(dataset2, dim='index').mean(dim="index")
return anomalies, extremes, ltms
###Output
_____no_output_____
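###Markdown
In other words, for each extreme-precipitation day $d$ the functions above form the anomaly $A_d = X_d - \bar{X}_{\mathrm{JJA}}$, where $\bar{X}_{\mathrm{JJA}}$ is the 1981-2010 long-term JJA mean, and then average $A_d$ (along with $X_d$ and $\bar{X}_{\mathrm{JJA}}$) over all extreme days.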
###Markdown
Save anomalies to .nc file

Pick a level and variable to calculate. This saves the three fields (anomaly, extreme-day composite, and long-term mean) to your directory of interest.
###Code
level = 250
variable = 'uwnd'
temp = calculate_means(variable, level, JJA)
which_mean = ['anomaly', 'extreme', 'ltm']
for a in range(len(temp)):
combined_data = temp[a].to_dataset(name=variable)
combined_data.to_netcdf(YOUR_DIRECTORY + str(variable) + '_' + str(level) + '_' + str(which_mean[a]) + '.nc', format='NETCDF4')
###Output
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
/usr/local/lib/python3.6/dist-packages/xarray/coding/times.py:426: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
dtype = _decode_cf_datetime_dtype(data, units, calendar, self.use_cftime)
/usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py:85: SerializationWarning: Unable to decode time axis into full numpy.datetime64 objects, continuing using cftime.datetime objects instead, reason: dates out of range
return array(a, dtype, copy=False, order=order)
###Markdown
Part 4: Create maps showing the mean fields for the extreme precipitation day composites, the long-term mean composites for the selected months, and the anomaly fields for each variable. Use contours and vectors when appropriate; a contour/vector sketch follows the quick-look plots below.
###Code
# Import all of the necessary files
import xarray as xr  # needed for xr.open_dataset

# YOUR_DIRECTORY must point to the folder containing the netCDF files
uwnd_250_anomaly = xr.open_dataset(YOUR_DIRECTORY + 'uwnd_250_anomaly.nc')
uwnd_250_extreme = xr.open_dataset(YOUR_DIRECTORY + 'uwnd_250_extreme.nc')
uwnd_250_ltm = xr.open_dataset(YOUR_DIRECTORY + 'uwnd_250_ltm.nc')

# Quick-look plots of the anomaly, extreme-day composite, and long-term mean fields
uwnd_250_anomaly['uwnd'].plot()
uwnd_250_extreme['uwnd'].plot()
uwnd_250_ltm['uwnd'].plot()
###Output
_____no_output_____
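###Markdown
The `.plot()` calls above only give quick-look images. For the maps this part asks for, filled contours (and vectors, when both wind components are available) work better. Below is a minimal matplotlib sketch, assuming the anomaly field is 2-D with `lat`/`lon` coordinates; the `vwnd_250_anomaly.nc` file referenced in the commented lines is hypothetical and only needed for the vector overlay.
###Code
import numpy as np
import matplotlib.pyplot as plt

# Contour map of the u-wind anomaly composite (assumes lat/lon coordinate names)
u = uwnd_250_anomaly['uwnd'].squeeze()
lons, lats = np.meshgrid(u['lon'], u['lat'])

fig, ax = plt.subplots(figsize=(10, 5))
cf = ax.contourf(lons, lats, u, levels=20, cmap='RdBu_r')
fig.colorbar(cf, ax=ax, label='250 hPa u-wind anomaly (m/s)')

# If a matching v-wind anomaly file exists (hypothetical), overlay wind vectors:
# v = xr.open_dataset(YOUR_DIRECTORY + 'vwnd_250_anomaly.nc')['vwnd'].squeeze()
# ax.quiver(lons[::5, ::5], lats[::5, ::5], u.values[::5, ::5], v.values[::5, ::5])

ax.set_xlabel('Longitude')
ax.set_ylabel('Latitude')
ax.set_title('Extreme-day 250 hPa u-wind anomaly')
plt.show()
###Output
_____no_output_____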
###Markdown
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install pandas-profiling==2.*
!pip install eli5
!pip install catboost
from google.colab import files
uploaded = files.upload()
import pandas as pd
df = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
###Output
_____no_output_____
###Markdown
Data Exploration
###Code
df.shape, test.shape
df.head()
test.head()
df.info()
df.T.duplicated()  # check for duplicated columns (duplicated rows of the transposed frame)
[(x, df[x].isnull().sum()) for x in df.columns if df[x].isnull().any()]  # columns with missing values and their counts
df.head(5)
from pandas_profiling import ProfileReport
profile = ProfileReport(df, title="Pandas Profiling Report")
# profile.to_notebook_iframe()  # renders the report inline (pandas-profiling >= 2.4)
df['Class'].unique()
columns_name = df.columns.tolist()
print(columns_name)
df['satisfaction'].value_counts(normalize=True)
import seaborn as sns
sns.countplot(x=df['satisfaction'])
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
df.head()
def clean_df(df):
    # Combine departure and arrival delays into a single feature
    df['Delay_Minutes'] = df['Departure_Delay_in_Minutes'] + df['Arrival_Delay_in_Minutes']
    df.drop(labels=['Unnamed: 0', 'id', 'Departure_Delay_in_Minutes', 'Arrival_Delay_in_Minutes'], axis=1, inplace=True)
    # Fill missing values
    df['Delay_Minutes'] = df['Delay_Minutes'].fillna(0)
    # Encode categorical variables
    df["Gender"] = df["Gender"].map({'Male': 0, 'Female': 1})
    df['Customer_Type'] = df['Customer_Type'].map({'Loyal Customer': 0, 'disloyal Customer': 1})
    df['Type_of_Travel'] = df['Type_of_Travel'].map({'Personal Travel': 0, 'Business travel': 1})
    df['Class'] = df['Class'].map({'Eco': 0, 'Eco Plus': 1, 'Business': 2})
    df['satisfaction'] = df['satisfaction'].map({'neutral or dissatisfied': 0, 'satisfied': 1})
    # Rename columns
    df.rename(columns={'Type_of_Travel': 'Travel_Type', 'Flight_Distance': 'Distance',
                       'Inflight_wifi_service': 'wifi_service',
                       'Departure_Arrival_time_convenient': 'time_convenient',
                       'Ease_of_Online_booking': 'Online_booking',
                       'Food_and_drink': 'Food_drink',
                       'Inflight_entertainment': 'entertainment'}, inplace=True)
clean_df(df)
clean_df(test)
[(x, df[x].isnull().sum()) for x in df.columns if df[x].isnull().any()]
[(x, test[x].isnull().sum()) for x in test.columns if test[x].isnull().any()]
df.head()
from sklearn.model_selection import train_test_split
X = df[['Gender', 'Travel_Type', 'Customer_Type', 'Class', 'Age', 'wifi_service', 'Online_booking', 'entertainment', 'Checkin_service', 'Inflight_service', 'time_convenient']]  # feature columns (the target 'satisfaction' is excluded)
y=df['satisfaction']
X_train, X_val, y_train, y_val = train_test_split(X,y,
test_size=0.3,
random_state=2)
X_test = test[['Gender', 'Travel_Type', 'Customer_Type', 'Class', 'Age', 'wifi_service', 'Online_booking', 'entertainment', 'Checkin_service', 'Inflight_service', 'time_convenient']]
y_test = test['satisfaction']
X_train.shape, X_val.shape, X_test.shape
X_train.head()
###Output
_____no_output_____
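###Markdown
Before training, a majority-class baseline gives a reference point for the models below (a small optional check, not in the original notebook):
###Code
# Baseline: always predict the most frequent class in the training labels
baseline_acc = y_train.value_counts(normalize=True).max()
print('majority-class baseline accuracy: %.3f' % baseline_acc)
###Output
_____no_output_____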
###Markdown
Training: LGBMClassifier
###Code
from lightgbm import LGBMClassifier
model =LGBMClassifier(colsample_bytree= 0.85,
max_depth= 15,
min_split_gain= 0.1,
n_estimators= 200,
num_leaves= 50,
reg_alpha= 1.2,
reg_lambda= 1.2,
subsample= 0.95,
subsample_freq= 20)
eval_set = [(X_train, y_train),
(X_val, y_val)]
model.fit(X_train, y_train,
eval_set=eval_set,
#eval_metric=error,
early_stopping_rounds=50
)
from sklearn.metrics import classification_report
y_pred = model.predict(X_train)
print(classification_report(y_train, y_pred))
y_pred2 = model.predict(X_val)
print(classification_report(y_val, y_pred2))
###Output
_____no_output_____
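###Markdown
A confusion matrix complements the classification report above by showing exactly where the validation misclassifications fall (a small optional check, reusing `y_pred2` from the previous cell):
###Code
from sklearn.metrics import confusion_matrix

# Rows are true classes, columns are predicted classes
# (0 = neutral or dissatisfied, 1 = satisfied)
print(confusion_matrix(y_val, y_pred2))
###Output
_____no_output_____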
###Markdown
Training: XGBClassifier
###Code
from xgboost import XGBClassifier
model2 = XGBClassifier(
    n_estimators=1000,  # up to 1000 trees; early stopping will cut training short
    max_depth=7,        # default=3; raised to handle high-cardinality features
    learning_rate=0.2,
    # scale_pos_weight=ratio,  # apply the class ratio when the data is imbalanced
    n_jobs=-1
)
eval_set = [(X_train, y_train),
(X_val, y_val)]
model2.fit(X_train, y_train,
eval_set=eval_set,
eval_metric='error', # #(wrong cases)/#(all cases)
early_stopping_rounds=50
)
y_pred_1 = model2.predict(X_train)
print(classification_report(y_train, y_pred_1))

y_pred_2 = model2.predict(X_val)
print(classification_report(y_val, y_pred_2))
!pip install catboost
###Output
_____no_output_____
###Markdown
Training: CatBoostClassifier
###Code
from catboost import CatBoostClassifier
model3=CatBoostClassifier(n_estimators=1000,
learning_rate= 0.15513836571095696,
max_depth= 10,
random_state=10)
eval_set = [(X_train, y_train),
(X_val, y_val)]
model3.fit(X_train, y_train,
eval_set=eval_set,
#eval_metric='error', # #(wrong cases)/#(all cases)
early_stopping_rounds=50
)
y_pred_3 = model3.predict(X_train)
print(classification_report(y_train, y_pred_3))

y_pred_4 = model3.predict(X_val)
print(classification_report(y_val, y_pred_4))
###Output
precision recall f1-score support
0 0.95 0.97 0.96 17668
1 0.96 0.94 0.95 13504
accuracy 0.95 31172
macro avg 0.95 0.95 0.95 31172
weighted avg 0.95 0.95 0.95 31172
###Markdown
Comparing Evaluation Metrics

Validation-set results for the three models (class 0 = neutral or dissatisfied, class 1 = satisfied):

Model | Class | Precision | Recall | F1-score | Support
--- | --- | --- | --- | --- | ---
LightGBM | 0 | 0.95 | 0.97 | 0.96 | 17668
LightGBM | 1 | 0.96 | 0.94 | 0.95 | 13504
XGBoost | 0 | 0.95 | 0.97 | 0.96 | 17668
XGBoost | 1 | 0.96 | 0.94 | 0.95 | 13504
CatBoost | 0 | 0.95 | 0.97 | 0.96 | 17668
CatBoost | 1 | 0.96 | 0.94 | 0.95 | 13504

Accuracy is 0.95 for all three models (31172 validation samples).

**Since the evaluation metrics are practically identical, the faster LightGBM is chosen.**
###Code
y_pred3 = model.predict(X_test)
print(classification_report(y_test, y_pred3))
###Output
precision recall f1-score support
0 0.95 0.97 0.96 14573
1 0.95 0.94 0.95 11403
accuracy 0.95 25976
macro avg 0.95 0.95 0.95 25976
weighted avg 0.95 0.95 0.95 25976
###Markdown
cf.) Permutation Importance
###Code
!pip install eli5
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import eli5
from eli5.sklearn import PermutationImportance
# Define the permuter
permuter = PermutationImportance(
    model,               # the fitted model
    scoring='accuracy',  # metric
    n_iter=5,            # repeat 5 times with different random seeds
    random_state=2
)
# Permutation importance is computed on the (already preprocessed) X_val
#X_val_transformed = pipe.named_steps['preprocessing'].transform(X_val)

# This 'fit' call recomputes scores by permuting features rather than retraining the model
permuter.fit(X_val, y_val);
feature_names = X_val.columns.tolist()

eli5.show_weights(
    permuter,
    top=None,                    # show the top n features; None shows all of them
    feature_names=feature_names  # must be passed as a list
)
###Output
_____no_output_____
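###Markdown
`eli5.show_weights` renders an HTML table; for programmatic use, the raw importances computed by the permuter can also be pulled into a pandas Series (a small convenience sketch, not part of the original notebook):
###Code
import pandas as pd

# feature_importances_ holds the mean accuracy drop observed when each feature is permuted
importances = pd.Series(permuter.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False))
###Output
_____no_output_____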
###Markdown
Saving the Model
###Code
import pickle
import joblib  # sklearn.externals.joblib is deprecated; use the standalone joblib package

# Save and reload the model with pickle
pickle.dump(model, open('light.pkl', 'wb'))
saved_model = pickle.dumps(model)
clf_from_pickle = pickle.loads(saved_model)
clf_from_pickle.predict(X)

# Save and reload the model with joblib (more efficient for models holding large numpy arrays)
joblib.dump(model, 'light.pkl')
clf_from_joblib = joblib.load('light.pkl')
clf_from_joblib.predict(X)
###Output
_____no_output_____
###Markdown
Hand Recognition

Filipe F. Borba
Franklin W. Olin College of Engineering
Data Science, Prof. Allen Downey

Machine Learning is very useful for a variety of real-life problems. It is commonly used for tasks such as classification, recognition, detection and prediction. Moreover, it is an efficient way to automate processes that rely on data. The basic idea is to use data to produce a model capable of returning an output: given a new input, the model should return the right answer, and it can also produce predictions about the known data.

The goal of this project is to train a Machine Learning algorithm capable of classifying images of different hand gestures, such as a fist, palm, showing the thumb, and others. With this, I'll be able to understand more about this field and create my own program that fits the data I have. This particular classification problem can be useful for [Gesture Navigation](https://www.youtube.com/watch?v=Lbma7c55wf8), for example. The method I'll be using is Deep Learning.

Deep Learning is part of a broader family of machine learning methods. It is based on the use of layers that process the input data, extracting features from them and producing a mathematical model. The creation of this 'model' will become clearer in the next section. In this specific project, we aim to classify different images of hand gestures, which means the computer will have to "learn" the features of each gesture and classify them correctly. For example, if it is given an image of a hand doing a thumbs up gesture, the output of the model needs to be "the hand is doing a thumbs up gesture".

Note: This project was developed using the Google Colab environment.
###Code
# Here we import everything we need for the project
%matplotlib inline
#from google.colab import files
import os
# TensorFlow and tf.keras
import tensorflow as tf
#from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import cv2
import pandas as pd
# Sklearn
from sklearn.model_selection import train_test_split # Helps with organizing data for training
from sklearn.metrics import confusion_matrix # Helps present results as a confusion-matrix
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Loading Data

This project uses the [Hand Gesture Recognition Database](https://www.kaggle.com/gti-upm/leapgestrecog/version/1) (citation below) available on Kaggle. It contains 20000 images with different hands and hand gestures. There is a total of 10 hand gestures of 10 different people presented in the dataset. There are 5 female subjects and 5 male subjects. The images were captured using the Leap Motion hand tracking device.

>Hand Gesture | Label used
>--- | ---
> Thumb down | 0
> Palm (Horizontal) | 1
> L | 2
> Fist (Horizontal) | 3
> Fist (Vertical) | 4
> Thumbs up | 5
> Index | 6
> OK | 7
> Palm (Vertical) | 8
> C | 9

Table 1 - Classification used for every hand gesture.

T. Mantecón, C.R. del Blanco, F. Jaureguizar, N. García, “Hand Gesture Recognition using Infrared Imagery Provided by Leap Motion Controller”, Int. Conf. on Advanced Concepts for Intelligent Vision Systems, ACIVS 2016, Lecce, Italy, pp. 47-57, 24-27 Oct. 2016. (doi: 10.1007/978-3-319-48680-2_5)

Overview:
- Load images
- Some validation
- Preparing the images for training
- Use of train_test_split
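For convenience when inspecting predictions later, the label-to-gesture mapping from Table 1 can be kept in a small dictionary (a helper added here for illustration, not part of the original dataset code):
###Code
# Label-to-gesture mapping, following Table 1
GESTURES = {
    0: "Thumb down", 1: "Palm (Horizontal)", 2: "L", 3: "Fist (Horizontal)",
    4: "Fist (Vertical)", 5: "Thumbs up", 6: "Index", 7: "OK",
    8: "Palm (Vertical)", 9: "C",
}
###Output
_____no_output_____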
###Code
# Unzip images, ignore this cell if files are already in the workspace
#!unzip leapGestRecog.zip
# We need to get all the paths for the images to later load them
imagepaths = []
# Go through all the files and subdirectories inside a folder and save path to images inside list
for root, dirs, files in os.walk("C:/Users/admin/Desktop/HandRecognition-master/leapGestRecog/", topdown=False):
for name in files:
path = os.path.join(root, name)
if path.endswith("png"): # We want only the images
imagepaths.append(path)
print(len(imagepaths)) # If > 0, then a PNG image was loaded
# This function is used more for debugging and showing results later. It plots the image into the notebook
def plot_image(path):
    img = cv2.imread(path) # Reads the image into a numpy.array
    img_cvt = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Converts to grayscale
    print(img_cvt.shape) # Prints the shape of the image just to check
    plt.grid(False) # Without grid so we can see better
    plt.imshow(img_cvt) # Shows the image
    plt.xlabel("Width")
    plt.ylabel("Height")
    plt.title("Image " + path)
plot_image(imagepaths[0]) #We plot the first image from our imagepaths array
###Output
_____no_output_____
###Markdown
Now that we have loaded the images and checked that everything is as expected, we have to prepare them for training the algorithm. We load all the images into an array that we will call **X** and all the labels into another array called **y**.
###Code
X = [] # Image data
y = [] # Labels
# Loops through imagepaths to load images and labels into arrays
for path in imagepaths:
    img = cv2.imread(path) # Reads image and returns np.array
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Converts to grayscale
    img = cv2.resize(img, (320, 120)) # Reduce image size so training can be faster
    X.append(img)
    # Extract the label from the image path: the gesture folder name (e.g. "01_palm") encodes it
    category = os.path.basename(os.path.dirname(path))
    label = int(category.split("_")[0][1]) # "01" -> 1; taking the second digit maps "10" to 0, or else it crashes
    y.append(label)
# Turn X and y into np.array to speed up train_test_split
X = np.array(X, dtype="uint8")
X = X.reshape(len(imagepaths), 120, 320, 1) # Needed to reshape so CNN knows it's different images
y = np.array(y)
print("Images loaded: ", len(X))
print("Labels loaded: ", len(y))
print(y[0], imagepaths[0]) # Debugging
###Output
_____no_output_____
###Markdown
Scikit-learn's train_test_split allows us to split our data into a training set and a test set. The training set will be used to build our model, and the test data will then be used to check if our predictions are correct. A random_state seed is used so the randomness of our results can be reproduced. The function also shuffles the images it uses, which prevents the ordering of the files from biasing the split.
###Code
ts = 0.3 # Percentage of images that we want to use for testing. The rest is used for training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=ts, random_state=42)
###Output
_____no_output_____
###Markdown
Creating Model

To simplify the idea of the model being constructed here, we're going to use the concept of Linear Regression. By using linear regression, we can create a simple model and represent it using the equation ```y = ax + b```. ```a``` and ```b``` (slope and intercept, respectively) are the parameters that we're trying to find. By finding the best parameters, for any given value of x, we can predict y. A tiny numeric sketch of this idea follows the overview below. This is the same idea here, but much more complex, with the use of Convolutional Neural Networks.

A Convolutional Neural Network (ConvNet/CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image and be able to differentiate one from the other. The pre-processing required in a ConvNet is much lower as compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, ConvNets have the ability to learn these filters/characteristics.

Figure 1 - Example of Convolutional Neural Network.

From Figure 1 and imagining the Linear Regression model equation that we talked about, we can imagine that the input layer is x and the output layer is y. The hidden layers vary from model to model, but they are used to "learn" the parameters for our model. Each one has a different function, but they work towards getting the best "slope and intercept".

Overview:
- Import what we need
- Creation of CNN
- Compiling and training model
- Saving model for later use
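To make the linear regression analogy concrete before moving to the CNN, here is a tiny self-contained sketch (illustrative only, separate from the gesture pipeline) that fits y = ax + b to noisy synthetic data and recovers the slope and intercept:
###Code
import numpy as np

# Illustrative only: least-squares fit of y = ax + b on synthetic data
rng = np.random.RandomState(0)
x = np.linspace(0, 10, 100)
y = 2.5 * x + 1.0 + rng.normal(scale=1.0, size=x.shape)  # true a=2.5, b=1.0

a, b = np.polyfit(x, y, deg=1)  # returns the fitted slope and intercept
print("slope a = %.2f, intercept b = %.2f" % (a, b))
###Output
_____no_output_____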
###Code
# Recreate the exact same model, including weights and optimizer.
# model = keras.models.load_model('handrecognition_model.h5')
# model.summary()
# To use the pre-trained model, just load it and skip to the next session.
# Import of keras model and hidden layers for our convolutional network
from keras.models import Sequential
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers import Dense, Flatten
###Output
_____no_output_____
###Markdown
Convolutional neural networks (CNNs) are the current state-of-the-art model architecture for image classification tasks. CNNs apply a series of filters to the raw pixel data of an image to extract and learn higher-level features, which the model can then use for classification. CNNs contain three components:- Convolutional layers, which apply a specified number of convolution filters to the image. For each subregion, the layer performs a set of mathematical operations to produce a single value in the output feature map. Convolutional layers then typically apply a ReLU activation function to the output to introduce nonlinearities into the model.- Pooling layers, which downsample the image data extracted by the convolutional layers to reduce the dimensionality of the feature map in order to decrease processing time. A commonly used pooling algorithm is max pooling, which extracts subregions of the feature map (e.g., 2x2-pixel tiles), keeps their maximum value, and discards all other values.- Dense (fully connected) layers, which perform classification on the features extracted by the convolutional layers and downsampled by the pooling layers. In a dense layer, every node in the layer is connected to every node in the preceding layer.https://www.tensorflow.org/tutorials/estimators/cnn
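As a small illustration of the max-pooling description above (toy numbers, not part of the dataset), here is 2x2 max pooling done by hand with NumPy:
###Code
# Toy example: 2x2 max pooling over a 4x4 "feature map"
import numpy as np
fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 1],
                 [0, 1, 5, 2],
                 [2, 2, 1, 3]], dtype=float)
# Split into non-overlapping 2x2 tiles and keep the maximum of each tile
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[4. 2.]
               #  [2. 5.]]
###Output
_____no_output_____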
###Code
# Construction of model
model = Sequential()
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=(120, 320, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
Our Convolutional Neural Network consists of different layers that have different functions. As explained before, the Conv2D layer performs a 2-D convolution operation, which can basically be interpreted as a mathematical operation that calculates weights inside the image. In order to maximize the network's performance, we need to understand the parameters these layers require.The first parameter required by the Conv2D layer is the number of filters that the convolutional layer will learn. Layers early in the network architecture (closer to the actual input image) learn fewer convolutional filters, while layers deeper in the network (closer to the output predictions) learn more filters. This permits information to flow through the network without loss. These filters emulate edge detectors, blob detectors and other feature extraction methods.The number of filters needs to be tuned, but it is common practice to use powers of 2, starting with 32, 64, 128 and increasing to 256, 512, 1024, for example.Figure 2 - Example of 2D convolution operation.Another parameter required by the Conv2D layer is the kernel_size, a 2-tuple specifying the width and height of the 2D convolution window. The kernel dimensions must be odd, with typical values of (1, 1), (3, 3), (5, 5) and (7, 7); it's rare to see kernel sizes larger than 7×7. If the input images are larger than 128×128, it is recommended to test a kernel size > 3 to help learn larger spatial filters and to help reduce volume size.Then, MaxPooling2D is used to reduce the spatial dimensions of the output volume. It reduces processing time and allows assumptions to be made about the features contained in the binned sub-regions. Notice that in this network the output spatial volume is decreasing while the number of learned filters is increasing; this is a common practice in designing CNN architectures.Finally, ReLU stands for rectified linear unit, a type of activation function. ReLU is the most commonly used activation function in neural networks, especially in CNNs. It is linear (identity) for all positive values and zero for all negative values, which makes it cheap to compute, so the model takes less time to train or run. It also supplies the non-linearity the model needs while largely avoiding the 'vanishing gradient problem' suffered by activation functions like sigmoid or tanh, so networks tend to converge faster.In the end, there is a lot of trial and error involved in finding the best parameters and network architecture, but these common practices help reach a good result faster.
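To check the claim that the spatial volume shrinks while the number of learned filters grows (and to see ReLU on a couple of numbers), we can inspect the model we just built:
###Code
import numpy as np
print(np.maximum(0, np.array([-2.0, -0.5, 0.0, 1.5])))  # ReLU by hand: negatives become 0, positives pass through
model.summary()  # Output shapes shrink after each MaxPooling2D while filter counts grow (32 -> 64 -> 64)
###Output
_____no_output_____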
###Code
# Configures the model for training
model.compile(optimizer='adam', # Optimization routine, which tells the computer how to adjust the parameter values to minimize the loss function.
loss='sparse_categorical_crossentropy', # Loss function, which tells us how bad our predictions are.
metrics=['accuracy']) # List of metrics to be evaluated by the model during training and testing.
# Trains the model for a given number of epochs (iterations on a dataset) and validates it.
model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=2, validation_data=(X_test, y_test))
# Save entire model to a HDF5 file
model.save('handrecognition_model.h5')
###Output
_____no_output_____
###Markdown
Testing ModelNow that we have the model compiled and trained, we need to check whether it's any good. First, we run ```model.evaluate``` to test the accuracy. Then we make predictions and plot the images along with the predicted and true labels to check everything. With that, we can see how our algorithm is working. Finally, we produce a confusion matrix, a specific table layout that allows visualization of the performance of an algorithm. Overview:- Evaluate model- Predictions- Plot images with predictions- Visualize model
###Code
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy: {:2.2f}%'.format(test_acc*100))
predictions = model.predict(X_test) # Make predictions towards the test set
np.argmax(predictions[0]), y_test[0] # If same, got it right
# Function to plot images and labels for validation purposes
def validate_9_images(predictions_array, true_label_array, img_array):
# Array for pretty printing and then figure size
class_names = ["down", "palm", "l", "fist", "fist_moved", "thumb", "index", "ok", "palm_moved", "c"]
plt.figure(figsize=(15,5))
for i in range(1, 10):
# Just assigning variables
prediction = predictions_array[i]
true_label = true_label_array[i]
img = img_array[i]
img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
# Plot in a good way
plt.subplot(3,3,i)
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(prediction) # Get index of the predicted label from prediction
# Change color of title based on good prediction or not
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("Predicted: {} {:2.0f}% (True: {})".format(class_names[predicted_label],
100*np.max(prediction),
class_names[true_label]),
color=color)
plt.show()
validate_9_images(predictions, y_test, X_test)
y_pred = np.argmax(predictions, axis=1) # Transform predictions into 1-D array with label number
# H = Horizontal
# V = Vertical
pd.DataFrame(confusion_matrix(y_test, y_pred),
columns=["Predicted Thumb Down", "Predicted Palm (H)", "Predicted L", "Predicted Fist (H)", "Predicted Fist (V)", "Predicted Thumbs up", "Predicted Index", "Predicted OK", "Predicted Palm (V)", "Predicted C"],
index=["Actual Thumb Down", "Actual Palm (H)", "Actual L", "Actual Fist (H)", "Actual Fist (V)", "Actual Thumbs up", "Actual Index", "Actual OK", "Actual Palm (V)", "Actual C"])
###Output
_____no_output_____ |
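###Markdown
One convenient way to read the confusion matrix above (a small addition, reusing the `y_test` and `y_pred` arrays already computed) is to normalize each row, which gives the per-class recall:
###Code
cm = confusion_matrix(y_test, y_pred)
per_class_recall = cm.diagonal() / cm.sum(axis=1)  # Fraction of each actual class predicted correctly
print(np.round(per_class_recall, 3))
###Output
_____no_output_____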
components/gcp/dataproc/delete_cluster/sample.ipynb | ###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow pipeline. It is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline, even when earlier steps fail. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) secret in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/caa2dc56f29b0dce5216bec390b1685fc0cdc4b7/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
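###Markdown
The component description above notes that this delete step is usually paired with an exit handler. As a hedged sketch only (the body of the handler is left as a placeholder, since no cluster-creation or job ops are loaded in this notebook), the pattern looks like this:
###Code
# Sketch: register the delete step as an exit op so the temporary cluster is
# removed even if a step inside the handler fails.
@dsl.pipeline(
    name='Dataproc with cleanup',
    description='Sketch of the exit-handler pattern'
)
def dataproc_with_cleanup_pipeline(
    project_id = PROJECT_ID,
    region = REGION,
    name = CLUSTER_NAME
):
    delete_task = dataproc_delete_cluster_op(
        project_id=project_id,
        region=region,
        name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
    with dsl.ExitHandler(delete_task):
        pass  # create the cluster and submit Dataproc jobs here
###Output
_____no_output_____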
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.0.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e7a021ed1da6b0ff21f7ba30422decbdcdda0c20/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-alpha.2/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.1-beta.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/3f4b80127f35e40760eeb1813ce1d3f641502222/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/2df775a28045bda15372d6dd4644f71dcfe41bfe/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/02c991dd265054b040265b3dfa1903d5b49df859/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/38771da09094640cd2786a4b5130b26ea140f864/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
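###Markdown
The unpinned install above pulls the latest SDK release, which may be a 2.x version with an incompatible DSL. Since these samples target the v1 API, pinning below 2.0 is a safer assumption when reproducing them:
###Code
# Pin to the v1 SDK line that this sample was written against
# (an assumption about your environment, not part of the original sample).
!pip3 install 'kfp>=1.0,<2' --upgrade
###Output
_____no_output_____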
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
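###Markdown
`load_component_from_url` pins the op to a specific release of the component definition. If the repository has been cloned locally, the same definition can be loaded from disk instead; the path below is a hypothetical example:
###Code
# Alternative: load the component definition from a local checkout
# (hypothetical path; adjust to where component.yaml actually lives).
# dataproc_delete_cluster_op = comp.load_component_from_file(
#     'pipelines/components/gcp/dataproc/delete_cluster/component.yaml')
###Output
_____no_output_____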
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
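###Markdown
The optional `wait_interval` argument listed in the runtime-arguments table controls how often the component polls the delete operation and defaults to 30 seconds. Passing it explicitly is a one-line change (a sketch of the same pipeline with faster polling):
###Code
@dsl.pipeline(
    name='Dataproc delete cluster pipeline (custom polling)',
    description='Deletes a cluster, polling the operation every 10 seconds'
)
def dataproc_delete_cluster_fast_poll_pipeline(
    project_id = PROJECT_ID,
    region = REGION,
    name = CLUSTER_NAME
):
    dataproc_delete_cluster_op(
        project_id=project_id,
        region=region,
        name=name,
        wait_interval=10)  # poll every 10 seconds instead of the default 30
###Output
_____no_output_____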
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
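###Markdown
`run_pipeline` returns as soon as the run is created. To block until the run finishes, for example in a scripted test, the client can poll for completion; a sketch, assuming the v1 SDK where the returned run object exposes an `id` field:
###Code
# Wait up to 10 minutes for the run to finish, then report its final status.
completed = client.wait_for_run_completion(run_result.id, timeout=600)
print(completed.run.status)
###Output
_____no_output_____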
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/eb830cd73ca148e5a1a6485a9374c2dc068314bc/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
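###Markdown
In this older SDK version, `gcp.use_gcp_secret('user-gcp-sa')` modifies the step to mount the `user-gcp-sa` Kubernetes secret and point `GOOGLE_APPLICATION_CREDENTIALS` at the mounted key file, which is how the op authenticates to GCP. The same modifier can be applied to any GCP-facing op in the pipeline (a sketch; `some_other_gcp_op` is a hypothetical step):
###Code
# Apply the credential secret to another, hypothetical GCP step.
# some_other_gcp_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____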
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e8524eefb138725fc06600d1956da0f4dd477178/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.0.4/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.3.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/74d8e592174ae90175f66c3c00ba76a835cfba6d/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/06401ecc8f1561509ef095901a70b3543c2ca30f/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.6.0-rc.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/a97f1d0ad0e7b92203f35c5b0b9af3a314952e05/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-alpha.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/v1.7.0-alpha.3/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0-rc.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.2-rc.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code; it shows how to execute the component in a pipeline. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
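###Markdown
In practice the delete step usually runs inside an exit handler, so the temporary cluster is cleaned up even if an upstream step fails. The following is a minimal sketch of that pattern using the KFP DSL; the cluster-creation and job ops named in the comments are hypothetical placeholders for whatever steps your pipeline actually runs.
###Code
#A sketch of wrapping pipeline steps in an exit handler so cleanup always runs
@dsl.pipeline(
    name='Dataproc delete cluster with exit handler',
    description='Ensures the temporary cluster is deleted when the pipeline exits'
)
def dataproc_with_cleanup_pipeline(
    project_id = PROJECT_ID,
    region = REGION,
    name = CLUSTER_NAME
):
    delete_op = dataproc_delete_cluster_op(
        project_id=project_id,
        region=region,
        name=name)
    #Everything inside the handler triggers the delete op on pipeline exit
    with dsl.ExitHandler(delete_op):
        #Create the cluster and submit Dataproc jobs here, e.g.:
        #create_op = dataproc_create_cluster_op(...)
        #job_op = dataproc_submit_pyspark_job_op(...).after(create_op)
        pass
###Output
_____no_output_____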
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
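###Markdown
The call above returns as soon as the run is submitted. To make the notebook block until the run finishes, the KFP client provides a wait helper; the sketch below reuses the `run_result` object from the previous cell, with a 20-minute timeout chosen purely for illustration.
###Code
#Block until the submitted run completes or the timeout (in seconds) expires
run_detail = client.wait_for_run_completion(run_result.id, timeout=1200)
print(run_detail.run.status)
###Output
_____no_output_____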
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1f65a564d4d44fa5a0dc6c59929ca2211ebb3d1c/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
Dataproc - Delete Cluster Intended UseA Kubeflow Pipeline component to delete a cluster in the Google Cloud Dataproc service. Run-Time Parameters:Name | Description:--- | :----------project_id | Required. The ID of the Google Cloud Platform project that the cluster belongs to.region | Required. The Cloud Dataproc region in which to handle the request.name | Required. The name of the cluster to delete.wait_interval | The number of seconds to wait between polls of the operation. Defaults to 30s. SampleNote: the sample code below works both in an IPython notebook and directly in Python code. PrerequisitesBefore running the sample code, you need to [create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster). Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
COMPONENT_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataproc/delete_cluster/component.yaml'
###Output
_____no_output_____
###Markdown
Install KFP SDKInstall the SDK (uncomment the code below if the SDK has not been installed yet)
###Code
#KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.12/kfp.tar.gz'
#!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
Load component definitions
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(COMPONENT_SPEC_URI)
display(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
Here is an illustrative pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(project_id, region, name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
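###Markdown
This compiler invocation packages the workflow as a gzipped tarball. A quick sanity check before submitting is to list the archive contents, as sketched below using only the standard library.
###Code
import tarfile
#List the files inside the compiled .pipeline.tar.gz package
with tarfile.open(pipeline_filename) as tar:
    print(tar.getnames())
###Output
_____no_output_____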
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e4d9e2b67cf39c5f12b9c1477cae11feb1a74dc7/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/4e7e6e866c1256e641b0c3effc55438e6e4b30f6/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
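###Markdown
The sample assumes an existing cluster. If you need a throwaway cluster to delete, one way to create it is with the gcloud CLI, as sketched below; the `--single-node` flag is just an economical choice for a test cluster, and the command assumes gcloud is installed and authenticated.
###Code
#Create a small test cluster to delete later (assumes an authenticated gcloud CLI)
!gcloud dataproc clusters create $CLUSTER_NAME --project $PROJECT_ID --region $REGION --single-node
###Output
_____no_output_____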
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.6.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
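###Markdown
Loading from a URL pins the component to a specific release tag. If you have the kubeflow/pipelines repository checked out locally, the same definition can be loaded from disk instead; the path below is a hypothetical local checkout, so the cell is left commented out.
###Code
#Alternative: load the component definition from a local checkout instead of a URL
#dataproc_delete_cluster_op = comp.load_component_from_file(
#    'pipelines/components/gcp/dataproc/delete_cluster/component.yaml')
###Output
_____no_output_____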
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
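###Markdown
Here the compiled package is a zip archive containing the workflow specification. As with the tarball variant shown earlier, listing the archive contents is a quick sanity check before submission.
###Code
import zipfile
#List the files inside the compiled pipeline .zip package
with zipfile.ZipFile(pipeline_filename) as zf:
    print(zf.namelist())
###Output
_____no_output_____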
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/48dd338c8ab328084633c51704cda77db79ac8c2/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.2/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
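###Markdown
The pipeline above leaves `wait_interval` at its default of 30 seconds (see the runtime arguments table). The sketch below is a variant of the same step that polls the delete operation every 10 seconds instead.
###Code
#Variant of the delete step with a faster polling interval
@dsl.pipeline(
    name='Dataproc delete cluster fast-poll pipeline',
    description='Deletes a cluster while polling the operation every 10 seconds'
)
def dataproc_delete_cluster_fast_poll_pipeline(
    project_id = PROJECT_ID,
    region = REGION,
    name = CLUSTER_NAME
):
    dataproc_delete_cluster_op(
        project_id=project_id,
        region=region,
        name=name,
        wait_interval=10)
###Output
_____no_output_____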
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
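###Markdown
The empty `arguments` dictionary above means the run uses the defaults baked into the pipeline function. Parameters can also be overridden per run by keying the dictionary on parameter names; the cluster name below is a hypothetical example value.
###Code
#Override pipeline parameters for a single run without recompiling
override_arguments = {'name': 'my-temporary-cluster'}
override_run = client.run_pipeline(
    experiment.id, run_name + ' (override)', pipeline_filename, override_arguments)
###Output
_____no_output_____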
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/ff116b6f1a0f0cdaafb64fcd04214c169045e6fc/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/0b07e456b1f319d8b7a7301274f55c00fda9f537/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/2e52e54166795d20e92d287bde7b800b181eda02/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
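###Markdown
`kfp.Client()` with no arguments assumes the notebook runs inside the Kubeflow cluster and can reach the pipeline service directly. When running elsewhere, the client can be pointed at the KFP API endpoint explicitly; the URL below is a hypothetical example, so the cell is left commented out.
###Code
#Connect to a Kubeflow Pipelines endpoint from outside the cluster
#client = kfp.Client(host='http://localhost:8080/pipeline')
###Output
_____no_output_____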
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/f379080516a34d9c257a198cde9ac219d625ab84/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.2.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
Deleting a Cluster with Cloud DataprocA Kubeflow Pipeline component to delete a cluster in the Cloud Dataproc service. Intended UseUse the component to recycle a Dataproc cluster as one of the steps in a KFP pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) so that it runs at the end of a pipeline. Runtime argumentsName | Description | Type | Optional | Default:--- | :---------- | :--- | :------- | :------project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | GCPProjectID | No |region | The Cloud Dataproc region that runs the cluster to delete. | GCPRegion | No |name | The name of the cluster to delete. | String | No |wait_interval | The number of seconds to pause between polls of the delete operation's status. | Integer | Yes | `30` Cautions & requirementsTo use the component, you must:* Set up the project by following the [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under the secret of a [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example:```component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))```* Grant the Kubeflow user service account the `roles/dataproc.editor` role on the project. Detailed DescriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Here are the steps to use the component in a pipeline:1. Install KFP SDK
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
For more information about the component, please check out:* [Component Python code](https://github.com/kubeflow/pipelines/blob/master/component_sdk/python/kfp_component/google/dataproc/_delete_cluster.py)* [Component Docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataproc/delete_cluster/sample.ipynb)* [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete) SampleNote: the sample code below works both in an IPython notebook and directly in Python code. PrerequisitesBefore running the sample code, you need to [create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster). Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
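###Markdown
After the run completes, you can confirm that the cluster is actually gone. One quick check, assuming an authenticated gcloud CLI, is to list the remaining clusters in the region, as sketched below.
###Code
#Verify the cluster was deleted (assumes an authenticated gcloud CLI)
!gcloud dataproc clusters list --project $PROJECT_ID --region $REGION
###Output
_____no_output_____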
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component to delete a temporary Cloud Dataproc cluster that was created to run Cloud Dataproc jobs as steps in a Kubeflow Pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) so that it runs at the end of the pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polls of the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using the [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete). Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.0/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
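###Markdown
The wait_interval argument listed in the runtime-arguments table is optional. The variant below is a minimal sketch showing it passed explicitly; the 60-second value is an arbitrary illustration.
###Code
@dsl.pipeline(
    name='Dataproc delete cluster pipeline (slow poll)',
    description='Illustrative variant passing the optional wait_interval'
)
def dataproc_delete_cluster_slow_poll_pipeline(
    project_id = PROJECT_ID,
    region = REGION,
    name = CLUSTER_NAME
):
    dataproc_delete_cluster_op(
        project_id=project_id,
        region=region,
        name=name,
        wait_interval=60)  # poll the delete operation every 60s instead of the default 30
###Output
_____no_output_____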
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.1.0-alpha.1/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/a8d3b6977df26a89701cd229f01c1840a8475521/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.5.0-rc.3/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e598176c02f45371336ccaa819409e8ec83743df/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/0ad0b368802eca8ca73b40fe08adb6d97af6a62f/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.2/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
Dataproc - Delete Cluster Intended UseA Kubeflow Pipeline component to delete a cluster in the Google Cloud Dataproc service. Run-Time Parameters| Name | Description ||:--- |:----------|| project_id | Required. The ID of the Google Cloud Platform project that the cluster belongs to. || region | Required. The Cloud Dataproc region in which to handle the request. || name | Required. The name of the cluster to delete. || wait_interval | The number of seconds to wait between polling the operation. Defaults to 30. | SampleNote: the sample code below works in an IPython notebook or directly in Python code. PrerequisitesBefore running the sample code, you need to [create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster). Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
COMPONENT_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/e5b0081cdcbef6a056c0da114d2eb81ab8d8152d/components/gcp/dataproc/delete_cluster/component.yaml'
###Output
_____no_output_____
###Markdown
Install KFP SDKInstall the SDK (uncomment the code below if the SDK is not already installed)
###Code
#KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.12/kfp.tar.gz'
#!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
Load component definitions
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(COMPONENT_SPEC_URI)
display(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
Here is an illustrative pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(project_id, region, name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/0e794e8a0eff6f81ddc857946ee8311c7c431ec2/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* Run the component under a secret [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/gcp-service-accounts) in a Kubeflow cluster. For example: ``` component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa')) ```* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/d0aa15dfb3ff618e8cd1b03f86804ec4307fd9c2/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name).apply(gcp.use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____
###Markdown
NameData preparation by deleting a cluster in Cloud Dataproc LabelCloud Dataproc, cluster, GCP, Cloud Storage, Kubeflow, Pipeline SummaryA Kubeflow Pipeline component to delete a cluster in Cloud Dataproc. Intended useUse this component at the start of a Kubeflow Pipeline to delete a temporary Cloud Dataproc cluster to run Cloud Dataproc jobs as steps in the pipeline. This component is usually used with an [exit handler](https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py) to run at the end of a pipeline. Runtime arguments| Argument | Description | Optional | Data type | Accepted values | Default ||----------|-------------|----------|-----------|-----------------|---------|| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | || region | The Cloud Dataproc region in which to handle the request. | No | GCPRegion | | || name | The name of the cluster to delete. | No | String | | || wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 | Cautions & requirementsTo use the component, you must:* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project. Detailed descriptionThis component deletes a Dataproc cluster by using [Dataproc delete cluster REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/delete).Follow these steps to use the component in a pipeline:1. Install the Kubeflow Pipeline SDK:
###Code
%%capture --no-stderr
!pip3 install kfp --upgrade
###Output
_____no_output_____
###Markdown
2. Load the component using KFP SDK
###Code
import kfp.components as comp
dataproc_delete_cluster_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/38771da09094640cd2786a4b5130b26ea140f864/components/gcp/dataproc/delete_cluster/component.yaml')
help(dataproc_delete_cluster_op)
###Output
_____no_output_____
###Markdown
SampleNote: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Prerequisites[Create a Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) before running the sample code. Set sample parameters
###Code
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
EXPERIMENT_NAME = 'Dataproc - Delete Cluster'
###Output
_____no_output_____
###Markdown
Example pipeline that uses the component
###Code
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc delete cluster pipeline',
description='Dataproc delete cluster pipeline'
)
def dataproc_delete_cluster_pipeline(
project_id = PROJECT_ID,
region = REGION,
name = CLUSTER_NAME
):
dataproc_delete_cluster_op(
project_id=project_id,
region=region,
name=name)
###Output
_____no_output_____
###Markdown
Compile the pipeline
###Code
pipeline_func = dataproc_delete_cluster_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
###Output
_____no_output_____
###Markdown
Submit the pipeline for execution
###Code
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
###Output
_____no_output_____ |
examples/gallery/bokeh/brexit_choropleth.ipynb | ###Markdown
Declaring data
###Code
# Imports needed by this notebook (assumed from the standard GeoViews gallery setup)
import pandas as pd
import geopandas as gpd
import geoviews as gv
gv.extension('bokeh')

geometries = gpd.read_file('../../assets/boundaries/boundaries.shp')
referendum = pd.read_csv('../../assets/referendum.csv')
gdf = gpd.GeoDataFrame(pd.merge(geometries, referendum))
###Output
_____no_output_____
###Markdown
Plot
###Code
gv.Polygons(gdf, vdims=['name', 'leaveVoteshare'], label='Brexit Referendum Vote').opts(
tools=['hover'], width=550, height=700, color='leaveVoteshare',
colorbar=True, toolbar='above', xaxis=None, yaxis=None)
###Output
_____no_output_____
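###Markdown
The plot above renders inline; it can also be written to a standalone HTML file. The cell below is a minimal sketch assuming HoloViews (which GeoViews builds on) is available; the output filename is an arbitrary illustration.
###Code
import holoviews as hv

choropleth = gv.Polygons(gdf, vdims=['name', 'leaveVoteshare'],
                         label='Brexit Referendum Vote').opts(
    tools=['hover'], width=550, height=700, color='leaveVoteshare',
    colorbar=True, toolbar='above', xaxis=None, yaxis=None)
hv.save(choropleth, 'brexit_choropleth.html')  # writes a self-contained HTML file
###Output
_____no_output_____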
###Markdown
Declaring data
###Code
# Imports needed by this notebook (assumed from the standard GeoViews gallery setup)
import pandas as pd
import geopandas as gpd
import geoviews as gv
gv.extension('bokeh')

geometries = gpd.read_file('../../assets/boundaries/boundaries.shp')
referendum = pd.read_csv('../../assets/referendum.csv')
gdf = gpd.GeoDataFrame(pd.merge(geometries, referendum))
###Output
_____no_output_____
###Markdown
Plot
###Code
gv.Polygons(gdf, vdims=['name', 'leaveVoteshare'], label='Brexit Referendum Vote').opts(
tools=['hover'], width=550, height=700, color='leaveVoteshare',
colorbar=True, toolbar='above', xaxis=None, yaxis=None)
###Output
_____no_output_____
###Markdown
Declaring data
###Code
# Imports needed by this notebook (assumed from the standard GeoViews gallery setup)
import pandas as pd
import geopandas as gpd
import geoviews as gv
gv.extension('bokeh')

geometries = gpd.read_file('../../assets/boundaries/boundaries.shp')
referendum = pd.read_csv('../../assets/referendum.csv')
gdf = gpd.GeoDataFrame(pd.merge(geometries, referendum))
###Output
_____no_output_____
###Markdown
Plot
###Code
plot_opts = dict(tools=['hover'], width=550, height=700, color_index='leaveVoteshare',
colorbar=True, toolbar='above', xaxis=None, yaxis=None)
gv.Polygons(gdf, vdims=['name', 'leaveVoteshare'], label='Brexit Referendum Vote').opts(plot=plot_opts)
###Output
_____no_output_____ |
Lab3/Lab3.ipynb | ###Markdown
Compute the dimensions of the horn
###Code
# Define constants
epsilon_ap = 0.51
a = 6.5 * 2.54 / 100 # waveguide width: 6.5 in converted to meters
b = 4.0 * 2.54 / 100 # waveguide height: 4.0 in converted to meters
max_d1 = 32 * 2.54 / 100 # maximum dimension (32 in), meters
max_d2 = 40 * 2.54 / 100 # maximum dimension (40 in), meters
wavelength = 21.106 / 100 # HI wavelength in meters
def gain_equation(A,a,b,wavelen,ep):
return ( (4*np.pi)/(wavelen**2) ) * (ep*A/2) * (b+np.sqrt(b**2+(8*A*(A-a)/3)))
#def
def optimum_pyramidal_horn(A,a,b,wavelen,ep):
G = gain_equation(A,a,b,wavelen,ep)
return (A**4)-(a*A**3)+(A*(3*b*G*wavelen**2/(8*np.pi*ep)))-(3*(G**2)*(wavelen**4)/(32*(np.pi**2)*(ep**2)))
#def
def calc_Rh(A,a,wavelen):
return A*(A-a)/(3*wavelen)
#def
def calc_B(A,b):
return 0.5*( b + np.sqrt( b**2 + 8*A*(A-a)/3 ) )
#def
def calc_R0H(Rh,A,a):
return Rh*A/(A-a)
#def
def calc_R0E(Re,B,b):
return Re*B/(B-b)
#def
def calc_alphaH(R0_H,A):
return np.arctan((A/2)/R0_H)*180/np.pi
#def
def calc_alphaE(R0_E,B):
return np.arctan((B/2)/R0_E)*180/np.pi
#def
A = 32*2.54/100 # In meters
R_H = calc_Rh(A,a,wavelength)
R_E = calc_Rh(A,a,wavelength)
B = calc_B(A,b)
G = gain_equation(A,a,b,wavelength,epsilon_ap)
R0_H = calc_R0H(R_H,A,a)
R0_E = calc_R0E(R_E,B,b)
alpha_H = calc_alphaH(R0_H,A)
alpha_E = calc_alphaE(R0_E,B)
A = 50*2.54/100 # In meters; 50 in aperture (this recomputation overwrites the 32 in results above)
R_H = calc_Rh(A,a,wavelength)
R_E = calc_Rh(A,a,wavelength)
B = calc_B(A,b)
G = gain_equation(A,a,b,wavelength,epsilon_ap)
R0_H = calc_R0H(R_H,A,a)
R0_E = calc_R0E(R_E,B,b)
alpha_H = calc_alphaH(R0_H,A)
alpha_E = calc_alphaE(R0_E,B)
print( 'A: '+str(A) )
print( 'B: '+str(B) )
print( 'R H: '+str(R_H) )
print( 'R E: '+str(R_E) )
print( 'R0 H: '+str(R0_H) )
print( 'R0 E: '+str(R0_E) )
print( 'G: '+str(R_H) )
print( 'Alpha H: '+str(alpha_H) )
print( 'Alpha E: '+str(alpha_E) )
###Output
A: 1.27
B: 1.019336339018831
R H: 2.216151805173884
R E: 2.216151805173884
R0 H: 2.5473009254872228
R0 E: 2.461495716962743
G: 2.216151805173884
Alpha H: 13.99759747002781
Alpha E: 11.698145498636139
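###Markdown
The function optimum_pyramidal_horn defined above is never actually solved in this cell. As a reference, the short sketch below finds the aperture width A satisfying the quartic for a fixed target gain using numpy's polynomial root finder. It is illustrative only: G_target is an assumed value, not one from the lab, and the quartic coefficients are transcribed from the function above.
###Code
# Sketch: solve the optimum-horn quartic for A at a fixed target gain G_target
import numpy as np

G_target = 200.0  # assumed target gain (linear, not dB) -- illustrative only
coeffs = [1.0,
          -a,
          0.0,
          3*b*G_target*wavelength**2/(8*np.pi*epsilon_ap),
          -3*(G_target**2)*(wavelength**4)/(32*(np.pi**2)*(epsilon_ap**2))]
roots = np.roots(coeffs)
# Keep only the physically meaningful (real, positive) aperture widths
real_roots = roots[np.abs(roots.imag) < 1e-9].real
A_candidates = real_roots[real_roots > 0]
print('Candidate optimum A (m):', A_candidates)
###Output
_____no_output_____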
###Markdown
Calculate the RA/Dec of the galactic plane
###Code
import importlib
import ast2050.lab3
importlib.reload(ast2050.lab3)
ra, dec = ast2050.lab3.calculate_galactic_longitude_radec( np.linspace(0,360,num=10) )
alt, az = ast2050.lab3.calculate_galactic_longitude_altaz( np.linspace(0,360,num=10), '2018-3-20 17:00:00' )
alt
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy import units as apu
coords = SkyCoord( ra=99.4279583*apu.deg, dec=16.3992778*apu.deg )
toronto_location = EarthLocation( lon=(360-79.3832)*apu.deg, lat=43.6532*apu.deg,
height=70*apu.m )
date_time_string = '2019-3-20 16:50:00'
utcoffset = -4*apu.hour # EDT
time = Time(date_time_string) - utcoffset
coord_altaz = coords.transform_to(AltAz(obstime=time,
location=toronto_location))
coord_altaz.alt
###Output
_____no_output_____
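###Markdown
The two helper calls above come from the lab's private ast2050.lab3 module. For reference, the sketch below performs an equivalent RA/Dec computation with plain astropy; setting galactic latitude b = 0 traces the plane itself.
###Code
# Sketch using plain astropy (not the lab module): RA/Dec of points along
# the galactic plane (b = 0), sampled in galactic longitude l
import numpy as np
from astropy.coordinates import SkyCoord
from astropy import units as u

l = np.linspace(0, 360, num=10) * u.deg
plane = SkyCoord(l=l, b=0*u.deg, frame='galactic').icrs
print(plane.ra.deg)
print(plane.dec.deg)
###Output
_____no_output_____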
###Markdown
Try and determine some power spectra
###Code
# Parameters of the fit
sample_rate = 5.0E6 # Samples per second. Could also be 20 MHz for March 20 data
local_oscillator_frequency = 1.420E9 # Frequency in hertz
bandpass = 5E6 # Could be 10 MHz for March 20 data
data_path = '/Users/JamesLane/Desktop/my_data.dat'
background_path = '/Users/JamesLane/Desktop/background.dat'
data = np.fromfile(data_path, dtype='int16')[:10000] - 2**11
background = np.fromfile(background_path, dtype='int16')[:10000] - 2**11
# Trim the background time series to match the length of the data time series
background = background[:len(data)]
# Take the fourier transform and shift it
ft_data = np.fft.fftshift(np.fft.fft( data ))
ft_background = np.fft.fftshift(np.fft.fft( background ))
# Determine the frequencies
freq_data = np.fft.fftshift( np.fft.fftfreq( len(data), 1/sample_rate ) )
freq_background = np.fft.fftshift( np.fft.fftfreq( len(data), 1/sample_rate ) )
###Output
_____no_output_____
###Markdown
We need to be aware that the outputs of the mixer are frequencies at $f_{1} = f_{RF} + f_{LO}$ and $f_{2} = f_{RF} - f_{LO}$. So we are aiming to recover: $f_{RF} = |f_{RF} - f_{LO}| + f_{LO}$
###Code
# Calculate the power using the periodogram, folding together the power at
# mirrored (positive/negative) frequencies of the shifted spectrum
power_data = np.abs(ft_data)**2 + np.abs(ft_data[::-1])**2
power_background = np.abs(ft_background)**2 + np.abs(ft_background[::-1])**2
# Only take the positive frequencies:
n_data = len(data)
freq_data_rf = np.abs( freq_data[ :int(len(data)/2) ] ) + local_oscillator_frequency + bandpass/4
power_data_rf = power_data[ :int(len(data)/2) ] / n_data**2
power_background_rf = power_background[ :int(len(data)/2) ] / n_data**2
# Subtract the background and normalize the power
power_data_rf_bsub = ( power_data_rf - power_background_rf )
###Output
_____no_output_____
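###Markdown
As a sanity check on the frequency bookkeeping above, the rest-frame HI line should land at a predictable intermediate frequency after mixing with the local oscillator. The short cell below is an illustrative check and not part of the original analysis.
###Code
# Illustrative sanity check: expected intermediate frequency of the HI line
f_HI = 1.4204058e9  # rest frequency of the 21 cm line, Hz
f_IF = abs(f_HI - local_oscillator_frequency)
print('Expected IF of the HI line: {:.3f} MHz'.format(f_IF / 1e6))
###Output
_____no_output_____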
###Markdown
Plot
###Code
fig = plt.figure( figsize=(10,5) )
ax = fig.add_subplot(111)
ax.plot( freq_data_rf/1.0E9, power_data_rf, linewidth=0.2, color='DodgerBlue' )
ax.plot( freq_data_rf/1.0E9, power_background_rf, linewidth=0.2, color='Red' )
ax.plot( freq_data_rf/1.0E9, power_data_rf_bsub, linewidth=1.0, color='Black')
ax.set_xlabel('Frequency [GHz]')
ax.set_ylabel('Power')
plt.show()
pass;
###Output
_____no_output_____
###Markdown
Try the sun
###Code
data_path = '/Users/JamesLane/Desktop/sun_on.dat'
background_path = '/Users/JamesLane/Desktop/sun_off.dat'
data = np.fromfile(data_path, dtype='int16')[:10000] - 2**11
background = np.fromfile(background_path, dtype='int16')[:10000] - 2**11
# Trim the background time series to match the length of the data time series
background = background[:len(data)]
# Take the fourier transform and shift it
ft_data = np.fft.fftshift(np.fft.fft( data ))
ft_background = np.fft.fftshift(np.fft.fft( background ))
# Determine the frequencies
freq_data = np.fft.fftshift( np.fft.fftfreq( len(data), 1/sample_rate ) )
freq_background = np.fft.fftshift( np.fft.fftfreq( len(data), 1/sample_rate ) )
# Calculate the power using the periodogram, folding together the power at
# mirrored (positive/negative) frequencies of the shifted spectrum
power_data = np.abs(ft_data)**2 + np.abs(ft_data[::-1])**2
power_background = np.abs(ft_background)**2 + np.abs(ft_background[::-1])**2
# Only take the positive frequencies:
n_data = len(data)
freq_data_rf = np.abs( freq_data[ :int(len(data)/2) ] ) + local_oscillator_frequency
power_data_rf = power_data[ :int(len(data)/2) ] / n_data**2
power_background_rf = power_background[ :int(len(data)/2) ] / n_data**2
# Subtract the background and normalize the power
power_data_rf_bsub = ( power_data_rf - power_background_rf )
fig = plt.figure( figsize=(10,5) )
ax = fig.add_subplot(111)
ax.plot( freq_data_rf/1.0E9, np.log10(power_data_rf) , linewidth=0.2, color='DodgerBlue' )
ax.plot( freq_data_rf/1.0E9, np.log10(power_background_rf) , linewidth=0.2, color='Red' )
ax.plot( freq_data_rf/1.0E9, np.log10(power_data_rf_bsub) , linewidth=1.0, color='Black')
ax.set_xlabel('Frequency [GHz]')
ax.set_ylabel('Power')
plt.show()
pass;
###Output
_____no_output_____
###Markdown
Deep-Learning Classification of Play Type in NFL Play-By-Play Data Ian Johnson, Derek Phanekham, Travis Siems Introduction The NFL (National Football League) has 32 teams split into two conferences, the AFC and NFC. Each of the 32 teams plays 16 games during the regular season (non-playoff season) every year. Due to the considerable viewership of American football, as well as the pervasiveness of fantasy football, a great deal of data about the game is collected. During the 2015-2016 season, information about every play of every game was logged. All of that data was consolidated into a single data set, which is analyzed throughout this report.In this report, we will attempt to classify the type of a play, given the game situation before the play began. The Classification TaskWe will attempt to classify plays by play type using information about the state of the game prior to the start of the play. This is expected to be an exceptionally difficult classification task because of the amount of noise in the dataset (specifically, the decision to run vs. pass the ball is often a seemingly random one). A successful classifier would be of great value to defensive coordinators, who could call plays based on the expected offensive playcall. Because it may be very difficult to identify exactly which play will be called, it is also useful to report the probability of a given playcall in a situation. For example, it would be useful to provide the probability of a 4th-down conversion attempt, even if the overall prediction is that a punt occurs. Data PreparationTo prepare the data for classification, a number of variables from the original dataset will be removed, as they measure the result of the play rather than the state of the game before the play began. The dataset used here was already cleaned and preprocessed in our previous report.
###Code
#For final version of report, remove warnings for aesthetics.
import warnings
warnings.filterwarnings('ignore')
#Libraries used for data analysis
import pandas as pd
import numpy as np
from sklearn import preprocessing
df = pd.read_csv('data/cleaned.csv') # read in the csv file
colsToInclude = [ 'Drive', 'qtr', 'down',
'TimeSecs', 'yrdline100','ydstogo','ydsnet',
'GoalToGo','posteam','DefensiveTeam',
'PosTeamScore','ScoreDiff', 'PlayType']
df = df[colsToInclude]
df = df[[p not in ["Sack", "No Play", "QB Kneel", "Spike"] for p in df.PlayType]]
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 38600 entries, 0 to 42875
Data columns (total 13 columns):
Drive 38600 non-null int64
qtr 38600 non-null int64
down 38600 non-null int64
TimeSecs 38600 non-null float64
yrdline100 38600 non-null float64
ydstogo 38600 non-null float64
ydsnet 38600 non-null float64
GoalToGo 38600 non-null int64
posteam 38600 non-null object
DefensiveTeam 38600 non-null object
PosTeamScore 38600 non-null float64
ScoreDiff 38600 non-null float64
PlayType 38600 non-null object
dtypes: float64(6), int64(4), object(3)
memory usage: 4.1+ MB
###Markdown
Neural Network EmbeddingsWe will use neural-network embeddings, built in TensorFlow, for the posteam and DefensiveTeam columns. However, we will build these embeddings manually, using one-hot encoding followed by additional fully-connected layers in each of the deep architectures. The following Python function was used for one-hot encoding; it was adapted from the website referenced in the code.
###Code
from sklearn.feature_extraction import DictVectorizer
#Simple function for 1 hot encoding
def encode_onehot(df, cols):
    """
    One-hot encoding is applied to columns specified in a pandas DataFrame.
    Modified from: https://gist.github.com/kljensen/5452382
    Details:
    http://en.wikipedia.org/wiki/One-hot
    http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
    @param df pandas DataFrame
    @param cols a list of columns to encode
    @return a DataFrame with one-hot encoding
    """
    vec = DictVectorizer()
    # 'orient' (not the long-removed 'outtype') is the keyword to_dict expects
    vec_data = pd.DataFrame(vec.fit_transform(df[cols].to_dict(orient='records')).toarray())
    vec_data.columns = vec.get_feature_names()
    vec_data.index = df.index
    df = df.drop(cols, axis=1)
    df = df.join(vec_data)
    return df
df = encode_onehot(df, cols=['posteam', 'DefensiveTeam'])
###Output
_____no_output_____
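###Markdown
As a quick illustration of what the function above produces, here is a minimal sketch on a toy frame (the team codes 'NE' and 'DAL' are purely illustrative): each categorical column expands into one 0/1 indicator column per distinct value.
###Code
# Toy example (illustrative team codes): one indicator column per team,
# e.g. posteam=DAL and posteam=NE, with exactly one 1 per row.
toy = pd.DataFrame({'posteam': ['NE', 'DAL', 'NE']})
print(encode_onehot(toy, cols=['posteam']))
###Output
_____no_output_____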
###Markdown
The following are descriptions of the remaining data columns in the play-by-play dataset. Note that the one-hot encoded columns do not follow the structure listed below, but for the sake of readability they are presented as if they were not one-hot encoded.
* **GameID** (*nominal*): A unique integer which identifies each game played
* **Drive** (*ordinal*): The number of the drive during a game when the play occurred (indexed at one, so the first drive of the game has Drive 1 and the nth drive has Drive n)
* **qtr** (*interval*): The quarter of the game when the play occurred
* **down** (*interval*): The down when the play occurred (1st, 2nd, 3rd, or 4th)
* **TimeSecs** (*interval*): The remaining game time, in seconds, when the play began
* **yrdline100** (*ratio*): The absolute yard-line on the field where the play started (from 0 to 100, where 0 is the defensive end zone and 100 is the offensive end zone of the team with the ball)
* **ydstogo** (*ratio*): The number of yards from the line of scrimmage to the first-down line
* **ydsnet** (*ratio*): The number of yards from the beginning of the drive to the current line of scrimmage
* **GoalToGo** (*nominal*): A binary attribute whose value is 1 if there is no first down line (the end-zone is the first down line) or 0 if there is a normal first down line
* **posteam** (*nominal*): A 2-or-3 character code representing the team on offense
* **PosTeamScore** (*ratio*): The score of the team with possession of the ball
* **DefensiveTeam** (*nominal*): A 2-or-3 character code representing the team on defense
* **ScoreDiff** (*ratio*): The difference in score between the offensive and defensive teams at the time of the play
* **PlayType** (*nominal*): An attribute that identifies the type of play (i.e. Kickoff, Run, Pass, Sack, etc.)
Performance Metrics
The value of a classifier will be evaluated using the following cost matrix. Costs in the matrix which are set to 1 represent play predictions that would never actually occur in the context of a football game. For example, if we predicted a pass play and a kickoff occurs, then the classifier has a significant flaw. Bolded weights represent actual mispredictions that could occur.

| Predicted Play \ Actual Play | Pass     | Run      | Kickoff  | Punt     | Extra Point | Field Goal | Onside Kick |
|------------------------------|----------|----------|----------|----------|-------------|------------|-------------|
| Pass                         | 0        | **0.1**  | 1        | **0.15** | **0.15**    | **0.1**    | 1           |
| Run                          | **0.1**  | 0        | 1        | **0.15** | **0.15**    | **0.1**    | 1           |
| Kickoff                      | 1        | 1        | 0        | 1        | 1           | 1          | **0.75**    |
| Punt                         | **0.25** | **0.25** | 1        | 0        | 1           | **0.15**   | 1           |
| Extra Point                  | **0.4**  | **0.4**  | 1        | 1        | 0           | 1          | 1           |
| Field Goal                   | **0.4**  | **0.4**  | 1        | **0.1**  | 1           | 0          | 1           |
| Onside Kick                  | 1        | 1        | **0.25** | 1        | 1           | 1          | 0           |

This performance metric is the best for this classification problem because the actual potential cost of an incorrect play prediction varies significantly based on the nature of the misclassification. In an actual football game, it would be very costly to predict an extra point and have the opposing team run a pass play, since this would mean that they ran a fake extra point and went for a two-point conversion.
However, if a pass play is predicted and a run play occurs, the cost of the error is minimal because the defensive strategies for defending against run and pass plays are similar. Because the goal of this classification is to help inform defensive play-calling, a cost matrix is helpful because it allows a defensive coordinator to set his own costs to produce his own classifier, without any knowledge of the actual computation that occurs.
Cross Validation Methodology
We use a sequential k-fold partition of the data because this mirrors how data will be collected and analyzed. For our use, we assume that it is okay to use data in the “future” to predict data “now” because it can represent data from a previous football season. For example, if we use the first 90% of data for training and the remaining 10% of data for testing, that would simulate using most of the current season's data to predict plays towards the end of this season. If we use the first 50% and last 40% of data for training and the remaining 10% for testing, this would simulate using 40% of the previous season's data and the first 50% of this season's data to predict plays happening around the middle of the current season.
###Code
from sklearn.model_selection import KFold
#Using a 10-fold sequential split.
#This cv object is reused for the random forest comparison in the deployment section
cv = KFold(n_splits=10)
y,levels = pd.factorize(df.PlayType.values)
X = df.drop('PlayType', 1).values.astype(np.float32)
num_classes = len(levels)
###Output
_____no_output_____
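###Markdown
To make the sequential nature of the folds concrete, the minimal check below prints the index boundaries of the first two test folds; with shuffling left at its default of False, each test fold is one contiguous block of plays.
###Code
# Each test fold is a contiguous slice when shuffle=False (the default)
for i, (train_idx, test_idx) in enumerate(KFold(n_splits=10).split(X)):
    print(i, test_idx.min(), test_idx.max())
    if i == 1:
        break
###Output
_____no_output_____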
###Markdown
Modeling
Before we build any models, we define a cost function in Python below, which is used to test all of our forthcoming models. It computes the item-wise product of a confusion matrix and our cost matrix, and returns the sum of all of the elements in the resulting matrix. We also define a function to calculate the area under the ROC curve for a multiclass classification problem.
###Code
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, auc,make_scorer
from scipy import interp
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
cost_mat = [[0 ,.1 , 1 , .15 , 0.15, .1 , 1 ],
[.1 , 0 , 1 , 0.15, 0.15, 0.1, 1 ],
[1 , 1 , 0 , 1 , 1 , 1 , 0.75],
[.25,0.25, 1 , 0 , 1 ,0.15, 1 ],
[0.4, 0.4, 1 , 1 , 0 , 1 , 1 ],
[0.4, 0.4, 1 , 0.1 , 1 , 0 , 1 ],
[1 , 1 , 0.25, 1 , 1 , 1 , 0 ]]
def cost(Y, yhat):
    # Item-wise product of the confusion matrix and the cost matrix, summed
    return np.sum(np.multiply(confusion_matrix(Y, yhat), cost_mat))
def auc_of_roc(Y, yhat, levels=['Pass', 'Run', 'Kickoff', 'Punt', 'Extra Point', 'Field Goal', 'Onside Kick']):
    """Macro-average AUC: build a one-vs-rest ROC curve per class,
    average the interpolated curves, and return the area under the mean."""
    mean_tpr = 0.0
    mean_fpr = np.linspace(0, 1, 100)
    for c in levels:
        tempY = [x == c for x in Y]
        tempYhat = [x == c for x in yhat]
        fpr, tpr, thresholds = roc_curve(tempY, tempYhat)
        mean_tpr += interp(mean_fpr, fpr, tpr)
        mean_tpr[0] = 0.0
    mean_tpr /= len(levels)
    mean_tpr[-1] = 1.0
    mean_auc = auc(mean_fpr, mean_tpr)
    return mean_auc
#For use in the final deployment section
scorer = make_scorer(cost)
auc_roc_scorer = make_scorer(auc_of_roc)
###Output
_____no_output_____
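###Markdown
To make the cost computation concrete, here is a minimal two-class sketch restricted to the Pass/Run block of the cost matrix (where both mispredictions cost 0.1): one Run misread as a Pass contributes exactly 0.1.
###Code
# Two plays, both predicted Pass (0); the second was actually a Run (1).
toy_true = [0, 1]
toy_pred = [0, 0]
toy_cost = [[0, 0.1],
            [0.1, 0]]
print(np.sum(np.multiply(confusion_matrix(toy_true, toy_pred), toy_cost)))  # 0.1
###Output
_____no_output_____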
###Markdown
Some Setup Code for TensorFlow Imports
###Code
import tensorflow as tf
from tensorflow.contrib import learn
from tensorflow.contrib import layers
#Suppress all non-error warnings
tf.logging.set_verbosity(tf.logging.ERROR)
###Output
_____no_output_____
###Markdown
Calculating Costs for a Model
The following code performs cross validation on a model with a given step count and learning rate. This will be used as part of the grid search, as well as for evaluating the final classifier after the grid search is complete.
###Code
def get_scores_for_model(model_fn, X, y, steps=1000, learning_rate=0.05, num_splits=10):
    # 'aucs' rather than 'auc', to avoid shadowing sklearn's auc function
    aucs = []
    costs = []
    for train_index, test_index in KFold(n_splits=num_splits).split(X, y):
        classifier = learn.TensorFlowEstimator(model_fn=model_fn,
                                               n_classes=7, batch_size=1000,
                                               steps=steps, learning_rate=learning_rate)
        classifier.fit(X[train_index], y[train_index])
        yhat = classifier.predict(X[test_index])
        costs.append(cost(y[test_index], yhat))
        aucs.append(auc_of_roc(y[test_index], yhat, levels=range(0,7)))
    return costs, aucs
###Output
_____no_output_____
###Markdown
Grid SearchBecause we're performing a grid search on a TensorFlow estimator, we use our own grid search function, instead of the one provided in sklearn, for the sake of simplicity. Our grid search will search for optimal values of *steps* and *learning_rate*. During the grid search, a subsample of 5000 items will be used, and only 3 folds of cross validation will occur. This is done to decrease computation time, which is otherwise many hours per grid search.Note that the grid search function itself is not parallelized. This is because the underlying TensorFlow modelling is all parallelized, so maximal CPU usage is already being achieved.
###Code
def grid_search(model_fn, steps_list, learning_rate_list):
    """Evaluate every (steps, learning_rate) pair and return the cost grid
    along with the best pair found."""
    costs = []
    for steps in steps_list:
        step_costs = []
        for rate in learning_rate_list:
            # 3-fold CV on a 5000-item subsample keeps each grid cell tractable
            step_costs.append(np.mean(get_scores_for_model(model_fn, X[0:5000, :], y[0:5000], steps, rate, 3)[0]))
        print(step_costs)  # progress logging for the long-running search
        print(costs)
        costs.append(step_costs)
    # np.argmin flattens the grid row-major; recover the (row, column) indices
    min_idx = np.argmin(costs)
    return costs, steps_list[min_idx//len(costs[0])], learning_rate_list[min_idx%len(costs[0])]
import seaborn as sns
def grid_search_heatmap(costs, steps, rates):
ax = sns.heatmap(np.array(costs))
ax.set(xlabel='Learning Rate', ylabel='Step Count')
ax.set_xticklabels(rates[::-1])
ax.set_yticklabels(steps[::-1])
ax.set_title("Grid Search Heatmap")
###Output
_____no_output_____
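###Markdown
Since the argmin-to-grid arithmetic in grid_search is easy to misread, here is a minimal sketch of how the flattened index maps back to a (steps, learning rate) cell on a toy 2x3 cost grid.
###Code
toy_costs = [[9, 8, 7],
             [6, 5, 4]]            # minimum sits at row 1, column 2
idx = np.argmin(toy_costs)         # argmin flattens row-major, so idx == 5
print(idx // len(toy_costs[0]), idx % len(toy_costs[0]))  # -> 1 2
###Output
_____no_output_____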
###Markdown
First Deep Learning ArchitectureThe first deep learning architecture will be adapted from a model designed by PayPal that is used for anomaly detection. Because the vast majority of football plays are either runs or passes, and there are only a few anomalous plays, like onside kicks, etc., it stands to reason that an anomaly detection architecture would perform well for this classification task.The architecture is quite simple: it consists of a set of 6 fully connected layers of 700 neurons, followed by a hyperbolic tangent activation function and then a single fully connected layer for output. We will adapt this model slightly to allow for the embedding of the team attributes. We will split the data into embedding and non-embedding data, run each subset of data through 6 fully connected layers of 700 neurons, and then combine their output as the input into a single final layer, used for classification. A simple drawing of the architecture is shown below.The original talk about this architecture can be found here. Defining the Model
###Code
def deep_model_1(X, y):
#Embeddings layer
teamembeddings = layers.stack(X[:,11:75], layers.fully_connected, [700 for _ in range(6)])
teamembeddings = tf.nn.tanh(teamembeddings)
#Non-embeddings features
otherfeatures = X[:,0:10]
otherfeatures = layers.stack(otherfeatures, layers.fully_connected, [700 for _ in range(6)])
tensors = tf.concat(1, [teamembeddings, otherfeatures])
tensors = tf.nn.tanh(tensors)
pred,loss = learn.models.logistic_regression(tensors, y)
return pred, loss
###Output
_____no_output_____
###Markdown
Grid Searching on the ModelA grid search is performed on the model to find the approximately optimal step count and learning rate for the TensorFlowEstimator. Note that this particular grid search takes about 14 hours to run, so it should not be re-run.
###Code
costs, optimal_steps, optimal_rate = grid_search(deep_model_1, [250,500,1000,1500,2000], [.05, .01, .005, .001, .0005])
print((optimal_steps, optimal_rate))
###Output
(500, 0.01)
###Markdown
Although the grid search returned the optimal step count and rate, it is meaningful to visualize the grid that was generated, to get an idea for how much better these particular hyperparameters are than the other possible combinations in the grid.
###Code
grid_search_heatmap(costs, [250,500,1000,1500,2000], [.05, .01, .005, .001, .0005])
###Output
_____no_output_____
###Markdown
The above grid search shows that, while there is no obvious pattern with respect to performance, the major diagonal generally has the lowest costs. Interestingly, the lower-left tile has a significantly higher cost than all of the others. This is likely due to overfitting, as it comes from the section of the grid with the highest possible step count. The optimal parameter pair, 500 steps and a learning rate of 0.01, is only marginally better than the second-best option, which falls at 1000 steps and a learning rate of 0.005.
Calculating Costs and Area under ROC Curve Scores for the Model
With the optimal step count and learning rate, the costs of the model built with the given step count and rate are computed.
###Code
costs_model_1, auc_roc_model_1 = get_scores_for_model(deep_model_1, X, y, optimal_steps, optimal_rate)
print(costs_model_1)
print(auc_roc_model_1)
###Output
[1843.45, 2005.0, 2017.0, 1971.05, 2069.25, 2021.75, 1942.8499999999999, 2048.25, 2022.0, 2042.0999999999999]
[0.51052936522990444, 0.50011988848000888, 0.49999999999999994, 0.50082942390789931, 0.49999999999999994, 0.49999999999999994, 0.50244062722123584, 0.49999999999999994, 0.50046048738783466, 0.49987541305436189]
###Markdown
The costs and auc scores computed above are hard-coded below for later use, so that they don't need to be computed again.
###Code
costs_model_1 = [1843.45, 2005.0, 2017.0, 1971.05, 2069.25, 2021.75, 1942.8499999999999, 2048.25, 2022.0, 2042.0999999999999]
auc_roc_model_1 = [0.51052936522990444, 0.50011988848000888, 0.49999999999999994, 0.50082942390789931, 0.49999999999999994, 0.49999999999999994, 0.50244062722123584, 0.49999999999999994, 0.50046048738783466, 0.49987541305436189]
###Output
_____no_output_____
###Markdown
Second Deep Learning Architecture
The second deep learning architecture is adapted from a model designed by O'Shea Research for radio modulation recognition. Because the majority of the football data is ordinal numerical data, an architecture that uses similar data for a classification task is a reasonable candidate for the football play classification task. The architecture consists of two convolutional ReLU layers, followed by a dense ReLU layer and a dense softmax layer, ending in a single fully connected layer for output. We adapt this model slightly to allow for the embedding of the team attributes: we split the data into embedding and non-embedding data, run each subset through the convolutional layers, combine their outputs and run them through the dense layers, and feed the result into a logistic regression that serves as the single final classification layer. A simple drawing of the architecture is shown below. The original discussion about this architecture can be found here.
Defining the Model
###Code
def deep_model_2(X, y):
#Embeddings layer
teamembeddings = layers.stack(X[:,11:75], layers.fully_connected, [200,1,3], activation_fn=tf.nn.relu)
teamembeddings = layers.stack(teamembeddings, layers.fully_connected, [50,2,3], activation_fn=tf.nn.relu)
#Non-embeddings features
otherfeatures = X[:,0:10]
otherfeatures = layers.stack(otherfeatures, layers.fully_connected, [50,1,3], activation_fn=tf.nn.relu)
otherfeatures = layers.stack(otherfeatures, layers.fully_connected, [12,2,3], activation_fn=tf.nn.relu)
#combine the team and play data
tensors = tf.concat(1, [teamembeddings, otherfeatures])
tensors = layers.stack(tensors, layers.fully_connected, [100], activation_fn=tf.nn.relu)
tensors = layers.stack(tensors, layers.fully_connected, [7], activation_fn=tf.nn.softmax)
""" # This section is doing all layers before combining team and play data
#Embeddings layer
teamembeddings = layers.stack(X[:,11:75], layers.fully_connected, [200,1,3], activation_fn=tf.nn.relu)
teamembeddings = layers.stack(teamembeddings, layers.fully_connected, [50,2,3], activation_fn=tf.nn.relu)
teamembeddings = layers.stack(teamembeddings, layers.fully_connected, [100], activation_fn=tf.nn.relu)
teamembeddings = layers.stack(teamembeddings, layers.fully_connected, [7], activation_fn=tf.nn.softmax)
#Non-embeddings features
otherfeatures = X[:,0:10]
otherfeatures = layers.stack(otherfeatures, layers.fully_connected, [50,1,3], activation_fn=tf.nn.relu)
otherfeatures = layers.stack(otherfeatures, layers.fully_connected, [12,2,3], activation_fn=tf.nn.relu)
otherfeatures = layers.stack(otherfeatures, layers.fully_connected, [100], activation_fn=tf.nn.relu)
otherfeatures = layers.stack(otherfeatures, layers.fully_connected, [7], activation_fn=tf.nn.softmax)
#combine the team and play data
tensors = tf.concat(1, [teamembeddings, otherfeatures])
"""
pred, loss = learn.models.logistic_regression(tensors, y)
return pred, loss
###Output
_____no_output_____
###Markdown
Grid Searching on the ModelA grid search is performed on the model to find the approximately optimal step count and learning rate for the TensorFlowEstimator.
###Code
costs, optimal_steps, optimal_rate = grid_search(deep_model_2, [250,500,1000,1500,2000], [.05, .01, .005, .001, .0005])
print((optimal_steps, optimal_rate))
###Output
(1500, 0.005)
###Markdown
Although the grid search, once again, gave us the optimal step count and rate, it is worthwhile to visualize the grid that was generated, to get an idea for how much better these particular hyperparameters are than the other possible combinations in the grid.
###Code
grid_search_heatmap(costs, [250,500,1000,1500,2000], [.05, .01, .005, .001, .0005])
###Output
_____no_output_____
###Markdown
This heatmap shows a stronger pattern than the map for model 1. It shows the optimum to be centered around 1500 steps and a learning rate of 0.005, with a fairly constant cost increase as the two parameters grow larger or smaller than those values. There is, however, still an abnormality at very high step count and very low learning rate, once again presumably due to overfitting. This might be fixable by changing the batch size for the TensorFlow estimator, but this grid search already took about 10 hours to run, so adding a third parameter is not feasible with the compute power available to us.
Calculating Costs and Area under ROC Curve Scores for the Model
With the optimal step count and learning rate, the costs of the model built with the given step count and rate are computed.
###Code
costs_model_2, auc_roc_model_2 = get_scores_for_model(deep_model_2, X, y, optimal_steps, optimal_rate)
print(costs_model_2)
print(auc_roc_model_2)
###Output
[2042.75, 2007.5, 2017.0, 1978.0, 2069.25, 2021.75, 1974.0, 2048.25, 2027.0, 2041.25]
[0.5085592, 0.4820538, 0.48592543, 0.48381174, 0.47515301, 0.475777, 0.47903248, 0.48797259, 0.47638676, 0.4891856]
###Markdown
The costs and auc scores computed above are hard-coded below for later use, so that they don't need to be computed again.
###Code
costs_model_2 = [2042.75, 2007.5, 2017.0, 1978.0, 2069.25, 2021.75, 1974.0, 2048.25, 2027.0, 2041.25]
auc_roc_model_2 = [ 0.5085592, 0.4820538, 0.48592543, 0.48381174, 0.47515301, 0.475777, 0.47903248, 0.48797259, 0.47638676, 0.4891856]
###Output
_____no_output_____
###Markdown
Third Deep Learning Architecture
Due to the poor performance of the two architectures used in the previous two deep models, we have elected to build our own model to test on the data. This third deep learning architecture was primarily the product of trial and error. Many different configurations of layers and activation functions were tried, and in the time available, this is the setup that resulted in the lowest cost for the model. We explored a number of different architectures of varying complexity, including an LSTM network from TensorFlow, and we discovered that, generally speaking, simple networks which look like a multi-layer perceptron seem to provide the lowest cost. We tuned the sizes of the hidden layers by trial and error, and we discovered that the network shown below is the most efficient. Note that the team embeddings are not shown in the main drawing, as they are computed using a very small network prior to the main MLP. Interestingly, we found that the smallest networks were the best for the team embeddings. We believe that this is due to overfitting that may occur if the embeddings have too much weight in the overall network architecture. The architecture consists of 6 fully connected layers: the first 3 layers have 1000 nodes, followed by a 500 node layer, a 200 node layer, and another 1000 node layer. This is followed by a ReLU activation function, and the output of this network is then used for classification.
###Code
def deep_model_3(X, y):
#Embeddings layer
teamembeddings = layers.stack(X[:,11:75], layers.fully_connected, [20,4])
#teamembeddings = tf.nn.relu(teamembeddings)
#Non-embeddings features
otherfeatures = X[:,0:10]
#Concatenate the embeddings with the non-embeddings
tensors = tf.concat(1, [teamembeddings, otherfeatures])
#[500,200,100,500][1000,1000,1000,500,200,1000]
tensors = layers.stack(tensors, layers.fully_connected, [1000,1000,1000,500,200,1000])
#Relu activation function
tensors = tf.nn.relu(tensors)
pred, loss = learn.models.logistic_regression(tensors, y)
return pred, loss
###Output
_____no_output_____
###Markdown
Grid Searching on the ModelA grid search is performed on the model to find the approximately optimal step count and learning rate for the TensorFlowEstimator.
###Code
costs, optimal_steps, optimal_rate = grid_search(deep_model_3, [250,500,1000,1500,2000], [.05, .01, .005, .001, .0005])
print((optimal_steps, optimal_rate))
###Output
(2000, 0.005)
###Markdown
Although the grid search returned the optimal step count and rate, it is meaningful to visualize the grid that was generated, to get an idea for how much better these particular hyperparameters are than the other possible combinations in the grid.
###Code
grid_search_heatmap(costs, [250,500,1000,1500,2000], [.05, .01, .005, .001, .0005])
###Output
_____no_output_____
###Markdown
The grid search for model 3, interestingly, shows a significant trend toward lower costs in the lower-left part of the map (higher step count and lower learning rate). This stands to reason, and aligns much more with expectations than the unruly heatmap generated by the model 1 grid search. It also aligns with what one might expect of a simple multi-layer perceptron model, which this network architecture closely resembles. This grid search showed 2000 steps with a learning rate of 0.005 as the optimum. This pair only slightly outperforms the same learning rate with only 1000 steps, but significantly outperforms every hyper-parameter pair above the major diagonal of the heatmap.
Calculating Costs and Area under ROC Curve Scores for the Model
With the optimal step count and learning rate, the costs of the model built with the given step count and rate are computed.
###Code
costs_model_3, auc_roc_model_3 = get_scores_for_model(deep_model_3, X, y, optimal_steps, optimal_rate)
print(costs_model_3)
print(auc_roc_model_3)
###Output
[1750.0, 1722.85, 1709.0, 1710.75, 1716.25, 1742.0, 1650.0, 1714.0, 1699.5, 1684.5]
[0.62998537, 0.61758681, 0.61135905, 0.61209804, 0.6146912, 0.62633358, 0.5845431, 0.61362667, 0.60702243, 0.60023413]
###Markdown
The costs and auc scores computed above are hard-coded below for later use, so that they don't need to be computed again.
###Code
costs_model_3 = [ 1750.0 , 1722.85, 1709.0, 1710.75, 1716.25, 1742.0, 1650.0 , 1714.0, 1699.50, 1684.50]
auc_roc_model_3 = [0.62998537, 0.61758681, 0.61135905, 0.61209804, 0.6146912 , 0.62633358, 0.5845431 , 0.61362667, 0.60702243, 0.60023413]
###Output
_____no_output_____
###Markdown
Architecture Comparison
The following getDifference function returns a 95% confidence interval, as a tuple of (upper, lower) bounds, for the mean difference between two sets of per-fold scores (cost or auc_roc). We use the second of the two confidence interval tests proposed in the ICA3 reversed assignment, because the datasets cannot be assumed to be independent, so the binomial approximation to the normal distribution does not hold.
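Concretely, for per-fold differences $\Delta_i = c^{(1)}_i - c^{(2)}_i$ over $n = 10$ folds, the function below computes the interval $$\bar{\Delta} \pm \frac{z}{\sqrt{n}}\,\sigma_{\Delta}, \qquad \sigma_{\Delta} = \sqrt{\frac{\sum_i \Delta_i^2}{n-1}},$$ where $z = 2.26$ is (to two decimal places) the two-tailed critical value of the $t$ distribution with $n - 1 = 9$ degrees of freedom at 95% confidence. An interval that excludes zero indicates a statistically significant difference.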
###Code
def getDifference(cost1,cost2,z_val=2.26,size=10):
cost1 = np.asarray(cost1)
cost2 = np.asarray(cost2)
diff12 = cost1 - cost2
sigma12 = np.sqrt(np.sum(diff12*diff12) * 1/(size-1))
d12 = (np.mean(diff12) + 1/(np.sqrt(size)) * z_val * sigma12, np.mean(diff12) - 1/(np.sqrt(size)) * z_val * sigma12)
return d12
###Output
_____no_output_____
###Markdown
Cost DifferenceThe getDifference function is now used to create confidence intervals for the cost differences of the 3 possible pairs of two architectures created from the set of 3 architectures.
###Code
d_one_two = np.array(getDifference(costs_model_1, costs_model_2))
d_one_three = np.array(getDifference(costs_model_1, costs_model_3))
d_two_three = np.array(getDifference(costs_model_2, costs_model_3))
print("Average Model 1 vs Model 2 Difference:", d_one_two)
print("Average Model 1 vs Model 3 Difference:", d_one_three)
print("Average Model 2 vs Model 3 Difference:", d_two_three)
###Output
Average Model 1 vs Model 2 Difference: [ 23.69702196 -72.50702196]
Average Model 1 vs Model 3 Difference: [ 512.26558725 64.50441275]
Average Model 2 vs Model 3 Difference: [ 549.47453987 76.10546013]
###Markdown
The above 3 confidence intervals show that the first two architectures performed similarly, as the model 1 to model 2 difference confidence interval contains zero. However, the third architecture, which we created specifically because of how poorly the first two architectures performed, did significantly outperform both of the first two models, as zero does not appear in either of the confidence intervals including model 3. Therefore, it can be concluded that, with respect to cost, our "home-grown" model 3 does significantly outperform the other architectures. Area Under Multiclass ROC Curve DifferenceThe same process is now used for confidence intervals for the auc_roc metric.
###Code
d_one_two = np.array(getDifference(auc_roc_model_1, auc_roc_model_2))
d_one_three = np.array(getDifference(auc_roc_model_1, auc_roc_model_3))
d_two_three = np.array(getDifference(auc_roc_model_2, auc_roc_model_3))
print("Average Model 1 vs Model 2 Difference:", d_one_two)
print("Average Model 1 vs Model 3 Difference:", d_one_three)
print("Average Model 2 vs Model 3 Difference:", d_two_three)
###Output
Average Model 1 vs Model 2 Difference: [ 0.0309479 0.00313162]
Average Model 1 vs Model 3 Difference: [-0.02675915 -0.19388588]
Average Model 2 vs Model 3 Difference: [-0.03095869 -0.22376586]
###Markdown
The confidence intervals for area under the ROC curve provide the same insight as the confidence intervals for the cost function. The first 2 models are statistically similar with 95% confidence, but the third model statistically outperforms both of the first 2, with 95% confidence, because zero does not fall in the ROC difference confidence interval for model 3 vs the other 2 models. With the ROC curve score's confirmation, we can confidently declare that architecture 3 is better for this classification task than models 1 and 2. However, in our deployment section, we will address the possibility that even our best deep learning architecture may be outperformed by the conventional machine learning algorithms which were explored in Project 2.
Deployment
Given the performances of the three deep learning models above for classifying NFL play types, we believe that model 3 would be the best choice, as it outperformed both other models, with statistical significance, with respect to both cost and AUC_ROC score. However, the costs for the models generated, as well as the AUC_ROC scores, seemed in general to be inferior to the scores from the conventional algorithms. In our previous report, we declared random forests to be our preferred model, due to their superior performance, ease of use, and low computational cost. Therefore, for the purpose of comparison, we will compare the performance of random forests to the performance of our best deep learning architecture. Note that we (mistakenly) used an incorrect cross-validation strategy for Lab 2, so the random forest scores must be re-generated. The code below, which generates the new scores, is adapted from Lab 2 for use with our new cross validation strategy. First, we perform some setup that is needed to use the original SKLearn library for classification.
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
#Building the class weight map: weight each play type by the total cost of
#mispredicting it, i.e. the row sum of the cost matrix for that play type.
#Note this assumes value_counts ordering matches the cost-matrix ordering.
PlayTypes = df.PlayType.value_counts().index.tolist()
Costs = [sum(x) for x in cost_mat]
y = df.PlayType.values
ClassWeights = dict(zip(PlayTypes, Costs))
###Output
_____no_output_____
###Markdown
Next, we produce cost scores for the random forest classifier.
###Code
#Pipeline for cost evaluation
clf = Pipeline([('sca',StandardScaler()),
('clf',RandomForestClassifier(class_weight=ClassWeights, n_estimators=250))])
per_fold_eval_criteria = cross_val_score(estimator=clf,
X=X,
y=y,
cv=cv,
scoring=scorer,
n_jobs=-1
)
RFCosts = per_fold_eval_criteria
###Output
_____no_output_____
###Markdown
Finally, we produce AUC_ROC scores for the random forest classifier.
###Code
#Pipeline for AUC_ROC evaluation
clf = Pipeline([('sca',StandardScaler()),
('clf',RandomForestClassifier(class_weight=ClassWeights, n_estimators=250))])
per_fold_eval_criteria = cross_val_score(estimator=clf,
X=X,
y=y,
cv=cv,
scoring=auc_roc_scorer,
n_jobs=-1
)
RF_auc = per_fold_eval_criteria
###Output
_____no_output_____
###Markdown
The cost and auc scores for the random forest are saved below so that they don't need to be re-computed.
###Code
RFCosts = [ 988.05, 1007.9 , 976.75, 956.75, 971.75, 949.55, 919. , 992.5 , 985.75, 956. ]
RF_auc = [ 0.89186154, 0.84936065, 0.85641263, 0.84052327, 0.84994278, 0.84799384, 0.86985361, 0.88280675, 0.87433496, 0.85303625]
auc_diff = getDifference(auc_roc_model_3, RF_auc)
cost_diff = getDifference(costs_model_3, RFCosts)
print("Deep Learning vs Random Forests (Cost):", cost_diff)
print("Deep Learning vs Random Forests (AUC_ROC):", auc_diff)
###Output
Deep Learning vs Random Forests (Cost): (1296.8375953662744, 182.13240463372551)
Deep Learning vs Random Forests (AUC_ROC): (-0.061045371130097753, -0.43868380886990221)
###Markdown
Problem 1 A)
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

d = np.append(stats.norm.rvs(loc = 12., scale = 0.4, size = 100000), [10., 10.3, 2.1, 0., 0., 15.6, 22.3, 12.7])
fig, ax = plt.subplots(1, 1)
ax.hist(d,100, density=True)
plt.tick_params(labelsize = 24)
plt.yscale('log')
plt.xlabel('Temperature [K]', size = 24)
plt.ylabel('Probability', size = 24)
plt.show()
###Output
_____no_output_____
###Markdown
This is a Gaussian distribution of 100k values with a mean of 12 K and a width of 0.4 K, plotted on a log y-axis. There are some outlier data points at 2.1, 0., 0., 15.6, 22.3, 12.7. Which temperatures bound the range within 5$\sigma$ on either side of the mean? This question will help me determine the range of temperatures to keep in my data. The two-sided 5$\sigma$ tail probability is $$p_{5\sigma} = \frac{1}{2}\,\text{erfc}\bigg(\frac{5}{\sqrt{2}}\bigg) \approx 2.9\times10^{-7},$$ and the temperature cutoffs $X$ satisfy $$p_{5\sigma} = \frac{1}{2}\,\text{erfc}\bigg(\frac{|X-\mu|}{\sigma\sqrt{2}}\bigg),$$ where $X$ is temperature, $\mu = 12$ K, and $\sigma = 0.4$ K.
###Code
prob5sigma = 1/3.5e6  # two-sided 5-sigma tail probability, ~2.86e-7
print(prob5sigma)
t1 = stats.norm.ppf(prob5sigma, loc = 12., scale = 0.4)
t2 = stats.norm.isf(prob5sigma, loc = 12., scale = 0.4)
print(t1, t2)
###Output
2.857142857142857e-07
9.999747426038471 14.000252573961529
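###Markdown
As a quick cross-check (added here as a sketch), the exact two-sided 5$\sigma$ tail probability from the complementary error function agrees with the 1/3.5e6 approximation used above to within about 0.3%.
###Code
from scipy.special import erfc
# Exact two-sided 5-sigma tail probability vs. the 1/3.5e6 approximation
print(erfc(5 / np.sqrt(2)) / 2, prob5sigma)
###Output
_____no_output_____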
###Markdown
This gives me a temperature range of 10-14 K. So everything outside of that range, I can remove.
###Code
print(len(d))
mask = np.logical_or(d< 10, d>14)
bad_data = d[mask]
print(bad_data)
###Output
100008
[14.05889266 2.1 0. 0. 15.6 22.3 ]
###Markdown
|                | True T | Bad T |
|----------------|--------|-------|
| Your Test Good | 100000 | 3     |
| Your Test Bad  | 0      | 5     |

B) It depends on where I set the probability that is too small to arise from regular noise. I chose that to be 5$\sigma$, but it could have been any $\sigma$. It isn't a predictable quantity because it depends on how many points lie outside of the data and on what is chosen as the maximum $\sigma$. C) Yes, there is a chance of bad data getting in. If I chose a smaller $\sigma$, there would be less chance of commission, but that could also remove valid data points. For example, if I choose 2$\sigma$, the temperature range changes to 11.2-12.8 K. This would get rid of all but 1 false temperature point, but it also gets rid of over 4500 real data points.
###Code
prob2sigma = stats.norm.sf(2)
t1 = stats.norm.ppf(prob2sigma, loc = 12., scale = 0.4)
t2 = stats.norm.isf(prob2sigma, loc = 12., scale = 0.4)
print(t1, t2)
mask = np.logical_or(d< 11.2, d>12.8)
bad_data = d[mask]
print(len(bad_data))
###Output
11.2 12.8
4526
###Markdown
Problem 2
###Code
a = np.vstack((stats.norm.rvs( scale = 1, size = 100000), stats.norm.rvs( scale = 1, size = 100000)))
print(a.shape)
fig, ax = plt.subplots(1, 1)
h = ax.hist2d(a[0,:],a[1,:],bins=100, density=False);
ax.set_aspect('equal', 'box')
plt.xlim([-3 , 3])
plt.ylim([-3 , 3])
plt.title("2D Histogram of positional uncertainty", fontsize = 24)
plt.ylabel("$\Delta$y arcseconds", fontsize = 18)
plt.xlabel("$\Delta$x arcseconds", fontsize = 18)
plt.colorbar(h[3], ax=ax)
plt.show()
###Output
(2, 100000)
###Markdown
1. If the background distribution is a 2D Gaussian that can be written as $X\hat{i} + Y\hat{j}$ and $R = \sqrt{X^2 + Y^2}$, where X and Y are standard normal distributions, then R is a [Rayleigh Distribution](https://en.wikipedia.org/wiki/Rayleigh_distribution#Related_distributions) with parameter $\sigma$. Since the change in arcseconds is given by the distance formula $d = \sqrt{\Delta x^2 + \Delta y^2}$, this problem can be modeled as a Rayleigh distribution. What distance in arcseconds corresponds to 5$\sigma$? To be more signal-like, the distance has to be greater, so we integrate inward from the right starting at infinity.
2. $$p_{5\sigma} = \int_{d}^{\infty} \frac{x}{\sigma^2}\, e^{-x^2/(2\sigma^2)} \, dx$$
3. The 5$\sigma$ threshold is a distance of about 5.5 arcseconds or greater.
###Code
dist5sigma = stats.rayleigh.isf(prob5sigma)
print(dist5sigma)
###Output
5.489676406940512
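###Markdown
As a sanity check on the closed-form cutoff, a quick simulation (under the same standard-normal assumptions as the histogram above) can estimate how often background scatter exceeds 5.49 arcseconds.
###Code
# Monte Carlo check: the count of standard-normal 2D offsets beyond the
# 5-sigma cutoff should be close to n * prob5sigma, i.e. roughly 3 hits
# in 10 million draws.
n = 10**7
r = np.hypot(stats.norm.rvs(size=n), stats.norm.rvs(size=n))
print((r > dist5sigma).sum(), n * prob5sigma)
###Output
_____no_output_____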
###Markdown
Problem 3
1. Assuming a background signal of 1 cosmic ray per minute (Poisson distribution), what is the significance of seeing 6800 cosmic rays total over 8 hours a night for 15 days? This would be a convolution of Poisson distributions, which is a Poisson distribution whose mean is the sum of the means, so it is the same as having a mean of 7200 cosmic rays ($15 \text{ days} \times 8 \text{ hours} \times 60 \text{ minutes}$). We want to integrate from $-\infty$ to the observed count, since the moon blocks out rays, making the total number of detected rays smaller.
2. $$P(k; \lambda) = \frac{\lambda^{k} e^{-\lambda}}{k!}, \quad \lambda = 7200, \qquad p = \sum_{k=0}^{n} P(k; \lambda),$$ where $n = 6800$ is the observed count and the significance is the normal quantile of $p$.
3. The significance is about $-4.75\sigma$, i.e. a 4.75$\sigma$ deficit.
###Code
minutes = 7200
rays_seen = 6800
probability = stats.poisson.cdf(rays_seen, minutes)
sigma = stats.norm.ppf(probability)
print(sigma)
###Output
-4.750747965777188
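###Markdown
The claim that a sum of independent Poisson counts is itself Poisson with the summed mean can also be checked numerically; the minimal sketch below simulates 7200 rate-1 minutes and compares against the expected Poisson(7200) moments.
###Code
# 7200 independent Poisson(1) counts summed should behave like a single
# Poisson(7200) variable: mean ~7200, standard deviation ~sqrt(7200) ~ 84.9.
summed = stats.poisson.rvs(1, size=(1000, 7200)).sum(axis=1)
print(summed.mean(), summed.std(), np.sqrt(7200))
###Output
_____no_output_____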
###Markdown
Assignment 3: Data Classification
Names: 1. Amr Hendy (46) 2. Abdelrhman Yasser (37)
Introduction to MAGIC Gamma Telescope DataSet
The data are MC generated to simulate registration of high energy gamma particles in a ground-based atmospheric Cherenkov gamma telescope using the imaging technique. A Cherenkov gamma telescope observes high energy gamma rays, taking advantage of the radiation emitted by charged particles produced inside the electromagnetic showers initiated by the gammas and developing in the atmosphere. This Cherenkov radiation (of visible to UV wavelengths) leaks through the atmosphere and gets recorded in the detector, allowing reconstruction of the shower parameters. The available information consists of pulses left by the incoming Cherenkov photons on the photomultiplier tubes, arranged in a plane, the camera. Depending on the energy of the primary gamma, a total of a few hundred to some 10000 Cherenkov photons get collected, in patterns (called the shower image), allowing statistical discrimination of those caused by primary gammas (signal) from the images of hadronic showers initiated by cosmic rays in the upper atmosphere (background). Typically, the image of a shower after some pre-processing is an elongated cluster. Its long axis is oriented towards the camera center if the shower axis is parallel to the telescope's optical axis, i.e. if the telescope axis is directed towards a point source. A principal component analysis is performed in the camera plane, which results in a correlation axis and defines an ellipse. If the depositions were distributed as a bivariate Gaussian, this would be an equidensity ellipse. The characteristic parameters of this ellipse (often called Hillas parameters) are among the image parameters that can be used for discrimination. The energy depositions are typically asymmetric along the major axis, and this asymmetry can also be used in discrimination. There are, in addition, further discriminating characteristics, like the extent of the cluster in the image plane, or the total sum of depositions. The data set was generated by a Monte Carlo program, Corsika, described in D. Heck et al., CORSIKA, A Monte Carlo code to simulate extensive air showers, Forschungszentrum Karlsruhe FZKA 6019 (1998). The program was run with parameters allowing observation of events with energies down to below 50 GeV.
Exploring Dataset
We will begin by exploring the MAGIC Gamma Telescope dataset:
- Total number of instances = 19020; we will divide them into 70% (13314 samples) as the training set and 30% (5706 samples) as the testing set.
- Classes are gamma (g) and hadron (h). The dataset is distributed between the two classes as follows:
  - 12332 instances for the gamma class
  - 6688 instances for the hadron class
- Now we will explore the attributes of the dataset:
  1. fLength: continuous, describes the major axis of ellipse [mm]
  2. fWidth: continuous, describes minor axis of ellipse [mm]
  3. fSize: continuous, describes 10-log of sum of content of all pixels [in phot]
  4. fConc: continuous, describes ratio of sum of two highest pixels over fSize [ratio]
  5. fConc1: continuous, describes ratio of highest pixel over fSize [ratio]
  6. fAsym: continuous, describes distance from highest pixel to center, projected onto major axis [mm]
  7. fM3Long: continuous, describes 3rd root of third moment along major axis [mm]
  8. fM3Trans: continuous, describes 3rd root of third moment along minor axis [mm]
  9. fAlpha: continuous, describes angle of major axis with vector to origin [deg]
  10. fDist: continuous, describes distance from origin to center of ellipse [mm]
  11. class: g or h, describes gamma (signal) or hadron (background)
Imports
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from time import time
import matplotlib.patches as mpatches
###Output
_____no_output_____
###Markdown
Loading dataset
###Code
def load_dataset():
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/magic/magic04.data'
attribute_names = ['fLength', 'fWidth', 'fSize', 'fConc', 'fConc1', 'fAsym', 'fM3Long', 'fM3Trans', 'fAlpha', 'fDist', 'class']
df = pd.read_csv(url, index_col=False, names=attribute_names)
return df
dataset_df = load_dataset()
print("DataSet contains {} samples".format(dataset_df.shape[0]))
dataset_df.head()
###Output
DataSet contains 19020 samples
###Markdown
Dataset Summary
###Code
dataset_df.describe()
###Output
_____no_output_____
###Markdown
Exploring the Balance of class labels
###Code
classes_cnt = dataset_df['class'].value_counts()
print(classes_cnt)
###Output
g 12332
h 6688
Name: class, dtype: int64
###Markdown
**The dataset is imbalanced, so we will randomly put aside the extra readings for the gamma “g” class to make both classes equal in size, which balances the dataset.** Dataset Balancing
###Code
random_state = 42
random_samples = dataset_df[dataset_df['class'] == 'g'].sample(n=classes_cnt['g'] - classes_cnt['h'], random_state=random_state)
display(random_samples)
balanced_df = dataset_df.drop(random_samples.index);
balanced_df['class'].value_counts()
###Output
_____no_output_____
###Markdown
Separating Features from Class Label
###Code
X = balanced_df.drop('class', axis=1)
Y = balanced_df['class']
###Output
_____no_output_____
###Markdown
Dataset Visualization **1) Using BoxPlots**
###Code
X.iloc[:,:].boxplot(grid=False, fontsize=10, rot=60, figsize=(10,20))
def plot_data(x_data, y_data, x_title, y_title, title, xticks):
fig = plt.figure(figsize=(15,8))
plt.plot(x_data, y_data, 'bo')
fig.suptitle(title, fontsize=16)
plt.xlabel(x_title)
plt.ylabel(y_title)
plt.xticks(xticks)
plt.show()
return
def plot_pretty_data(X, Y):
classes_values = Y.unique()
feature_names = list(X.columns)
all_x = []
all_y = []
# plotting each class alone
for class_val in classes_values:
# making feature points
x = []
y = []
for i in range(0, len(feature_names)):
feature_values = list(X.loc[Y[Y == class_val].index].iloc[:, i])
y = y + feature_values
x = x + [i] * len(feature_values)
all_x = all_x + x
all_y = all_y + y
# plotting all classes together
plot_data(all_x, all_y, 'Feature Number', 'Feature Value', 'All Classes', [i for i in range(0, len(feature_names))])
plot_pretty_data(X, Y)
###Output
_____no_output_____
###Markdown
We can see from the previous figures that features have different ranges, which may cause problems for classifiers that depend on the distance between samples. We will discuss scaling and normalization in the preprocessing part later. **2) Using Histograms**
###Code
def plot_histogram(X, Y, bins=15, rwidth=0.5):
colors = ['red', 'green', 'blue', 'olive', 'yellow', 'gray', 'black', 'gold', 'skyblue', 'teal']
classes = Y.unique()
for class_val in classes:
data = X.loc[Y[Y == class_val].index]
plt.figure(figsize=(20,10))
plt.title("Class " + class_val )
plt.xlabel("Feature Value")
plt.ylabel("Value Frequency")
        plt.hist(np.array(data), bins=bins, color=np.array(colors), label=list(X.columns), rwidth=rwidth)  # one color/label per feature column
plt.legend()
plt.show()
plot_histogram(X, Y)
###Output
_____no_output_____
###Markdown
**3) Using Correlation Matrix**
###Code
def visualize_coeff_matrix(mat):
fig = plt.figure(figsize = (12,12))
plt.imshow(mat, cmap='Reds')
plt.colorbar()
plt.xticks([i for i in range(0,mat.shape[0])])
plt.yticks([i for i in range(0,mat.shape[0])])
display(X.corr())
visualize_coeff_matrix(X.corr())
###Output
_____no_output_____
###Markdown
From the correlation Matrix we can see that the first 3 features (0,1,2) are highly dependant/correlated, also the features (3,4) are highly correlated. That will give us hints to use some methods of dimensionality reduction. we will discuss that part also in the preprocessing part. Class Encoderwe need to convert the class labels into numerical labels to be used later in the classification algroithms. We will use SciKit learn labelencoder class to help us perform this step.
###Code
from sklearn.preprocessing import LabelEncoder
class_encoder = LabelEncoder()
Y = class_encoder.fit_transform(Y)
print(Y)
###Output
[0 0 0 ... 1 1 1]
###Markdown
Dataset Split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=random_state, shuffle=True)
print('Training set has {} samples'.format(len(X_train)))
print('Testing set has {} samples'.format(len(X_test)))
###Output
Training set has 9363 samples
Testing set has 4013 samples
###Markdown
Preprocessing 1) Feature Projection Using PCA We have already seen from the correlation matrix that there are some highly correlated features, so we will try to reduce the dimensionality as much as we can with no loss in the covered variance of the data.
###Code
from sklearn.decomposition import PCA
def plot_pca(X):
features_number = X.shape[1]
pca = PCA(n_components=features_number, random_state=random_state)
pca.fit(X)
plt.figure(figsize=(20,8))
plt.title('PCA Components variance ratio')
plt.xlabel('PCA Component')
plt.ylabel('Variance Ratio')
plt.xticks([i for i in range(1, features_number + 1)])
plt.plot([i for i in range(1, features_number + 1)], pca.explained_variance_ratio_, color='red', marker='o', linestyle='dashed', linewidth=2, markersize=12)
plt.show()
plt.figure(figsize=(20,8))
plt.title('Relation Between Number of PCA Components taken and Covered Variance Ratio')
plt.xlabel('Number of Taken PCA Components')
plt.ylabel('Covered Variance Ratio')
plt.xticks([i for i in range(1, features_number + 1)])
plt.plot([i for i in range(1, features_number + 1)], pca.explained_variance_ratio_.cumsum(), color='red', marker='o', linestyle='dashed', linewidth=2, markersize=12)
plt.show()
plot_pca(X_train)
###Output
_____no_output_____
###Markdown
**We will choose to reduce the dimensionality to 7 dimensions which guarantees covering all the variance in the original data.**
###Code
pca = PCA(7, random_state=random_state)
pca.fit(X_train)
X_train_reduced = pd.DataFrame(pca.transform(X_train))
print("Training set after applying PCA Dimensionality Reduction")
display(X_train_reduced)
# Transform the test set with the PCA already fitted on the training set;
# refitting on the test set would leak test information into the projection.
X_test_reduced = pd.DataFrame(pca.transform(X_test))
print("Testing set after applying PCA Dimensionality Reduction")
display(X_test_reduced)
###Output
Testing set after applying PCA Dimensionality Reduction
###Markdown
2) Z Score Normalization We have seen from the histograms that the data is nearly normally distributed for some features, so we will try to normalize all the features to make them more useful later in the classification algorithms.
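For reference, z-score normalization maps each feature value $x$ to $z = (x - \mu)/\sigma$, using that feature's own sample mean $\mu$ and standard deviation $\sigma$, so every column ends up with zero mean and unit variance.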
###Code
from scipy.stats import zscore
X_train_reduced_normalized = X_train_reduced.apply(zscore)
display(X_train_reduced_normalized)
X_test_reduced_normalized = X_test_reduced.apply(zscore)
display(X_test_reduced_normalized)
###Output
_____no_output_____
###Markdown
3) Min Max Scaler We have also seen from the boxplot that many features have different ranges, so scaling all the features to the same range, such as [0,1], is often useful, especially in algorithms where the distance between sample points matters, such as KNN and SVM.
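For reference, min-max scaling maps each feature value $x$ to $x' = (x - x_{\min})/(x_{\max} - x_{\min})$, which places every column in $[0, 1]$ while preserving the relative spacing of values.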
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0,1))
X_train_scaled = pd.DataFrame(data=scaler.fit_transform(X_train), columns=X_train.columns)
# Reuse the scaler fitted on the training data; refitting on the test set
# would leak the test set's min/max into the transformation.
X_test_scaled = pd.DataFrame(data=scaler.transform(X_test), columns=X_test.columns)
###Output
_____no_output_____
###Markdown
Classification We will use several classifiers for our classification problem and evaluate them. The following function will be used to get the results of any classifier easily.
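For reference, the F-score reported by this function is the weighted harmonic mean of precision $P$ and recall $R$, $$F_\beta = (1 + \beta^2)\,\frac{P \cdot R}{\beta^2 P + R},$$ and the choice $\beta = 0.5$ used below weights precision more heavily than recall.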
###Code
from sklearn.metrics import accuracy_score, fbeta_score
def train_predict(learner, X_train, y_train, X_test, y_test):
    '''
    inputs:
       - learner: the learning algorithm to be trained and predicted on
       - X_train: features training set
       - y_train: class labels for the training set
       - X_test: features testing set
       - y_test: class labels for the testing set
    '''
results = {}
start = time() # Get start time
learner.fit(X_train, y_train)
end = time() # Get end time
results['train_time'] = end - start
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
results['pred_time'] = end - start
results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
results['acc_test'] = accuracy_score(y_test, predictions_test)
results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=0.5)
results['f_test'] = fbeta_score(y_test, predictions_test, beta=0.5)
print("{} trained on {} samples, and tests predicted with accuracy {} and fscore {}".format(learner.__class__.__name__, len(X_train), results['acc_test'], results['f_test']))
# Return the results
return results
###Output
_____no_output_____
###Markdown
This function will be used to compare the different classifiers' results.
###Code
def evaluate(results, title):
"""
Visualization code to display results of various learners.
inputs:
- learners: a list of supervised learners
- stats: a list of dictionaries of the statistic results from 'train_predict()'
- accuracy: The score for the naive predictor
- f1: The score for the naive predictor
"""
# Create figure
fig, ax = plt.subplots(2, 3, figsize = (15,10))
# Constants
bar_width = 0.3
colors = ['#A00000','#00A0A0','#00AFFD','#00AB0A','#C0A0AA', '#ADA000']
# classifier
for k, learner in enumerate(results.keys()):
# metric
for j, metric in enumerate(['train_time', 'acc_train', 'f_train', 'pred_time', 'acc_test', 'f_test']):
            # Draw one bar per learner for each metric panel
ax[int(j/3), j%3].bar(k*bar_width, results[learner][metric], width=bar_width, color=colors[k])
ax[int(j/3), j%3].set_xticks([0.45, 1.45, 2.45])
ax[int(j/3), j%3].set_xlabel("Classiifer Algorithm")
ax[int(j/3), j%3].set_xlim((-0.1, 3.0))
# Add unique y-labels
ax[0, 0].set_ylabel("Time (in seconds)")
ax[0, 1].set_ylabel("Accuracy Score")
ax[0, 2].set_ylabel("F-score")
ax[1, 0].set_ylabel("Time (in seconds)")
ax[1, 1].set_ylabel("Accuracy Score")
ax[1, 2].set_ylabel("F-score")
# Add titles
ax[0, 0].set_title("Model Training")
ax[0, 1].set_title("Accuracy Score on Training Subset")
ax[0, 2].set_title("F-score on Training Subset")
ax[1, 0].set_title("Model Predicting")
ax[1, 1].set_title("Accuracy Score on Testing Set")
ax[1, 2].set_title("F-score on Testing Set")
# Set y-limits for score panels
ax[0, 1].set_ylim((0, 1))
ax[0, 2].set_ylim((0, 1))
ax[1, 1].set_ylim((0, 1))
ax[1, 2].set_ylim((0, 1))
# Create patches for the legend
patches = []
for i, learner in enumerate(results.keys()):
patches.append(mpatches.Patch(color = colors[i], label = learner))
plt.legend(handles = patches, bbox_to_anchor=(-.80, 2.53), \
loc = 'upper center', borderaxespad = 0., ncol = 3, fontsize = 'x-large')
# Aesthetics
plt.suptitle(title, fontsize = 16, y = 1.02)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Here are our results which will be compared by plotting.
###Code
results = {}
results_reduced = {}
results_reduced_normalized = {}
###Output
_____no_output_____
###Markdown
1) Using Decision Tree
###Code
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=random_state)
print('Before Preprocessing')
results[clf.__class__.__name__] = train_predict(clf, X_train, y_train, X_test, y_test)
print('After Applying PCA')
results_reduced[clf.__class__.__name__] = train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test)
print('After Applying PCA and Z Normalization')
results_reduced_normalized[clf.__class__.__name__] = train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test)
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
DecisionTreeClassifier trained on 9363 samples, and tests predicted with accuracy 0.7881883877398455 and fscore 0.7868455563401594
After Applying PCA
DecisionTreeClassifier trained on 9363 samples, and tests predicted with accuracy 0.7353600797408423 and fscore 0.7322551662174304
After Applying PCA and Z Normalization
DecisionTreeClassifier trained on 9363 samples, and tests predicted with accuracy 0.7331173685522053 and fscore 0.7296329453894359
After Scaling Using MinMaxScaler
DecisionTreeClassifier trained on 9363 samples, and tests predicted with accuracy 0.7201594816845253 and fscore 0.7088296808220425
###Markdown
2) Using Naïve Bayes
###Code
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
print('Before Preprocessing')
results[clf.__class__.__name__] = train_predict(clf, X_train, y_train, X_test, y_test)
print('After Applying PCA')
results_reduced[clf.__class__.__name__] = train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test)
print('After Applying PCA and Z Normalization')
results_reduced_normalized[clf.__class__.__name__] = train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test)
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
GaussianNB trained on 9363 samples, and tests predicted with accuracy 0.6568651881385497 and fscore 0.6711733462095607
After Applying PCA
GaussianNB trained on 9363 samples, and tests predicted with accuracy 0.6929977572888114 and fscore 0.7162941093771966
After Applying PCA and Z Normalization
GaussianNB trained on 9363 samples, and tests predicted with accuracy 0.6927485671567406 and fscore 0.7153835405896325
After Scaling Using MinMaxScaler
GaussianNB trained on 9363 samples, and tests predicted with accuracy 0.6411662098180912 and fscore 0.6408995944451271
###Markdown
3) Support Vector Machines (SVM)
###Code
from sklearn.svm import SVC
clf = SVC(random_state=random_state)
print('Before Preprocessing')
results[clf.__class__.__name__] = train_predict(clf, X_train, y_train, X_test, y_test)
print('After Applying PCA')
results_reduced[clf.__class__.__name__] = train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test)
print('After Applying PCA and Z Normalization')
results_reduced_normalized[clf.__class__.__name__] = train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test)
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
SVC trained on 9363 samples, and tests predicted with accuracy 0.5664091701968602 and fscore 0.5869551891129774
After Applying PCA
SVC trained on 9363 samples, and tests predicted with accuracy 0.5046100174433092 and fscore 0.035629453681710214
After Applying PCA and Z Normalization
SVC trained on 9363 samples, and tests predicted with accuracy 0.7951657114378271 and fscore 0.8072945019052803
After Scaling Using MinMaxScaler
SVC trained on 9363 samples, and tests predicted with accuracy 0.7762272614004485 and fscore 0.7807744779095392
###Markdown
4) K-Nearest Neighbor (K-NN)
###Code
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3)
print('Before Preprocessing')
results[clf.__class__.__name__] = train_predict(clf, X_train, y_train, X_test, y_test)
print('After Applying PCA')
results_reduced[clf.__class__.__name__] = train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test)
print('After Applying PCA and Z Normalization')
results_reduced_normalized[clf.__class__.__name__] = train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test)
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
KNeighborsClassifier trained on 9363 samples, and tests predicted with accuracy 0.7535509593820084 and fscore 0.7616614113297815
After Applying PCA
KNeighborsClassifier trained on 9363 samples, and tests predicted with accuracy 0.7478195863443807 and fscore 0.7543405586110213
After Applying PCA and Z Normalization
KNeighborsClassifier trained on 9363 samples, and tests predicted with accuracy 0.7597807126837777 and fscore 0.7669140582983759
After Scaling Using MinMaxScaler
KNeighborsClassifier trained on 9363 samples, and tests predicted with accuracy 0.7864440568153501 and fscore 0.7927280414332523
###Markdown
5) Random Forests
###Code
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=random_state)
print('Before Preprocessing')
results[clf.__class__.__name__] = train_predict(clf, X_train, y_train, X_test, y_test)
print('After Applying PCA')
results_reduced[clf.__class__.__name__] = train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test)
print('After Applying PCA and Z Normalization')
results_reduced_normalized[clf.__class__.__name__] = train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test)
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
RandomForestClassifier trained on 9363 samples, and tests predicted with accuracy 0.7662596561176177 and fscore 0.7545139863410983
After Applying PCA
RandomForestClassifier trained on 9363 samples, and tests predicted with accuracy 0.7538001495140793 and fscore 0.7429963459196102
After Applying PCA and Z Normalization
RandomForestClassifier trained on 9363 samples, and tests predicted with accuracy 0.7547969100423623 and fscore 0.743666448536973
After Scaling Using MinMaxScaler
RandomForestClassifier trained on 9363 samples, and tests predicted with accuracy 0.7562920508347869 and fscore 0.7422623938646946
###Markdown
6) AdaBoost
###Code
from sklearn.ensemble import AdaBoostClassifier
clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10), n_estimators=100, learning_rate=0.2, random_state=6)
print('Before Preprocessing')
results[clf.__class__.__name__] = train_predict(clf, X_train, y_train, X_test, y_test)
print('After Applying PCA')
results_reduced[clf.__class__.__name__] = train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test)
print('After Applying PCA and Z Normalization')
results_reduced_normalized[clf.__class__.__name__] = train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test)
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
AdaBoostClassifier trained on 9363 samples, and tests predicted with accuracy 0.8547221530027411 and fscore 0.8661500902239675
After Applying PCA
AdaBoostClassifier trained on 9363 samples, and tests predicted with accuracy 0.7881883877398455 and fscore 0.7928040999895408
After Applying PCA and Z Normalization
AdaBoostClassifier trained on 9363 samples, and tests predicted with accuracy 0.791926239720907 and fscore 0.7959396147839668
After Scaling Using MinMaxScaler
AdaBoostClassifier trained on 9363 samples, and tests predicted with accuracy 0.7782207824570146 and fscore 0.7569511446364354
###Markdown
Comparing Performance **Let's compare the classifiers' performance on the original data, before any preprocessing.**
###Code
evaluate(results, "Performance Metrics for Different Models Before Preprocessing")
###Output
_____no_output_____
###Markdown
**Let's compare the classifiers' performance after applying PCA.**
###Code
evaluate(results_reduced, "Performance Metrics for Different Models After Applying PCA")
###Output
_____no_output_____
###Markdown
**Let's compare the classifiers' performance after applying PCA and Z normalization.**
###Code
evaluate(results_reduced_normalized, "Performance Metrics for Different Models After Applying PCA and Z Normalization")
###Output
_____no_output_____
###Markdown
More Classifiers **Let's try more classifiers to see whether we can achieve a higher score on this dataset.** 7) Gradient Boosting
###Code
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier(random_state=6)
print('Before Preprocessing')
train_predict(clf, X_train, y_train, X_test, y_test);
print('After Applying PCA')
train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test);
print('After Applying PCA and Z Normalization')
train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
GradientBoostingClassifier trained on 9363 samples, and tests predicted with accuracy 0.8457513082481933 and fscore 0.8527293601920469
After Applying PCA
GradientBoostingClassifier trained on 9363 samples, and tests predicted with accuracy 0.7889359581360578 and fscore 0.790201906323665
After Applying PCA and Z Normalization
GradientBoostingClassifier trained on 9363 samples, and tests predicted with accuracy 0.7894343384001994 and fscore 0.7899031106578277
After Scaling Using MinMaxScaler
GradientBoostingClassifier trained on 9363 samples, and tests predicted with accuracy 0.8076252180413656 and fscore 0.7993411491134579
###Markdown
8) XGBoost
###Code
from xgboost import XGBClassifier
from sklearn.metrics import fbeta_score
clf = XGBClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
predictions = [round(value) for value in y_pred]
fscore = fbeta_score(y_test, predictions, beta=1)
print("Fscore: %.5f" % (fscore))
###Output
Fscore: 0.83636
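###Markdown
`XGBClassifier` implements the scikit-learn estimator interface, so (assuming `train_predict` only relies on `fit` and `predict`, as the other cells suggest) the same before/after-preprocessing comparison can be run for it as well; a minimal sketch:
###Code
clf = XGBClassifier()
print('Before Preprocessing')
train_predict(clf, X_train, y_train, X_test, y_test);
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
_____no_output_____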
###Markdown
9) Neural Network
###Code
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(hidden_layer_sizes=(50,50,50), solver='adam', max_iter=300, shuffle=True, batch_size=50, random_state=29, learning_rate='adaptive',alpha=0.05, activation='relu')
print('Before Preprocessing')
train_predict(clf, X_train, y_train, X_test, y_test);
print('After Applying PCA')
train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test);
print('After Applying PCA and Z Normalization')
train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
MLPClassifier trained on 9363 samples, and tests predicted with accuracy 0.7846997258908547 and fscore 0.7865746937094615
After Applying PCA
MLPClassifier trained on 9363 samples, and tests predicted with accuracy 0.7849489160229255 and fscore 0.7968083943600393
After Applying PCA and Z Normalization
MLPClassifier trained on 9363 samples, and tests predicted with accuracy 0.8041365561923748 and fscore 0.8110958759624514
After Scaling Using MinMaxScaler
MLPClassifier trained on 9363 samples, and tests predicted with accuracy 0.8325442312484426 and fscore 0.8694096601073346
###Markdown
10) Quadratic Discriminant Analysis
###Code
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
clf = QuadraticDiscriminantAnalysis()
print('Before Preprocessing')
train_predict(clf, X_train, y_train, X_test, y_test);
print('After Applying PCA')
train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test);
print('After Applying PCA and Z Normalization')
train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
QuadraticDiscriminantAnalysis trained on 9363 samples, and tests predicted with accuracy 0.7261400448542238 and fscore 0.7672801043327054
After Applying PCA
QuadraticDiscriminantAnalysis trained on 9363 samples, and tests predicted with accuracy 0.7024669823075006 and fscore 0.7385930108347321
After Applying PCA and Z Normalization
QuadraticDiscriminantAnalysis trained on 9363 samples, and tests predicted with accuracy 0.7039621230999252 and fscore 0.7404591759160711
After Scaling Using MinMaxScaler
QuadraticDiscriminantAnalysis trained on 9363 samples, and tests predicted with accuracy 0.6984799401943683 and fscore 0.707896904789926
###Markdown
11) Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=random_state, n_jobs=-1)
print('Before Preprocessing')
train_predict(clf, X_train, y_train, X_test, y_test);
print('After Applying PCA')
train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test);
print('After Applying PCA and Z Normalization')
train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
LogisticRegression trained on 9363 samples, and tests predicted with accuracy 0.7615250436082731 and fscore 0.7665431445209105
After Applying PCA
###Markdown
12) Bagging Ensemble Bagging on Neural Network
###Code
from sklearn.ensemble import BaggingClassifier
clf = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(50, 50, 50), solver='adam', max_iter=300, shuffle=True,
                  batch_size=50, random_state=29, learning_rate='adaptive', alpha=0.05, activation='relu'),
    max_samples=0.9, max_features=0.8)
print('Before Preprocessing')
train_predict(clf, X_train, y_train, X_test, y_test);
print('After Applying PCA')
train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test);
print('After Applying PCA and Z Normalization')
train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
BaggingClassifier trained on 9363 samples, and tests predicted with accuracy 0.802890605532021 and fscore 0.8029456576942609
After Applying PCA
BaggingClassifier trained on 9363 samples, and tests predicted with accuracy 0.7914278594567655 and fscore 0.8020551649540292
After Applying PCA and Z Normalization
BaggingClassifier trained on 9363 samples, and tests predicted with accuracy 0.7956640917019686 and fscore 0.7976153767088086
After Scaling Using MinMaxScaler
BaggingClassifier trained on 9363 samples, and tests predicted with accuracy 0.8218290555693994 and fscore 0.8217651231127775
###Markdown
13) Extra Trees
###Code
from sklearn.ensemble import ExtraTreesClassifier
clf = ExtraTreesClassifier(n_estimators=900, random_state=6, max_depth=None, n_jobs=-1)
print('Before Preprocessing')
train_predict(clf, X_train, y_train, X_test, y_test);
print('After Applying PCA')
train_predict(clf, X_train_reduced, y_train, X_test_reduced, y_test);
print('After Applying PCA and Z Normalization')
train_predict(clf, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
print('After Scaling Using MinMaxScaler')
train_predict(clf, X_train_scaled, y_train, X_test_scaled, y_test);
###Output
Before Preprocessing
ExtraTreesClassifier trained on 9363 samples, and tests predicted with accuracy 0.8562172937951658 and fscore 0.8648903807825448
After Applying PCA
ExtraTreesClassifier trained on 9363 samples, and tests predicted with accuracy 0.7966608522302517 and fscore 0.8047016274864376
After Applying PCA and Z Normalization
ExtraTreesClassifier trained on 9363 samples, and tests predicted with accuracy 0.7969100423623224 and fscore 0.8036751504910761
After Scaling Using MinMaxScaler
ExtraTreesClassifier trained on 9363 samples, and tests predicted with accuracy 0.8116122601544978 and fscore 0.7946683885250437
###Markdown
14) Voting
###Code
from sklearn.ensemble import VotingClassifier
clf1 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10), n_estimators=100, learning_rate=0.2, random_state=6) #0.86
clf2 = RandomForestClassifier(n_estimators=200, random_state=random_state, n_jobs=-1, max_depth=20) # 0.85
clf = VotingClassifier(estimators=[('clf2', clf2), ('clf1', clf1)], voting='hard', n_jobs=-1)
train_predict(clf, X_train, y_train, X_test, y_test);
###Output
VotingClassifier trained on 9363 samples, and tests predicted with accuracy 0.8569648641913781 and fscore 0.8805837139356133
###Markdown
**The best Fscore we got is 0.88058 using a Voting classifier of AdaBoost and RandomForest**
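Since both base estimators expose `predict_proba`, a soft-voting variant (averaging predicted class probabilities instead of taking a majority vote) is also worth trying; a minimal sketch reusing the two estimators defined above:
###Code
clf_soft = VotingClassifier(estimators=[('clf2', clf2), ('clf1', clf1)], voting='soft', n_jobs=-1)
train_predict(clf_soft, X_train, y_train, X_test, y_test);
###Output
_____no_output_____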
###Code
from sklearn.ensemble import VotingClassifier
clf1 = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10), n_estimators=100, learning_rate=0.2, random_state=6) #0.86
clf2 = ExtraTreesClassifier(n_estimators=250, random_state=6, max_depth=20, n_jobs=-1) # 0.864
clf = VotingClassifier(estimators=[('clf1', clf1), ('clf2', clf2)], voting='hard', n_jobs=-1)
train_predict(clf, X_train, y_train, X_test, y_test);
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
if diff:
###Markdown
**The best Fscore we got is 0.8809 using a Voting classifier of AdaBoost and ExtraTrees** Model Parameter Tuning We will use this method to tune any model with the desired hyperparameters; it returns the model with the best hyperparameters found.
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer
def tune_model(clf, parameters, X_train, y_train, X_test, y_test):
scorer = make_scorer(fbeta_score, beta=1)
grid_obj = GridSearchCV(estimator=clf, param_grid=parameters, scoring=scorer, n_jobs= -1)
grid_fit = grid_obj.fit(X_train, y_train)
# Get the best estimator
best_clf = grid_fit.best_estimator_
# Get predictions
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
print("Untuned model")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta=1)))
print('-------------------------------')
print("Tuned Model")
print("Best accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Best F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta=1)))
print('-------------------------------')
print("Best parameters found:", grid_fit.best_params_)
print('-------------------------------')
    # note: grid_scores_ was removed in newer scikit-learn versions, which use cv_results_ instead
    display(pd.DataFrame(grid_obj.grid_scores_))
return {'old_clf' : clf.fit(X_train, y_train), 'tuned_clf' : best_clf}
###Output
_____no_output_____
###Markdown
1) Decision Tree
###Code
parameters = {'max_depth':range(5, 12), 'max_features':range(7,11), 'min_samples_split':[0.01, 0.1]}
clf = DecisionTreeClassifier(random_state=random_state)
tune_model(clf, parameters, X_train, y_train, X_test, y_test);
###Output
Untuned model
Accuracy score on testing data: 0.7882
F-score on testing data: 0.7859
-------------------------------
Tuned Model
Best accuracy score on the testing data: 0.8163
Best F-score on the testing data: 0.8095
-------------------------------
Best parameters found: {'max_depth': 11, 'max_features': 10, 'min_samples_split': 0.01}
-------------------------------
###Markdown
2) Using Naïve Bayes No parameters to be tuned. 3) Support Vector Machines (SVM)
###Code
parameters = {'C':[0.01, 0.1, 1, 10, 1000]}
clf = SVC(random_state=random_state)
tune_model(clf, parameters, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
###Output
Untuned model
Accuracy score on testing data: 0.7952
F-score on testing data: 0.7830
-------------------------------
Tuned Model
Best accuracy score on the testing data: 0.7947
Best F-score on the testing data: 0.7820
-------------------------------
Best parameters found: {'C': 10}
-------------------------------
###Markdown
4) K-Nearest Neighbor (K-NN) **I. Tuning on the original data**
###Code
parameters = {'n_neighbors':range(1,20), 'weights':['uniform', 'distance']}
clf = KNeighborsClassifier()
tune_model(clf, parameters, X_train, y_train, X_test, y_test);
###Output
Untuned model
Accuracy score on testing data: 0.7590
F-score on testing data: 0.7428
-------------------------------
Tuned Model
Best accuracy score on the testing data: 0.7705
Best F-score on the testing data: 0.7522
-------------------------------
Best parameters found: {'n_neighbors': 10, 'weights': 'distance'}
-------------------------------
###Markdown
**II. Tuning on the data after PCA and Z normalization**
###Code
parameters = {'n_neighbors':range(1,20), 'weights':['uniform', 'distance']}
clf = KNeighborsClassifier()
tune_model(clf, parameters, X_train_reduced_normalized, y_train, X_test_reduced_normalized, y_test);
###Output
Untuned model
Accuracy score on testing data: 0.7720
F-score on testing data: 0.7602
-------------------------------
Tuned Model
Best accuracy score on the testing data: 0.7770
Best F-score on the testing data: 0.7623
-------------------------------
Best parameters found: {'n_neighbors': 12, 'weights': 'distance'}
-------------------------------
###Markdown
5) Random Forests
###Code
parameters = {'n_estimators':range(100,600,100)}
clf = RandomForestClassifier(random_state=random_state, n_jobs=-1)
tune_model(clf, parameters, X_train, y_train, X_test, y_test);
###Output
Untuned model
Accuracy score on testing data: 0.8408
F-score on testing data: 0.8327
-------------------------------
Tuned Model
Best accuracy score on the testing data: 0.8560
Best F-score on the testing data: 0.8524
-------------------------------
Best parameters found: {'n_estimators': 500}
-------------------------------
###Markdown
6) AdaBoost
###Code
parameters = {'n_estimators':range(100, 500, 100)}
clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10), learning_rate=0.2, random_state=6)
tune_model(clf, parameters, X_train, y_train, X_test, y_test);
###Output
Untuned model
Accuracy score on testing data: 0.8435
F-score on testing data: 0.8386
-------------------------------
Tuned Model
Best accuracy score on the testing data: 0.8540
Best F-score on the testing data: 0.8487
-------------------------------
Best parameters found: {'n_estimators': 300}
-------------------------------
###Markdown
Best Model After Tuning We can see that the best model after tuning is AdaBoost, so we will build that model and use it for testing on our dataset to find the model's accuracy, precision, recall and F-measure.
###Code
from sklearn.metrics import classification_report, confusion_matrix
final_model = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10), n_estimators=100, learning_rate=0.2, random_state=6)
final_model.fit(X_train, y_train)
y_pred = final_model.predict(X_test)
y_true = y_test
print(classification_report(y_true, y_pred))
print(confusion_matrix(y_test, y_pred))
###Output
[[1798 226]
[ 357 1632]]
###Markdown
Spark SQL and DataFrames 1. Basic manipulations First, let's create some users:
###Code
from datetime import date
# Let's first create some users:
col_names = ["first_name", "last_name", "birth_date", "gender", "country"]
users = [
("Alice", "Jones", date(1981, 4, 15), "female", "Canada"),
("John", "Doe", date(1951, 1, 21), "male", "USA"),
("Barbara", "May", date(1951, 9, 1), "female", "Australia"),
("James", "Smith", date(1975, 7, 12), "male", "United Kingdom"),
("Gerrard", "Dupont", date(1968, 5, 9), "male", "France"),
("Amanda", "B.", date(1988, 12, 16), "female", "New Zeland")
]
users_df = spark.createDataFrame(users, col_names)
display(users_df) # Only works in Databricks. Elsewhere, use "df.show()" or "df.toPandas()"
###Output
_____no_output_____
###Markdown
Now it's your turn: create more users (at least 3, with different names) and add them to the initial users, saving the result in a new variable.
###Code
new_users_df = spark.createDataFrame([
    ("Mbaimou", "Auxence", date(1994, 5, 5), "male", "Chad"),
    ("Max", "LeGrand", date(1993, 11, 5), "male", "France"),
    ("Mohamed", "Younes", date(1992, 3, 5), "male", "Maroc")
], col_names)
all_users_df = users_df.union(new_users_df)
display(all_users_df) # or all_users_df.show()
###Output
_____no_output_____
###Markdown
Now, select only two columns and show the resulting DataFrame, without saving it into a variable.
###Code
all_users_df.select("first_name", "last_name").show()
###Output
_____no_output_____
###Markdown
Now, register your DataFrame as a table and select the same two columns with a SQL query string
###Code
query_string = "SELECT first_name, last_name from table3"
all_users_df.registerTempTable("table3") # creates a local temporary table accessible by a SQL query (newer Spark versions use createOrReplaceTempView)
spark.sql(query_string).show()
###Output
_____no_output_____
###Markdown
Now we want to add a unique identifier for each user in the table. There are many strategies for that, and for our example we will use the string `{last_name}_{first_name}`. You can use `all_users_df` since your latest operation did not override its values. Add a new column called `user_id` to your DataFrame and save to a new variable.

**Hint:** The first place to look is in the [functions](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions) package
###Code
from pyspark.sql import functions as fn
users_with_id = all_users_df.withColumn("user_id", fn.concat_ws('_',all_users_df.last_name,all_users_df.first_name))
display(users_with_id)
###Output
_____no_output_____
###Markdown
You can also do the same thing with a User Defined Function by passing a lambda, although it is not recommended when there is already a function in the `functions` package. Add a new column called `user_id_udf` to the DataFrame, using a UDF that receives two parameters and concatenates them.
###Code
concat_udf = fn.udf(lambda x,y: x+y)
users_with_id_udf = users_with_id.withColumn("user_id_udf", concat_udf(all_users_df.last_name, all_users_df.first_name))
display(users_with_id_udf)
###Output
_____no_output_____
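###Markdown
As a side note, a UDF can declare its return type explicitly (the default is string); a minimal sketch, assuming we want the same `_`-separated format produced by `concat_ws` above:
###Code
from pyspark.sql.types import StringType
typed_concat_udf = fn.udf(lambda x, y: x + '_' + y, StringType())
users_with_id_udf.withColumn("user_id_udf_typed", typed_concat_udf("last_name", "first_name")).show()
###Output
_____no_output_____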
###Markdown
Now, let's add another column called `age` with the computed age (in years) of each user, based on a given reference date, and save the resulting DataFrame into a new variable.

**Hint:** You can first compute the age in months, and then divide by 12. A final operation will probably be needed to get an integer number.
###Code
reference_date = date(2017, 12, 31)
users_with_age = users_with_id_udf.withColumn( "age", fn.floor(fn.months_between(fn.lit(reference_date), users_with_id_udf.birth_date)/12))
display(users_with_age)
###Output
_____no_output_____
###Markdown
Now, an analytical question: how many users of each gender who are more than 40 years old exist in this data? The solution must be a DataFrame with two columns, `gender` and `count`, and two lines, one for each gender.

Bonus: Try to do your solution in a single chain (without intermediate variables).

**Hint:** You will need to filter and aggregate the data.
###Code
#result = "SELECT age from users_with_age.
#(result)
###Output
_____no_output_____
###Markdown
2. Reading files, performing joins, and aggregating

For this section you will use some fake data of two datasets: `Users` and `Donations`. The data is provided in two CSV files *with header* and using *comma* as separator. The `Users` dataset contains information about the users. The `Donations` dataset contains information about donations performed by those users. The first task is to read these files into the appropriate DataFrames.

**Note:** You need to set the option "inferSchema" to true in order to have the columns in the correct types.

_The data for this section has been created using [Mockaroo](https://www.mockaroo.com/)_.
###Code
users_from_file = spark.read.load("/FileStore/tables/files_users.csv",format="csv", inferSchema="true", header="true")
donations = spark.read.load("/FileStore/tables/files_donations.csv",format="csv", inferSchema="true", header="true")
display(users_from_file)
###Output
_____no_output_____
###Markdown
Now investigate the columns, contents and size of both datasets
###Code
# print the column names and types
users_from_file.printSchema()
donations.printSchema()
# print 5 elements of the datasets in a tabular format
users_from_file.show(n=5)
donations.show(n=5)
# print the number of lines of each dataset
print(users_from_file.count())
print(donations.count())
###Output
_____no_output_____
###Markdown
**Note:** If all the column types shown in the previous results are "string", you need to make sure you passed "inferSchema" as true when reading the CSV files before continuing.

Before using the data, we may want to add some information about the users. Add a column containing the age of each user.
###Code
users_from_file = users_from_file.withColumn("age", fn.floor(fn.months_between(fn.lit(date(2020,2,3)), users_from_file.birth_date)/12))
users_from_file.show()
###Output
_____no_output_____
###Markdown
Another useful piece of information to have is the age **range** of each user. Using the `when` function, create the following 5 age ranges:
- "(0, 25]"
- "(25, 35]"
- "(35, 45]"
- "(45, 55]"
- "(55, ~]"

And add a new column to the users DataFrame containing this information.

**Note:** When building logical operations with Spark DataFrames, it's better to add parentheses. Example:

```python
df.select("name", "age").where((df.age > 20) & (df.age <= 30))
```

**Note 2:** If you are having problems with the `when` function, you can make a User Defined Function and do your logic in standard python.
###Code
users_from_file = users_from_file.withColumn("range",fn.when( (users_from_file.age) <= 25, "(0,25]").when( (users_from_file.age <= 35) & (users_from_file.age > 25),"(25,35]" )
.when((users_from_file.age <= 45) & (users_from_file.age > 35),"(35,45]")
.when((users_from_file.age <= 55) & (users_from_file.age > 45),"(45,55]")
.otherwise("(55,~])"))
users_from_file.show()
###Output
_____no_output_____
###Markdown
Now that we have improved our users' DataFrame, the first analysis we want to make is the average donation performed by each gender. However, the gender information and the donation value are in different tables, so first we need to join them, using the `user_id` as joining key.

**Note:** Make sure you are not using the `users` DataFrame from the first part of the lab.

**Note 2:** For better performance, you can [broadcast](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.broadcast) the smaller dataset. But you should only do this when you are sure the DataFrame fits in the memory of the nodes.
###Code
joined_df = users_from_file.join(donations, ['user_id'])
display(joined_df)
###Output
_____no_output_____
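###Markdown
Since the users DataFrame is presumably much smaller than the donations one, a broadcast join is a reasonable optimisation here; a minimal sketch (same result as the plain join above):
###Code
# broadcast only the smaller DataFrame; it must fit in the memory of the worker nodes
joined_df_broadcast = donations.join(fn.broadcast(users_from_file), ['user_id'])
joined_df_broadcast.count()
###Output
_____no_output_____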
###Markdown
Now, use aggregation to find the min, max and avg donation by gender.
###Code
donations_by_gender = joined_df.groupby('gender').agg(fn.min('donation_value'), fn.max('donation_value'), fn.avg('donation_value'))
donations_by_gender.show()
###Output
_____no_output_____
###Markdown
Now, make the necessary transformations and aggregations to answer the following questions about the data. Note that some questions are only about the users, so make sure to use the smallest possible dataset when looking for your answers!

**Question 1:**
a) What's the average, min and max age of the users?
b) What's the average, min and max age of the users, by gender?
###Code
result_1a = users_from_file.agg(fn.min('age').alias('minimum'), fn.max('age').alias('maximum'), fn.floor(fn.avg('age')).alias('average'))
result_1a.show()
result_1b = users_from_file.groupby('gender').agg(fn.min('age').alias('minimum'), fn.max('age').alias('maximum'), fn.floor(fn.avg('age')).alias('average'))
result_1b.show()
###Output
_____no_output_____
###Markdown
**Question 2:**
a) How many distinct country origins exist in the data? Print a DataFrame listing them.
b) What are the top 5 countries with the most users? Print a DataFrame containing the name of the countries and the counts.
###Code
result_2a = users_from_file.agg(fn.countDistinct('country').alias('distinctcountry'))
result_2a.show()
result_2b = users_from_file.groupby('country').count().sort('count', ascending=False)
result_2b.show(5)
###Output
_____no_output_____
###Markdown
**Question 3:**
What's the number of donations and the average, min and max donation values by age range?
###Code
result_3 = joined_df.groupby('range').agg(fn.count('donation_value').alias('num_donations'), fn.avg('donation_value').alias('avg_donation'), fn.min('donation_value').alias('min_donation'), fn.max('donation_value').alias('max_donation'))
result_3.show()
###Output
_____no_output_____
###Markdown
**Question 4:**
a) What's the number of donations, average, min and max donation values by user location (country)?
b) What is the number of donations, average, min and max donation values by gender for each user location (country)? (the resulting DataFrame must contain 5 columns: the gender of the user, their country and the 3 metrics)
###Code
result_4a = joined_df.groupby('country').agg(fn.avg('donation_value').alias('avg_donation_loc'), fn.min('donation_value').alias('min_donation_loc'), fn.max('donation_value').alias('max_donation_loc'))
result_4a.show()
result_4b = joined_df.groupby('country','gender').agg(fn.avg('donation_value').alias('avg_donation_loc'), fn.min('donation_value').alias('min_donation_loc'), fn.max('donation_value').alias('max_donation_loc'))
result_4b.show()
###Output
_____no_output_____
###Markdown
**Question 5:**
Which month of the year has the largest aggregated donation value?

**Hint**: you can use a function to extract the month from a date, then you can aggregate to find the total donation value.
###Code
# month() extracts the month from the donation date (the donations data is assumed to carry
# a `date` column, as used in the window-function section below)
result_5 = (joined_df
            .withColumn('month', fn.month('date'))
            .groupby('month')
            .agg(fn.sum('donation_value').alias('total_donated'))
            .sort('total_donated', ascending=False))
result_5.show(1)
###Output
_____no_output_____
###Markdown
3. Window Functions

_This section uses the same data as the last one._

Window functions are very useful for gathering aggregated data without actually aggregating the DataFrame. They can also be used to find "previous" and "next" information for entities, such as a user. [This article](https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html) has a very nice explanation about the concept.

We want to find the users who donated less than a threshold and remove them from the donations dataset. Now, there are two ways of doing that:

1) Performing a traditional aggregation to find the users who donated less than the threshold, then filtering these users from the donations dataset, either with `where(~df.user_id.isin(a_local_list))` or with a join of type "anti-join".

2) Using window functions to add a new column with the aggregated donations per user, and then using a normal filter such as `where(aggregated_donation < threshold)`.

Let's implement both and compare the complexity. First, perform the traditional aggregation and find the users who donated less than 500 in total:
###Code
bad_users = joined_df.groupby('user_id').agg(fn.sum(fn.col("donation_value")).alias('sumof')).where('sumof<500')
bad_users.show()
###Output
_____no_output_____
###Markdown
You should have found around 10 users. Now, perform an "anti-join" to remove those users from `joined_df`.

**Hint:** The `join` operation accepts a third argument which is the join type. The accepted values are: 'inner', 'outer', 'full', 'fullouter', 'leftouter', 'left', 'rightouter', 'right', 'leftsemi', 'leftanti', 'cross'.
###Code
good_donations = joined_df.join(bad_users, joined_df.user_id == bad_users.user_id, "leftanti")
good_donations.count()
###Output
_____no_output_____
###Markdown
Verify if the count of `good_donations` makes sense by performing a normal join to find the `bad_donations`.
###Code
bad_donations = joined_df.join(bad_users, bad_users.user_id==joined_df.user_id)
bad_donations.count()
###Output
_____no_output_____
###Markdown
If you have done everything right, at this point `good_donations.count()` + `bad_donations.count()` = `joined_df.count()`. But using the join approach can be very heavy and it requires multiple operations. For this kind of problem, Window Functions are better.
###Code
good_donations.count() + bad_donations.count() == joined_df.count()
###Output
_____no_output_____
###Markdown
The first step is to create your window specification over `user_id` by using partitionBy
###Code
from pyspark.sql import Window
window_spec = Window.partitionBy('user_id')
###Output
_____no_output_____
###Markdown
Then, you can use one of the window functions of the `pyspark.sql.functions` package, applying it to the created window_spec by using the `over` method.

**Hint:** If you are blocked, try looking at the [documentation](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Column.over) for the `over` method, or searching on StackOverflow.
###Code
new_column = fn.sum('donation_value').over(window_spec)
donations_with_total = joined_df.withColumn("total_donated_by_user", new_column)
donations_with_total.show()
###Output
_____no_output_____
###Markdown
And now you can just filter on the `total_donated_by_user` column:
###Code
good_donations_wf = donations_with_total.where('total_donated_by_user >= 500')
good_donations_wf.count()
###Output
_____no_output_____
###Markdown
If you have done everything right, you should obtain `good_donations_wf.count()` = `good_donations.count()`
###Code
good_donations_wf.count() == good_donations.count()
###Output
_____no_output_____
###Markdown
Window functions can also be useful to find the "next" and "previous" operations by using `functions.lead` and `functions.lag`. In our example, we can use them to find the date interval between two donations by a specific user.

For this kind of Window function, the window specification must be ordered by the date. So, create a new window specification partitioned by the `user_id` and ordered by the donation timestamp.
###Code
ordered_window = Window.partitionBy('user_id').orderBy('timestamp')
###Output
_____no_output_____
###Markdown
Now use `functions.lag().over()` to add a column with the timestamp of the previous donation of the same user. Then, inspect the result to see what it looks like.
###Code
new_column = fn.lag('timestamp', 1).over(ordered_window)
donations_with_lag = good_donations.withColumn("previous_timestamp", new_column)
donations_with_lag.orderBy("user_id", "timestamp").show()
###Output
_____no_output_____
###Markdown
Finally, compute the average time it took for each user between two of their consecutive donations (in days), and print the 5 users with the smallest averages. The result must include at least the users' id, last name and birth date, as well as the computed average.
###Code
users_average_between_donations = (donations_with_lag
    .withColumn("days_between", fn.datediff(donations_with_lag.date,
                                            fn.lag(donations_with_lag.date, 1).over(ordered_window)))
    .groupby('user_id', 'last_name', 'birth_date')
    .agg(fn.floor(fn.mean('days_between')).alias('average_days'))
    .sort('average_days', ascending=True))
users_average_between_donations.show(5)
users_average_between_donations.show(5)
###Output
_____no_output_____
###Markdown
**Table of contents**
* [Random sentences](#snt)
* [Language models](#lm)
* [Implementation](#imp)
  * [Unigram LM](#imp-uni)
  * [A general n-gram language model](#imp-n)
* [Evaluation](#eval)
* [Smoothing](#smooth)
* [Interpolation](#inter)

**Table of Exercises**
* Theory (10 points)
  * [Exercise 3-1](#ex3-1)
  * [Exercise 3-2](#ex3-2)
  * [Exercise 3-3](#ex3-3)
  * [Exercise 3-4](#ex3-4)
* Practice (20 points)
  * [Exercise 3-5](#ex3-5)
  * [Exercise 3-6](#ex3-6)
  * [Exercise 3-7](#ex3-7)
  * [Exercise 3-8](#ex3-8)
  * [Exercise 3-9](#ex3-9)

**General notes**
* In this notebook you are expected to use $\LaTeX$.
* Use python3

Random sentences

Given a sentence $S$ (for example ***He went to the store*** in English) a language model (LM) can tell us if this resembles a natural sentence. Can we learn a model to assess the fluency of sentences generated by an automatic system? For example, such a model must prefer a sentence like ***He went to the store*** to a sentence like ***He store go***. A language model is an attempt at quantifying a notion of the degree of goodness (or badness) of any given sentence. The best way to represent this degree of goodness is as a probability value $P$: if the model assigns a high probability value to *He went to the store*, it can be concluded that this sentence is much more likely to be a fluent English sentence than *He store go*, which is assigned a low probability.

We model a sentence $S$ as a sequence of random words, so let's first define a random variable $X$ that represents a random word:

**Exercise 3-1** **[2 points]** Define a categorical random variable $X$ for words sampled from a closed vocabulary $\Sigma$ (assume the size of the vocabulary is denoted by $v$). In your answer make sure you indicate what is the sample space and the precise support $\mathcal X$ of the categorical random variable.

The sample space is the vocabulary $\Sigma$, and $X$ takes on values in the index set $\mathcal X = \{1, \ldots, v\}$ that enumerates $\Sigma$.

A **sentence** corresponds to any sequence of words in $\Sigma^*$. We denote a random sentence $S$ by the sequence $\langle X_1, \ldots, X_n \rangle$ or the shorthand $X_1^n$. The following definitions are also useful:
* for the $i$th random word $X_i$, the prefix $\langle x_1, \ldots, x_{i-1} \rangle$ (also denoted $x_{<i}$) is called a random history.
* we use $H$ to denote an arbitrary random history and $H_i$ to denote the $i$th random history.

A **generative story** is a stochastic procedure that we define as a means to explain the process by which we believe data are generated. For random sentences we define the following generative story:
1. Sample the sequence length from a distribution $P_N$
   * $N \sim P_N$
2. Then for each position $i=1, \ldots, n$ sample the $i$th word from the distribution $P_{X|H}$
   * $X_i|x_{<i} \sim P_{X|H=x_{<i}}$

Here is an example for a sentence with $3$ words:

$P_S(\langle x_1, x_2, x_3 \rangle) = P_N(3) P_{X|H}(x_1) P_{X|H}(x_2|\langle x_1 \rangle) P_{X|H}(x_3 | \langle x_1, x_2 \rangle)$

For our example sentence *He went to the store* this means:

$P(\text{"He went to the store"}) = P_N(5) \times P(\text{He}) \times P(\text{went}|\langle \text{He} \rangle) \times P(\text{to}|\langle \text{He}, \text{went} \rangle) \times P(\text{the}|\langle \text{He}, \text{went}, \text{to} \rangle) \times P(\text{store}|\langle \text{He}, \text{went}, \text{to}, \text{the} \rangle)$

* where with some abuse of notation we use the words themselves instead of their corresponding indices.

**Exercise 3-2** **[3 points]** Write down the general rule for the probability $P_S$ of a sentence $x_1^n$.
For this exercise please use subscripts to indicate the precise random variable associated with every distribution (that is, for example, $P_S$ is correct while $P$ is wrong).

$$P_S(x_1^n) = P_N(n)\prod_{i=1}^n P_{X|H}(x_i|x_{<i})$$

Language models

Here we quickly revisit the material discussed in class about n-gram LMs. We start with the simplest unigram language model. The idea is to forget the history, therefore making a strong independence assumption:

\begin{equation}
(1) \qquad P_S(x_1^n) \approx P_N(n) \prod_{i=1}^n P_X(x_i)
\end{equation}

* we assume $P_N(n)$ to be some constant $c$, which means that we have a uniform distribution over length
* and we assume $P_X$ to be a Categorical distribution

Thus, the final unigram LM definition is

\begin{equation}
(2) \qquad P_S(x_1^n; c, \theta_1^v) \triangleq c \prod_{i=1}^n \text{Cat}(X=x_i|\theta_1, \ldots, \theta_v)
\end{equation}

Note that we have introduced the Categorical pmf, which you have learnt about in Lab2.

**Exercise 3-3** **[3 points]** Complete the categorical pmf and the conditions below:

$\text{Cat}(X=a|\theta_1, \ldots, \theta_v) = \prod_{x=1}^v \theta_x^{\delta_{xa}}$

where $\delta_{xa}$ is the Kronecker delta ($1$ if $x = a$ and $0$ otherwise), and $\theta_1^v$ are the categorical parameters for which it must hold:
1. $0 \le \theta_x \le 1$ for every $x \in \{1, \ldots, v\}$
2. $\sum_{x=1}^v \theta_x = 1$

**Maximum likelihood estimation**

Suppose we are given a corpus containing $m$ sentences
* $\langle x_1^{(k)}, \ldots, x_{n_k}^{(k)} \rangle$ for $k=1, \ldots, m$
* where $n_k$ is the length of the $k$th sentence

The MLE solution for the unigram LM is based on gathering counts and computing the relative frequency of word types:

\begin{equation}
(3) \qquad \theta_x = \frac{\text{count}(x)}{\text{number of tokens}}
\end{equation}

Note that the *number of tokens* is simply the sum of the lengths of the sentences $\sum_{k=1}^m n_k$.

More generally, for a conditional probability distribution (cpd) we have that
* $P_{X|H}(x|h) = \text{Cat}(X=x|\theta_1^{(h)}, \ldots, \theta_v^{(h)})$

where $h$ uniquely indexes a history and $P_{X|H}(x|h) = \theta_x^{(h)}$ is the $x$th probability value in the $h$th cpd. Then the MLE solution is simply

\begin{equation}
(4) \qquad \theta_x^{(h)} = \frac{\text{count}(h \circ \langle x \rangle)}{\text{count}(h)}
\end{equation}

where $h \circ \langle x \rangle$ is the concatenation of history and word.

Now that we know how to estimate general cpds we can define the n-gram LM. An $n$-gram LM is a Markov model of order $o=n-1$ where we truncate the complete history $x_{<i}$ so that it contains only the $o$ most recent words $x_{i-o}^{i-1}$.

\begin{equation}
(5) \qquad P_S(x_1^n; c, \boldsymbol \theta) \triangleq c \prod_{i=1}^n P_{X|H}(x_i|x_{i-o}^{i-1}; \boldsymbol \theta)
\end{equation}

where $P_{X|H=h; \boldsymbol \theta}$ is $\text{Cat}(\theta_1^{(h)}, \ldots, \theta_v^{(h)})$.

***Example***

Consider the sentence *He went to the store*; its probability under the unigram LM is

$P_S(\langle \text{He, went, to, the, store} \rangle) \propto P_X(\text{He}) \times P_X(\text{went}) \times P_X(\text{to}) \times P_X(\text{the}) \times P_X(\text{store})$

which can also be seen as

$P_S(\langle \text{He, went, to, the, store} \rangle) \propto \theta_{\text{He}} \times \theta_{\text{went}} \times \theta_{\text{to}} \times \theta_{\text{the}} \times \theta_{\text{store}}$

where again we use the words instead of their indices and we use the proportionality symbol to ignore the probability of the length.

**Exercise 3-4** **[2 points]** Write down the probability of the sentence He went to the store under a bigram language model.
Tip: recall that *the* is a word while $\langle \text{the} \rangle$ is a sequence.

Under a bigram LM each word is conditioned only on the single preceding word (with a BoS padding token for the first position, and EoS as part of the sequence):

$P_S(\langle \text{He, went, to, the, store, EoS}\rangle) \propto P_{X|H}(\text{He}\,|\,\langle\text{BoS}\rangle) \times P_{X|H}(\text{went}\,|\,\langle\text{He}\rangle) \times P_{X|H}(\text{to}\,|\,\langle\text{went}\rangle) \times P_{X|H}(\text{the}\,|\,\langle\text{to}\rangle) \times P_{X|H}(\text{store}\,|\,\langle\text{the}\rangle) \times P_{X|H}(\text{EoS}\,|\,\langle\text{store}\rangle)$

Implementation

We will start by showing you how to implement the unigram LM. Consider the PTB dataset as our training data: *sec02-21.raw*. We will estimate the categorical parameters and query the LM with some sentences to find out their probability.

Notes:
1. For *memory efficiency*, rather than vectors we will use sparse data structures (such as a python dict); this is nice because we do not use memory to represent events that have never occurred.
2. We lowercase the data to collect better statistics (otherwise 'He' and 'he' would correspond to different words).
3. Recall from the lecture that we pad sentences with 1 EOS token (which becomes part of the sequence) and $n-1$ BOS tokens (which are there just to make the history size constant).

Unigram LM

We start with the unigram language model, whose factorisation is shown in [Equation (2)](#eq-unigram-lm).

First, we start by **loading and pre-processing data**. In the code below we use [python generators](https://wiki.python.org/moin/Generators), check the link if you are not familiar with them.
###Code
from collections import defaultdict
def preprocess(file_path, min_count=1, char_level=False):
"""
Returns a generator (a data stream) that yields one pre-processed sentence at a time.
A preprocessed sentence is:
- a list of tokens (each token a string)
- where tokens are lowercased
- and possibly replaced by '<unk>' if infrequent
:param file_path: path to a text corpus
:param min_count: minimum number of occurrences
if a token happens less times than this value we replace it by '<unk>'
:returns: a generator of sentences
A generator is an object that can be used in `for` loops
"""
count = defaultdict(int)
# First we count the number of occurrences of each token
with open(file_path, 'r') as f:
for line in f:
line = line.strip()
if not line:
continue # we skip empty lines
if char_level:
sentence = [ch for ch in line.lower()]
else:
sentence = line.lower().split()
for token in sentence:
count[token] += 1
# then we yield one preprocessed sentence at a time
# making sure we map infrequent tokens to <unk>
with open(file_path, 'r') as f:
for line in f:
line = line.strip()
if not line:
continue # we skip empty lines
if char_level:
sentence = [ch for ch in line.lower()]
else:
sentence = line.lower().split()
preprocessed_sentence = [token if count[token] >= min_count else '<unk>' for token in sentence]
yield preprocessed_sentence
# Let's test our preprocessed sentence generator
for k, sentence in enumerate(preprocess('eleanor-rigby.txt'), 1):
    print(k, sentence)
# Let's see what happens if we prune words that happen only once
for k, sentence in enumerate(preprocess('eleanor-rigby.txt', min_count=2), 1):
    print(k, sentence)
###Output
1 ['ah', 'look', 'at', 'all', 'the', 'lonely', 'people']
2 ['ah', 'look', 'at', 'all', 'the', 'lonely', 'people']
3 ['eleanor', 'rigby', ',', '<unk>', '<unk>', 'the', '<unk>']
4 ['in', 'the', 'church', 'where', 'a', '<unk>', '<unk>', '<unk>']
5 ['<unk>', 'in', 'a', '<unk>']
6 ['<unk>', 'at', 'the', '<unk>', ',', '<unk>', 'the', '<unk>']
7 ['that', '<unk>', '<unk>', 'in', 'a', '<unk>', '<unk>', 'the', '<unk>']
8 ['<unk>', '<unk>', '<unk>', '<unk>']
9 ['all', 'the', 'lonely', 'people']
10 ['where', 'do', 'they', 'all', 'come', 'from', '?']
11 ['all', 'the', 'lonely', 'people']
12 ['where', 'do', 'they', 'all', 'belong', '?']
13 ['father', 'mckenzie', ',', '<unk>', 'the', '<unk>']
14 ['<unk>', 'a', '<unk>', 'that', 'no', 'one', '<unk>', '<unk>']
15 ['no', 'one', '<unk>', '<unk>']
16 ['look', 'at', '<unk>', '<unk>', ',', '<unk>', 'his', '<unk>']
17 ['in', 'the', '<unk>', '<unk>', 'there', '<unk>', 'nobody', 'there']
18 ['<unk>', '<unk>', 'he', '<unk>']
19 ['all', 'the', 'lonely', 'people']
20 ['where', 'do', 'they', 'all', 'come', 'from', '?']
21 ['all', 'the', 'lonely', 'people']
22 ['where', 'do', 'they', 'all', 'belong', '?']
23 ['ah', 'look', 'at', 'all', 'the', 'lonely', 'people']
24 ['ah', 'look', 'at', 'all', 'the', 'lonely', 'people']
25 ['eleanor', 'rigby', ',', '<unk>', 'in', 'the', 'church']
26 ['<unk>', 'was', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>']
27 ['nobody', '<unk>']
28 ['father', 'mckenzie', ',', '<unk>', 'the', '<unk>']
29 ['from', 'his', '<unk>', '<unk>', 'he', '<unk>', 'from', 'the', '<unk>']
30 ['no', 'one', 'was', '<unk>']
31 ['all', 'the', 'lonely', 'people']
32 ['where', 'do', 'they', 'all', 'come', 'from', '?']
33 ['all', 'the', 'lonely', 'people']
34 ['where', 'do', 'they', 'all', 'belong', '?']
###Markdown
Now we show you **how to count unigrams**.
###Code
from collections import defaultdict
def count_unigrams(sentence_stream):
"""
input: a generator of preprocessed sentences
- a preprocessed sentence is a list of lowercased tokens
where rare tokens were possibly replaced by <unk>
output:
unigram_count: dictionary of frequency of each word
"""
unigram_counts = defaultdict(int)
for sentence in sentence_stream:
sentence = sentence + ["</s>"] # add end of sentence
for token in sentence:
unigram_counts[token.lower()] += 1 # frequency of each word
return unigram_counts
# Let's test our procedure and check how many times 'cat' and 'mat' happen in the PTB training corpus
unigram_count_table = count_unigrams(preprocess('sec02-21.raw'))
print('unigram=cat count=%d' % unigram_count_table['cat'])
print('unigram=mat count=%d' % unigram_count_table['mat'])
###Output
unigram=cat count=1
unigram=mat count=0
###Markdown
Now we show you **how to get the MLE solution for the unigram distribution**
###Code
def unigram_mle(unigram_counts):
    """
    input: unigram_counts: dictionary of frequency of each word
    output: unigram_probs: dictionary with the probability of each word
            (parameters of the model)
    """
    # use the argument, not the global count table
    total_count = sum(unigram_counts.values())
    unigram_probs = defaultdict(float)
    for word, count in unigram_counts.items():
        unigram_probs[word] = float(count) / total_count
    return unigram_probs
# Let's check the MLE parameters associated with 'cat' and 'mat' by querying their unigram probabilities
unigram_prob_table = unigram_mle(unigram_count_table)
print('unigram=cat prob=%f' % unigram_prob_table['cat'])
print('unigram=mat prob=%f' % unigram_prob_table['mat'])
###Output
unigram=cat prob=0.000001
unigram=mat prob=0.000000
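###Markdown
As a quick sanity check, the MLE parameters form a categorical distribution, so they should sum to (approximately) 1:
###Code
# up to floating point error, the categorical parameters sum to 1
print(sum(unigram_prob_table.values()))
###Output
_____no_output_____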
###Markdown
And finally we show you **how to compute the log-probability** of a sentence under the unigram LM
###Code
import numpy as np
def calculate_sentence_unigram_log_probability(sentence, word_probs):
"""
input: list of words in a sentence
    word_probs: MLE parameters
output:
sentence_probability_sum: log probability of the sentence
"""
sentence_probability_sum = 0.
# we first get the probability of unknown words
# which by default is 0. in case '<unk>' is not in the support
unk_probability = word_probs.get('<unk>', 0.)
for word in sentence:
# this will return `unk_probability` if the word is not in the support
word_probability = word_probs.get(word, unk_probability)
        # accumulate the sum of log probabilities
# we use np.log because it knows that log(0) is float('-inf')
sentence_probability_sum += np.log(word_probability)
return sentence_probability_sum
sent_prob = calculate_sentence_unigram_log_probability(['the', 'cat', 'sat', 'on', 'the', 'cat'], unigram_prob_table)
print(sent_prob)
###Output
-49.5345713414
###Markdown
**Unseen words**

However, note that if we want the probability of sentences containing words that are not present in the training corpus, we will have an unpleasant surprise. For example: *the cat sat on the mat*
###Code
calculate_sentence_unigram_log_probability(['the', 'cat', 'sat', 'on', 'the', 'mat'], unigram_prob_table)
###Output
/home/harm/.local/lib/python2.7/site-packages/ipykernel_launcher.py:20: RuntimeWarning: divide by zero encountered in log
###Markdown
Of course that would not happen if we pre-process the data to map infrequent words to `<unk>`, as we illustrate below by setting `min_count=2`.
###Code
unigram_prob_table2 = unigram_mle(count_unigrams(preprocess('sec02-21.raw', min_count=2)) )
calculate_sentence_unigram_log_probability(['the', 'cat', 'sat', 'on', 'the', 'mat'], unigram_prob_table2)
###Output
_____no_output_____
###Markdown
This is a very rudimentary *smoothing technique* and you will see other techniques later in this notebook.

A general n-gram language model

We now turn to a general $n$-gram LM, whose factorisation is shown in [Equation (5)](#eq-ngram-lm).

**Exercise 3-5** **[10 points]** In this exercise you will build a general $n$-gram language model where $n \ge 1$. We provide you with a skeleton class on which to build.

a) Implementation **[3 points]**
* Start by implementing the method `count_ngrams`, see the documentation of the method for specification. Tip: expand upon the procedure implemented in the function `count_unigrams` above; remember to handle BOS tokens and EOS tokens correctly. Use `<s>` for the BOS token and `</s>` for the EOS token.
* Now implement the method `solve_mle`, see the documentation of the method for specification.
* Finally, implement the `log_prob` method, see the documentation of the method for specification.

b) Toy data **[1 point]**: Train 3 models (unigram, bigram, trigram) using Eleanor Rigby's lyrics (`eleanor-rigby.txt`) and show that you can reproduce the output of the code below.

```python
unigram_lm = LM(order=0)
bigram_lm = LM(order=1)
trigram_lm = LM(order=2)
unigram_lm.count_ngrams(preprocess('eleanor-rigby.txt'))
unigram_lm.solve_mle()
bigram_lm.count_ngrams(preprocess('eleanor-rigby.txt'))
bigram_lm.solve_mle()
trigram_lm.count_ngrams(preprocess('eleanor-rigby.txt'))
trigram_lm.solve_mle()
print(unigram_lm.log_prob("where do they all belong ?".split()))
print(bigram_lm.log_prob("where do they all belong ?".split()))
print(trigram_lm.log_prob("where do they all belong ?".split()))
```

which should produce

```python
-23.5871446234
-3.56272816879
-2.42774823595
```

c) PTB data (`sec02-21.raw`): train 3 models (unigram, bigram, and trigram) and report the probability of: the new rate will be payable feb. 15 .
* unigram model **[2 points]**
* bigram model **[2 points]**
* trigram model **[2 points]**

The excerpt below

```python
unigram_lm = LM(order=0)
bigram_lm = LM(order=1)
trigram_lm = LM(order=2)
unigram_lm.count_ngrams(preprocess('sec02-21.raw'))
unigram_lm.solve_mle()
bigram_lm.count_ngrams(preprocess('sec02-21.raw'))
bigram_lm.solve_mle()
trigram_lm.count_ngrams(preprocess('sec02-21.raw'))
trigram_lm.solve_mle()
print(unigram_lm.log_prob("the new rate will be payable feb. 15 .".split()))
print(bigram_lm.log_prob("the new rate will be payable feb. 15 .".split()))
print(trigram_lm.log_prob("the new rate will be payable feb. 15 .".split()))
```

should produce

```python
-63.0944350135
-35.0096672791
-20.6963911844
```

*Help with debugging?*

We have provided a toy corpus called `eleanor-rigby.txt`; for that corpus we provide the output of `print_count_table` and `print_prob_table` for a correct implementation of the LM class. We have varied order from 0 to 2:
* `eleanor-rigby-unigram-counts.txt`
* `eleanor-rigby-unigram-cpd.txt`
* `eleanor-rigby-bigram-cpds.txt`
* `eleanor-rigby-bigram-counts.txt`
* `eleanor-rigby-trigram-cpds.txt`
* `eleanor-rigby-trigram-counts.txt`
###Code
from collections import defaultdict
from __future__ import division
import sys
class LM:
def __init__(self, order):
self._order = order
self._count_table = dict()
self._prob_table = dict()
self._vocab = set()
def order(self):
return self._order
def print_count_table(self, output_stream=sys.stdout):
"""Prints the count table for visualisation"""
for history, ngrams in sorted(self._count_table.items(), key=lambda pair: pair[0]):
for word, count in sorted(ngrams.items(), key=lambda pair: pair[0]):
print >> output_stream, 'history="%s" word=%s count=%d' % (' '.join(history), word, count)
# print('history="%s" word=%s count=%d' % (' '.join(history), word, count), file=output_stream)
def print_prob_table(self, output_stream=sys.stdout):
"""Prints the tabular cpd for visualisation"""
for history, ngrams in sorted(self._prob_table.items(), key=lambda pair: pair[0]):
for word, prob in sorted(ngrams.items(), key=lambda pair: pair[0]):
print >> output_stream, 'history="%s" word=%s prob=%f' % (' '.join(history), word, prob)
# print('history="%s" word=%s prob=%f' % (' '.join(history), word, prob), file=output_stream)
def preprocess_history(self, history):
"""
This function pre-process an arbitrary history to match the order of this language model.
:param history: a sequence of words
:return: a tuple containing exactly as many elements as the order of the model
- if the input history is too short we pad it with <s>
"""
if len(history) == self._order:
return tuple(history)
elif len(history) > self._order:
length = len(history)
return tuple(history[length - self._order:length])
else: # here the history is too short
missing = self._order - len(history)
return tuple(['<s>'] * missing) + tuple(history)
def get_parameter(self, history, word):
"""
This function returns the categorical parameter associated with a certain word given a certain history.
:param history: a sequence of words (a tuple)
:param word: a word (a str)
:return: a float representing P(word|history)
"""
history = self.preprocess_history(history)
cpd = self._prob_table.get(history, None)
if cpd is None:
return 0.
else:
# we either return P(x|h)
# or P(unk|h) in case x is not in the support of this cpd
# or 0. in case neither x nor unk are in the support of this cpd
unk_probability = cpd.get('<unk>', 0.)
return cpd.get(word, unk_probability)
def cpd_items(self, history):
history = self.preprocess_history(history)
# if the history is unseen we return an empty cpd
return self._prob_table.get(history, dict()).items()
def preprocess_sentence(self, sentence):
"""
Preprocesses sentences by adding <s> tags as BoS tokens and a </s> tag at the end of the sentence
:param sentence: a sequence of tokens
:return: sentence with <s> tokens at start and </s> at end
"""
return self._order * ['<s>'] + sentence + ['</s>']
def count_ngrams(self, data_stream):
"""
This function should populate the attribute _count_table which should be understood as
- a python dict
- whose key is a history (a tuple of words)
- and whose value is itself a python dict (or defaultdict)
- which maps a word (a string) to a count (an integer)
This function will add counts to whatever counts are already stored in _count_table.
This function also maintains a unique set of words in the vocabulary using the attribute _vocab
:param data_stream: a generator as produced by `preprocess`
"""
for sentence in data_stream:
sentence = self.preprocess_sentence(sentence)
for i, word in enumerate(sentence):
if i >= self._order:
history = tuple(sentence[i - self._order:i])
# If key doesn't exist create defaultdict for later easy additions.
if history not in self._count_table:
self._count_table[history] = defaultdict(int)
self._count_table[history][word] += 1
# Add words to vocab
self._vocab.add(word)
def solve_mle(self):
"""
This function should compute the attribute _prob_table which has the exact same structure as _count_table
but stores probability values instead of counts.
It can be seen as the collection of cpds of our model, that is, _prob_table
- maps a history (a tuple of words) to a dict where
- a key is a word (that extends the history forming an ngram)
- and the value is the probability P(word|history)
This function will replace whatever value _prob_table currently stores by the newly computed MLE solution.
"""
        for history, word_counts in self._count_table.items():
            self._prob_table[history] = defaultdict(float)
            total_count = sum(word_counts.values())
            for word, count in word_counts.items():
                self._prob_table[history][word] = count / total_count
def log_prob(self, sentence):
"""
Compute the log probability of a sentence under this model.
input:
sentence: a sequence of tokens
output:
log probability
"""
prob_sum = 0
sentence = self.preprocess_sentence(sentence)
for i, word in enumerate(sentence):
if i >= self._order:
history = tuple(sentence[i - self._order:i])
p = self.get_parameter(history, word)
prob_sum += np.log(p)
return prob_sum
unigram_lm = LM(order=0)
bigram_lm = LM(order=1)
trigram_lm = LM(order=2)
unigram_lm.count_ngrams(preprocess('eleanor-rigby.txt'))
unigram_lm.solve_mle()
bigram_lm.count_ngrams(preprocess('eleanor-rigby.txt'))
bigram_lm.solve_mle()
trigram_lm.count_ngrams(preprocess('eleanor-rigby.txt'))
trigram_lm.solve_mle()
print(unigram_lm.log_prob("where do they all belong ?".split()))
print(bigram_lm.log_prob("where do they all belong ?".split()))
print(trigram_lm.log_prob("where do they all belong ?".split()))
unigram_lm = LM(order=0)
bigram_lm = LM(order=1)
trigram_lm = LM(order=2)
unigram_lm.count_ngrams(preprocess('sec02-21.raw'))
unigram_lm.solve_mle()
bigram_lm.count_ngrams(preprocess('sec02-21.raw'))
bigram_lm.solve_mle()
trigram_lm.count_ngrams(preprocess('sec02-21.raw'))
trigram_lm.solve_mle()
print(unigram_lm.log_prob("the new rate will be payable feb. 15 .".split()))
print(bigram_lm.log_prob("the new rate will be payable feb. 15 .".split()))
print(trigram_lm.log_prob("the new rate will be payable feb. 15 .".split()))
###Output
-63.0944350135
-35.0096672791
-20.6963911844
###Markdown
Evaluation

The way to evaluate the performance of a LM is to test it in a final application; in other words, to measure how much the final score of the application improves. This is called *extrinsic* evaluation. We can also test our LM independently of an application; this is called *intrinsic* evaluation. In this course, we are going to study the intrinsic evaluation of the LM.

To test a LM we prepare 3 datasets: Training is used for estimating $\boldsymbol \theta$ (we use boldface to indicate a collection of parameters). Development is used to make choices across models. Test is used for measuring the accuracy of the model. In n-gram LMs the evaluation is defined by the **likelihood** of the model with respect to the test dataset. The likelihood of the parameters $\boldsymbol \theta$ over the test dataset is the probability that the model assigns to the dataset.

We assume the test data $\mathcal T$ consists of $m$ independent sentences, each denoted $\langle x_1^{(k)}, \ldots, x_{n_k}^{(k)} \rangle$:

$P(\mathcal T; \boldsymbol \theta) = \prod_{k=1}^m P_S(\langle x_1^{(k)}, \ldots, x_{n_k}^{(k)} \rangle; \boldsymbol \theta)$

Or in the form of the log-likelihood:

$\log P(\mathcal T; \boldsymbol \theta) = \sum_{k=1}^m \log P_N(n_k) + \log P_{S|N}(\langle x_1^{(k)}, \ldots, x_{n_k}^{(k)} \rangle|n_k; \boldsymbol \theta)$

We assume the length probability to be constant, so when comparing different models that probability does not make a difference. Thus we drop it and define the log-likelihood as follows:

$\mathcal L(\boldsymbol \theta) = \sum_{k=1}^m \log P_{S|N}(\langle x_1^{(k)}, \ldots, x_{n_k}^{(k)} \rangle|n_k; \boldsymbol \theta)$

Then the model that assigns the higher $\mathcal L$ to the test set is the one that best fits the data. In other words, given two probabilistic models, the better model is the one that assigns a higher probability to the test data. One detail we need to abstract away from is differences in factorisation of the models, which may cause their likelihoods not to be comparable; for that we will define *perplexity* below. The log-likelihood is used because the probability of a particular sentence according to the LM can be a very small number, the product of these small numbers becomes even smaller, and this causes numerical precision problems.

**Perplexity** of a language model on a test set is the inverse probability of the test set, normalized by the number of tokens. Perplexity is a notion of average branching factor; thus a LM with low perplexity can be thought of as a *less confused* LM. That is, each time it introduces a word given some history it picks from a reduced subset of the entire vocabulary (in other words, it is more certain of how to continue). If a dataset contains $t$ tokens, where $t = \sum_{k=1}^m n_k$, then the perplexity of the dataset is

\begin{equation}(6) \qquad \text{PP}(\mathcal T) = \left( \prod_{k=1}^m P_{S|N}(\langle x_1^{(k)}, \ldots, x_{n_k}^{(k)} \rangle|n_k; \boldsymbol \theta) \right)^{-1/t}\end{equation}

where we have already discarded the length distribution (since it is held constant across models). It is again convenient to use logs and define the log-perplexity

\begin{equation}(7) \qquad \log \text{PP}(\mathcal T) = - \frac{1}{t} \sum_{k=1}^m \log P_{S|N}(\langle x_1^{(k)}, \ldots, x_{n_k}^{(k)} \rangle|n_k; \boldsymbol \theta) \end{equation}

You can compare models in terms of the log-perplexity they assign to the same test data. The lower the perplexity, the better the model is.
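As a small aside (not part of the exercises), the relationship between Equations (6) and (7), perplexity as the exponential of log-perplexity, can be checked numerically; the per-token probabilities below are made-up values:

```python
import numpy as np

# made-up per-token probabilities of a tiny 4-token test set
token_probs = [0.2, 0.1, 0.05, 0.25]
t = len(token_probs)

log_ppl = -np.sum(np.log(token_probs)) / t   # Equation (7), natural log
ppl = np.exp(log_ppl)                        # Equation (6): PP = exp(log PP)
assert np.isclose(ppl, np.prod(token_probs) ** (-1.0 / t))
print(log_ppl, ppl)
```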
###Code
# Let's quickly make a helper function to load test data
# and segment lines into sequences of lowercased tokens
def make_test_generator(path, char_level=False):
"""Return a generator for test sentences"""
with open(path, 'r') as fi:
for line in fi:
if char_level:
yield [ch for ch in line.lower()]
else:
yield line.lower().split()
###Output
_____no_output_____
###Markdown
**Exercise 3-6** Implement the log-perplexity function below. See the function documentation for specifications.

* Two sentences test **[2 points]**: If you run the excerpt below for models trained on PTB

```python
two_sentences_data = [
    "Ms. Haag plays Elianti .".lower().split(),
    "Apparently the commission did not really believe in this ideal .".lower().split()
]
log_ppl = log_perplexity(two_sentences_data, unigram_lm)
print(log_ppl)
log_ppl = log_perplexity(two_sentences_data, bigram_lm)
print(log_ppl)
log_ppl = log_perplexity(two_sentences_data, trigram_lm)
print(log_ppl)
```

and your implementation is correct, you will get

```
7.32267906044
3.87958613355
2.15917055083
```

At this point if you try to evaluate the perplexity of the PTB test set `sec00.raw`

```python
print(log_perplexity(make_test_generator('sec00.raw'), unigram_lm))
print(log_perplexity(make_test_generator('sec00.raw'), bigram_lm))
print(log_perplexity(make_test_generator('sec00.raw'), trigram_lm))
```

you will get `inf` for all models. That's because you need to implement smoothing.
###Code
def log_perplexity(data_stream, lm):
    """
    Calculates the log-perplexity of the given text, i.e. Equation (7):
    the negative average log-probability per token (natural log).
    This function can make use of `lm.order()`, `lm.get_parameter()`, and `lm.log_prob()`
    :param data_stream: generator of sentences (each sentence is a list of words)
    :param lm: an instance of the class LM
    """
    token_count = 0
    prob_sum = 0
    for line in data_stream:
        token_count += len(line) + 1  # +1 accounts for the </s> token
        prob_sum += lm.log_prob(line)
    return - prob_sum / token_count
two_sentences_data = [
"Ms. Haag plays Elianti .".lower().split(),
"Apparently the commission did not really believe in this ideal .".lower().split()
]
print(log_perplexity(two_sentences_data, unigram_lm))
print(log_perplexity(two_sentences_data, bigram_lm))
print(log_perplexity(two_sentences_data, trigram_lm))
# The calls below return inf until smoothing is implemented:
# print(log_perplexity(make_test_generator('sec00.raw'), unigram_lm))
# print(log_perplexity(make_test_generator('sec00.raw'), bigram_lm))
# print(log_perplexity(make_test_generator('sec00.raw'), trigram_lm))
###Output
7.32267906044
3.87958613355
2.15917055083
###Markdown
Smoothing

Note that MLE will fail if we evaluate on sentences containing n-grams that the model has never seen at training time. For example, in *He went to the store* some bigrams may not be present in the corpus, giving the sentence a probability of *zero*. The words we haven't seen before are called unknown words, or out-of-vocabulary (OOV) words. We will now map them to a special symbol such as `<unk>`. To keep the LM from assigning zero probability to these unseen events (n-grams), we'll have to steal some of the probability mass from more frequent events and give it to the events we've never seen. This is called **smoothing** or **discounting**.

The simplest form of smoothing is called **Laplace smoothing**, whereby we add `<unk>` to the support of the distribution and then add one to all counts before we normalize them into probabilities. All the counts that used to be zero will now have a count of 1, the counts of 1 will be 2, and so on. We can also generalise it and add $\alpha$ instead of $1$. Then for $P_{X|H=h} = \text{Cat}(\theta_1^{(h)}, \ldots, \theta_v^{(h)})$ we get the smoothed solution:

\begin{equation}(7) \qquad \theta_x^{(h)} = \frac{ \text{count}(h \circ \langle x \rangle) + \alpha}{\text{count}(h) + v \alpha}\end{equation}

There are $v$ words in the vocabulary and each count was incremented by $\alpha$, so we also need to adjust the denominator to take into account the extra $v\alpha$ observations.

**Exercise 3-7** **[3 points]**

Complete the `LaplaceLM` class below. Note that it must extend from `LM`. Implement the 3 modifications below in order to obtain add-$\alpha$ smoothing.

1. **[1 point]** Modify `count_ngrams` to add `<unk>` to the support of every cpd (that is, for every possible history, including the empty history, an `<unk>` outcome with count 0 should exist).
2. **[1 point]** Modify `solve_mle` so that it adds $\alpha$ to every count before normalisation.
3. **[1 point]** Modify `get_parameter` so that it returns $1/v$ when the history is unknown, that is, when $\text{count}(h)$ is $0$.

To get all points you need to show that your code can reproduce the following result. If your implementation is correct, for add-$1$ smoothing, the following excerpt of code

```python
unigram_lm_laplace = LaplaceLM(order=0, alpha=1.)
bigram_lm_laplace = LaplaceLM(order=1, alpha=1.)
unigram_lm_laplace.count_ngrams(preprocess('sec02-21.raw'))
unigram_lm_laplace.solve_mle()
bigram_lm_laplace.count_ngrams(preprocess('sec02-21.raw'))
bigram_lm_laplace.solve_mle()
print(log_perplexity(make_test_generator('sec00.raw'), unigram_lm_laplace))
print(log_perplexity(make_test_generator('sec00.raw'), bigram_lm_laplace))
```

should produce

```
7.06497838227
4.70167458342
```

As you can see, Laplace smoothing improved the language models by assigning a non-zero probability to sentences with unseen words and/or bigrams.
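Before touching the class, the smoothed estimator in Equation (7) can be sanity-checked on a toy count table; the counts and vocabulary below are made up purely for illustration:

```python
# toy counts for one history h; v counts the word types incl. <unk>
counts = {'sun': 3, 'moon': 1, '<unk>': 0}
v = len(counts)
alpha = 1.0
total = sum(counts.values())

smoothed = {w: (c + alpha) / (total + v * alpha) for w, c in counts.items()}
print(smoothed)   # P(sun|h) = (3+1)/(4+3) = 4/7, etc.
assert abs(sum(smoothed.values()) - 1.0) < 1e-12
```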
###Code
class LaplaceLM(LM):
def __init__(self, order, alpha=1.):
LM.__init__(self, order)
self._alpha = alpha
# in Laplace smoothing we always add '<unk>' to the vocabulary
self._vocab.add('<unk>')
def get_parameter(self, history, word):
"""
This function returns the categorical parameter associated with a certain word given a certain history.
:param history: a sequence of words (a tuple)
:param word: a word (a str)
:return: a float representing P(word|history)
"""
history = self.preprocess_history(history)
cpd = self._prob_table.get(history, None)
if cpd is None:
return float(1) / len(self._vocab)
else:
unk_prob = cpd.get('<unk>', float(1) / len(self._vocab))
return cpd.get(word, unk_prob)
def count_ngrams(self, data_stream):
"""
This function should populate the attribute _count_table which should be understood as
- a python dict
- whose key is a history (a tuple of words)
- and whose value is itself a python dict (or defaultdict)
- which maps a word (a string) to a count (an integer)
This function will add counts to whatever counts are already stored in _count_table.
:param data_stream: a generator as produced by `preprocess`
"""
self._count_table[('',)] = {'<unk>': self._alpha}
for sentence in data_stream:
sentence = self.preprocess_sentence(sentence)
for i, word in enumerate(sentence):
if i >= self._order:
history = tuple(sentence[i - self._order:i])
# If key doesn't exist create defaultdict for later easy additions.
if history not in self._count_table:
# Lets defaultdict start at alpha value
# self._count_table[history] = defaultdict(int)
self._count_table[history] = defaultdict(lambda:self._alpha)
self._count_table[history][word] += 1
# Check if <unk> not in history then add to cpd with value of alpha
if '<unk>' not in history:
# self._count_table[history]['<unk>'] = 0
self._count_table[history]['<unk>'] = self._alpha
# Add words to vocab
self._vocab.add(word)
def solve_mle(self):
"""
This function should compute the attribute _prob_table which has the exact same structure as _count_table
but stores probability values instead of counts.
It can be seen as the collection of cpds of our model, that is, _prob_table
- maps a history (a tuple of words) to a dict where
- a key is a word (that extends the history forming an ngram)
- and the value is the probability P(word|history)
This function will replace whatever value _prob_table currently stores by the newly computed MLE solution.
"""
        for history, word_counts in self._count_table.items():  # Python 3: items()
            self._prob_table[history] = defaultdict(float)
            total_count = sum(word_counts.values())
            for word, count in word_counts.items():
                self._prob_table[history][word] = float(count) / total_count
unigram_lm_laplace = LaplaceLM(order=0, alpha=1.)
bigram_lm_laplace = LaplaceLM(order=1, alpha=1.)
unigram_lm_laplace.count_ngrams(preprocess('sec02-21.raw'))
unigram_lm_laplace.solve_mle()
bigram_lm_laplace.count_ngrams(preprocess('sec02-21.raw'))
bigram_lm_laplace.solve_mle()
print(log_perplexity(make_test_generator('sec00.raw'), unigram_lm_laplace))
print(log_perplexity(make_test_generator('sec00.raw'), bigram_lm_laplace))
###Output
7.06497838227
4.70167458342
###Markdown
Interpolation

Laplace smoothing deals with unseen words for a seen history, but it cannot deal with unseen histories. This means that Laplace smoothing is not sufficient to avoid 0 probabilities. A simple idea is to use language model interpolation. We interpolate language models $\mathcal M_0, \ldots, \mathcal M_o$, where $\mathcal M_j$ is a Markov model of order $j$, to obtain an interpolated $(o+1)$-gram language model. For the interpolation we use coefficients $\lambda_0, \ldots, \lambda_o$ where

* $0 < \lambda_j < 1$
* $\sum_{j=0}^{o} \lambda_j = 1$

The probability of a sentence $x_1^n$ under the interpolated model is

\begin{equation}(8) \qquad P_S(x_1^n|n; \mathcal M_0, \ldots, \mathcal M_o) = P_N(n) \prod_{i=1}^n P_{X|H}(x_i|x_{<i}; \mathcal M_0, \ldots, \mathcal M_o)\end{equation}

where the interpolated factor is

\begin{equation}(9) \qquad P_{X|H}(x_i|x_{<i}; \mathcal M_0, \ldots, \mathcal M_{o}) = \sum_{j=0}^{o} \lambda_j \times P_{X|H}(x_i|x_{i-j}^{i-1}; \mathcal M_j)\end{equation}

and $P_{X|H}(x|h; \mathcal M_j)$ is the probability of the $(j+1)$-gram suffix of $h \circ \langle x \rangle$ under a model of order $j$.

For example, consider the sentence `here comes the sun`; for a $3$-gram LM (order $2$) we pad it `BOS BOS here comes the sun EOS` and compute interpolated factors:

\begin{align}
P(\text{here} \mid \langle \text{BOS, BOS} \rangle) &= \lambda_0 \times P(\text{here} \mid \langle \rangle; \mathcal M_0) \\
&+ \lambda_1 P(\text{here}\mid \langle \text{BOS} \rangle; \mathcal M_1) \\
&+ \lambda_2 P(\text{here} \mid \langle \text{BOS, BOS} \rangle; \mathcal M_2) \\
P(\text{comes}\mid \langle \text{BOS, here} \rangle) &= \lambda_0 \times P(\text{comes}\mid \langle \rangle; \mathcal M_0) \\
&+ \lambda_1 P(\text{comes}\mid\langle \text{here} \rangle; \mathcal M_1) \\
&+ \lambda_2 P(\text{comes}\mid \langle \text{BOS, here} \rangle; \mathcal M_2) \\
P(\text{the}\mid \langle \text{here, comes} \rangle) &= \lambda_0 \times P(\text{the}\mid\langle \rangle; \mathcal M_0) \\
&+ \lambda_1 P(\text{the}\mid \langle \text{comes} \rangle; \mathcal M_1) \\
&+ \lambda_2 P(\text{the}\mid\langle \text{here, comes} \rangle; \mathcal M_2) \\
P(\text{sun}\mid \langle \text{comes, the} \rangle) &= \lambda_0 \times P(\text{sun}\mid \langle \rangle; \mathcal M_0) \\
&+ \lambda_1 P(\text{sun}\mid \langle \text{the} \rangle; \mathcal M_1) \\
&+ \lambda_2 P(\text{sun}\mid \langle \text{comes, the} \rangle; \mathcal M_2) \\
P(\text{EOS}\mid \langle \text{the, sun} \rangle) &= \lambda_0 \times P(\text{EOS}\mid \langle \rangle; \mathcal M_0) \\
&+ \lambda_1 P(\text{EOS}\mid \langle \text{sun} \rangle; \mathcal M_1) \\
&+ \lambda_2 P(\text{EOS}\mid \langle \text{the, sun} \rangle; \mathcal M_2)
\end{align}

Then the probability of the sentence under the interpolation is proportional to

\begin{align}
P_{S|N}(\langle \text{here, comes, the, sun, EOS}\rangle|n) &= P(\text{here} \mid \langle \text{BOS, BOS} \rangle) \\
&\times P(\text{comes}\mid \langle \text{BOS, here} \rangle) \\
&\times P(\text{the}\mid \langle \text{here, comes} \rangle) \\
&\times P(\text{sun}\mid \langle \text{comes, the} \rangle) \\
&\times P(\text{EOS}\mid \langle \text{the, sun} \rangle)
\end{align}

Let's try and implement it.

**Exercise 3-8** **[2 points]** Complete the class below which implements an interpolated language model.

1. **[1 point]** Start by completing the method `get_parameter`, which computes the interpolated factor $P_{X|H}$ as shown in [Equation (9)](inter-factor) (a short numeric illustration of this equation appears below);
2. **[1 point]** then complete the method `log_prob`, which should use `get_parameter` to compute the log of the interpolated probability $P_{S|N}(x_1^n|n)$ as defined in [Equation (8)](inter-snt-prob).

If your implementation is correct you should be able to reproduce the following result:

```python
lms = [
    LaplaceLM(order=0),  # unigram LM
    LaplaceLM(order=1),  # bigram LM
    LaplaceLM(order=2),  # trigram LM
    LaplaceLM(order=3)   # 4-gram LM
]
# train our models
for lm in lms:
    lm.count_ngrams(preprocess('sec02-21.raw'))
    lm.solve_mle()
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms[0:1], [1.])))
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms[0:2], [0.5, 0.5])))
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms[0:3], [0.5, 0.3, 0.2])))
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms, [0.4, 0.3, 0.15, 0.15])))
```

which should produce

```
7.06497838227
5.03719156331
4.52821212328
4.50461015933
```
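As a quick numeric illustration of Equation (9) (with made-up component probabilities, not values from the trained models):

```python
# hypothetical P(word|history) under models of order 0, 1 and 2
component_probs = [0.01, 0.05, 0.20]
weights = [0.5, 0.3, 0.2]   # lambda_0, lambda_1, lambda_2 (sum to 1)

p_interpolated = sum(w * p for w, p in zip(weights, component_probs))
print(p_interpolated)       # 0.5*0.01 + 0.3*0.05 + 0.2*0.20 = 0.06
```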
###Code
class InterpolatedLM(LM):
def __init__(self, lms, weights):
"""
This class should interpolate language models,
there are certain conditions that they must hold.
:params lms: a list of language models where the lms[i] should have order i
:params weights: a list of positive weights that should sum to 1.0
"""
if not lms:
raise ValueError('I need at least 1 language model')
        if not all(0 < w <= 1 for w in weights) or abs(sum(weights) - 1.0) > 1e-9:
            raise ValueError('LM weights must be positive and sum to 1')
        # Let's check that we have the LMs we need
        for i, lm in enumerate(lms):
            if lm.order() != i:
                raise ValueError('Interpolation requires lms[i] to have order i')
self._max_order = lms[-1].order() # the maximum order
self._lms = lms
self._weights = weights
def order(self):
return self._max_order
def print_count_table(self, output_stream=sys.stdout):
raise NotImplementedError('You do not need to use or implement this method')
def print_prob_table(self, output_stream=sys.stdout):
raise NotImplementedError('You do not need to use or implement this method')
def preprocess_history(self, history):
raise NotImplementedError('You do not need to use or implement this method')
def cpd_items(self, history):
raise NotImplementedError('You do not need to use or implement this method')
def count_ngrams(self, data_stream):
raise NotImplementedError('You do not need to use or implement this method')
def solve_mle(self):
raise NotImplementedError('You do not need to use or implement this method')
def get_parameter(self, history, word):
"""
This function should return the interpolated factor P(X=w|H=h) as defined in Equation (9) above.
:param history: a sequence of words (a tuple)
:param word: a word (a str)
:return: a float representing P(word|history) in the interpolated model
"""
p = 0
for i, lm in enumerate(self._lms):
p += lm.get_parameter(history, word) * self._weights[i]
return p
    def log_prob(self, sentence):
        """
        Compute the log probability of a sentence under this model.
        Note: unlike LM.log_prob, this implementation does not pad the
        sentence with <s>/</s>; it scores positions i >= order only.
        input:
            sentence: a sequence of tokens
        output:
            log probability
        """
        prob_sum = 0
        for i, word in enumerate(sentence):
            if i >= self.order():
                history = tuple(sentence[i - self.order():i])
                prob_sum += np.log(self.get_parameter(history, word))
        return prob_sum
lms = [
LaplaceLM(order=0), # unigram LM
LaplaceLM(order=1), # bigram LM
LaplaceLM(order=2), # trigram LM
LaplaceLM(order=3) # 4-gram LM
]
# train our models
for lm in lms:
lm.count_ngrams(preprocess('sec02-21.raw'))
lm.solve_mle()
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms[0:1], [1.])))
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms[0:2], [0.5, 0.5])))
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms[0:3], [0.5, 0.3, 0.2])))
print(log_perplexity(make_test_generator('sec00.raw'), InterpolatedLM(lms, [0.4, 0.3, 0.15, 0.15])))
###Output
6.93414568618
4.76866792669
3.75866165156
3.39725731171
###Markdown
Bisection:
###Code
# root finding via bisection; mpmath is assumed for arbitrary-precision arithmetic
from mpmath import mp, mpf, ceil, log, sign
from mpmath import cos, cosh, sin, sinh, tan, sec, e, pi

def bisection(function, start, end, precision=53, eps=1e-15, delta=1e-30):
    mp.prec = precision
    start = mpf(start)
    end = mpf(end)
    eps = mpf(eps)
    i = 0
    # upper bound on the number of halvings needed to shrink the bracket to eps
    n = ceil(log((end - start) / eps) / log(2))
    x = start  # in case the loop body never runs
    while abs(function(end)) > eps and i < n:  # and abs(end - start) > delta:
        x = start + ((end - start) / 2)  # midpoint of the current bracket
        if sign(function(start)) != sign(function(x)):
            end = x    # the sign change (root) is in [start, x]
        else:
            start = x  # the sign change (root) is in [x, end]
        i += 1
    print('number of iterations: ', i)
    return x
f1 = lambda x : cos(x) * cosh(x) - 1
f2 = lambda x : (1/x) - tan(x)
f3 = lambda x : (2**(-x)) + e**x + 2 * cos(x) - 6
bisection(f1, 3/2*pi, 2 * pi, eps=1e-30)
bisection(f2, 1e-30, pi/2)
bisection(f3, 1, 3)
###Output
number of iterations: 51
###Markdown
Newton:
###Code
def newton(f, df, start, precision=53, eps=1e-15, delta=1e-30):
    mp.prec = precision
    start = mpf(start)
    eps = mpf(eps)
    i = 0
    x = start - (f(start) / df(start))  # first Newton step
    # rough iteration budget derived from the size of the first step
    n = abs(ceil(log(abs(start - x) / eps) / log(2)))
    while abs(f(x)) > eps and i < n and abs(start - x) > delta:
        start = x
        x = start - (f(start) / df(start))  # x_{k+1} = x_k - f(x_k)/f'(x_k)
        i += 1
    print(i)
    return x
df1 = lambda x : cos(x) * sinh(x) - sin(x)*cosh(x)
newton(f1, df1, 3/2*pi)
df2 = lambda x : -1/x**2 - sec(x)**2
newton(f2, df2, pi/2 - pi/4)
df3 = lambda x : e**x - 2**(-x) * log(2) - 2 * sin(x)
newton(f3, df3, 1)
###Output
7
###Markdown
Secant method:
###Code
def secants(f, first, second, precision=53, eps=1e-15, delta=1e-30):
    mp.prec = precision
    first = mpf(first)
    second = mpf(second)
    eps = mpf(eps)
    i = 0
    n = ceil(log(abs(second - first) / eps) / log(2))  # iteration budget
    x = second - ((f(second) * (second - first)) / (f(second) - f(first)))
    while abs(f(x)) > eps and i < n and abs(x - second) > delta:
        # secant update: drop the oldest point, keep the two newest
        first = second
        second = x
        x = second - ((f(second) * (second - first)) / (f(second) - f(first)))
        i += 1
    print(i)
    return x
secants(f1, 3/2*pi, 2 * pi, eps=1e-30)
secants(f2, 1e-30, pi/2)
secants(f3, 1, 3)
###Output
9
###Markdown
Lab 3 - Asking a Statistical Question

PHYS434 - Advanced Laboratory: Computational Data Analysis
Professor: Miguel Morales
Due date: 10/23/2021
By Erik Solhaug

This week we are going to concentrate on asking a statistical question. This process almost always consists of 3+ steps:
1. Writing down in words very precisely what question you are trying to ask.
2. Translating the precise English question into a mathematical expression. This often includes determining the pdf of the background (possibly including trials), and the integral to do to obtain a probability.
3. Converting the probability into an equivalent sigma.
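Step 3, converting a probability into an equivalent sigma, recurs throughout this lab; here is a minimal sketch with scipy (the tail probability is just an example value):

```python
from scipy import stats

p = 2.87e-7                          # example one-sided tail probability
sigma_equiv = stats.norm.isf(p)      # equivalent sigma of that probability
p_back = stats.norm.sf(sigma_equiv)  # converting back recovers p
print(sigma_equiv, p_back)           # ~5 sigma, ~2.87e-7
```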
###Code
# Importing needed libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy
from scipy import stats, signal
from astropy import units as u
# This sets the size of the plot to something useful
plt.rcParams["figure.figsize"] = (15,10)
# This sets the fontsize of the x- and y-labels
fsize = 30
lsize = 24
###Output
_____no_output_____
###Markdown
Problem 1 In our first example we are looking at the temperature reading (meta-data) associated with an experiment. For the experiment to work reliably, the temperature should be at around 12 Kelvin, and if we look at the data it is mostly consistent with 12 Kelvin to within the 0.4 degree precision of the thermometry and the thermal control system (standard deviation). However, there are times when the thermal control system misbehaved and the temperature was not near 12 K, and in addition there are various glitches in the thermometry that give anomalously high and low readings (the reading does not match the real temperature). We definitely want to identify and throw out all the data when the thermal control system was not working (and the temperature was truly off from nominal). While it is possible to have an error in the thermometry such that the true temperature was fine, and we just had a wonky reading, in an abundance of caution we want to throw those values out too.
###Code
d = np.append(stats.norm.rvs(loc = 12., scale = 0.4, size = 100000), [10., 10.3, 2.1, 0., 0., 15.6, 22.3, 12.7])
fig, ax = plt.subplots(1, 1)
ax.hist(d,100, density=True)
plt.tick_params(labelsize = 24)
plt.yscale('log')
ax.set_xlabel('Temperature (K)', fontsize = fsize)
ax.set_ylabel('Probability Mass', fontsize = fsize)
ax.set_title('Temperature Distribution', fontsize = fsize, fontweight = 'bold')
plt.show()
###Output
_____no_output_____
###Markdown
A) 1. Let's play around with the data and come up with criteria for throwing out certain data points.
###Code
x = np.linspace(10, 14, 1000)
d2 = stats.norm.pdf(x, loc = 12., scale = 0.4)
fig, ax = plt.subplots(1, 1)
ax.hist(d,100, density=True)
ax.plot(x, d2, linewidth = 3)
plt.tick_params(labelsize = 24)
plt.yscale('log')
ax.set_xlabel('Temperature (K)', fontsize = fsize)
ax.set_ylabel('Probability Mass', fontsize = fsize)
ax.set_title('Temperature Distribution', fontsize = fsize, fontweight = 'bold')
plt.show()
fig, ax = plt.subplots(1, 1)
ax.hist(d,100, density=True)
ax.plot(x, d2, linewidth = 3)
ax.vlines(10, 5e-6, 1e0, color='r', linestyle = '--')
ax.vlines(14, 5e-6, 1e0, color='r', linestyle = '--')
plt.tick_params(labelsize = 24)
plt.yscale('log')
ax.set_xlabel('Temperature (K)', fontsize = fsize)
ax.set_ylabel('Probability Mass', fontsize = fsize)
ax.set_title('Temperature Distribution', fontsize = fsize, fontweight = 'bold')
plt.show()
###Output
_____no_output_____
###Markdown
Let's suggest boundaries at 10 and 14 ($\pm$2 K on each side of the mean, i.e. $\pm 5\sigma$ for $\sigma = 0.4$ K) to discriminate 'bad' data points, essentially setting these as thresholds that the data must lie between.

2. If we take the survival function of 14 under our pdf we get the following probability and sigma:
###Code
norm_dist = stats.norm(loc = 12., scale = 0.4)
prob = norm_dist.sf(14)
sigma = round(-stats.norm.ppf(prob, loc=0, scale=1), 4)
sigma
###Output
_____no_output_____
###Markdown
This seems to be a good threshold for our data: if a value lies more than five sigma from the mean of the distribution, we will throw the data point away. 5 sigma sits right outside the bulk of our distribution and excludes the data points that are outliers.

Then, our statistical question becomes: _Does the data point lie further than $5\sigma$ from the mean of our distribution?_ If this is the case, we will throw out the data point.

3. We now restate our question in mathematical terms, for a data point with value $V$:
###Code
def exclude_data(dist, V, sigma):
'''
Returns True if data point should be thrown out,
False if it should be kept.
'''
Vprob = dist.sf(V)
Vsigma = -stats.norm.ppf(Vprob, loc=0, scale=1)
if abs(Vsigma) > sigma:
exclude = True
else:
exclude = False
return exclude
###Output
_____no_output_____
###Markdown
We run this in a loop and get:
###Code
included_array = []
excluded_array = []
for item in d:
if exclude_data(norm_dist, item, sigma):
excluded_array.append(item)
else:
included_array.append(item)
print(f'Excluded: {excluded_array}')
###Output
Excluded: [2.1, 0.0, 0.0, 15.6, 22.3]
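As a cross-check on part B) below, the expected number of good readings excluded by a two-sided k-sigma cut can be computed directly; N = 100000 matches the simulated good readings:

```python
from scipy import stats

N_good = 100000
for k in [3, 4, 5]:
    expected_omissions = N_good * 2 * stats.norm.sf(k)
    print(f'k = {k} sigma: expect ~{expected_omissions:.3f} good points excluded')
```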
###Markdown
4. Reminder: Our 'bad' data points are {10., 10.3, 2.1, 0., 0., 15.6, 22.3, 12.7}
###Code
bad_data = [10., 10.3, 2.1, 0., 0., 15.6, 22.3, 12.7]
kept_bad_data = []
for i in included_array:
for k in bad_data:
if i == k:
kept_bad_data.append(i)
bad_data, kept_bad_data
len(d) - len(bad_data), len(bad_data)
len(included_array), len(excluded_array), len(excluded_array) + len(included_array), len(kept_bad_data)
###Output
_____no_output_____
###Markdown
We construct a truth table showing our results from above:

| | **True T** | **Bad T** |
| --- | --- | --- |
| Test Include | 100000 | 3 |
| Test Exclude | 0 | 5 |
| Total | 100000 | 8 |

B) Now we evaluate how the omissions (throwing out 'good' data) depend on the threshold (sigma) chosen above. Since the test does not omit any good data at my threshold of $5\sigma$, raising the threshold to a larger sigma changes nothing. However, if we decreased the threshold far enough that the statistical 'inclusion zone' becomes narrower than the background distribution itself, the test would start excluding good data points.

C) Some 'bad' data points still get into my final distribution even after the statistical test. These sit within the bulk of the background distribution, so they fall inside the 'inclusion zone' defined by my threshold of $\pm \: 5\sigma$ and are not omitted. There is no way to tighten the threshold, effectively the width of the inclusion zone, enough to exclude them without also excluding good data.

Problem 2

In this example we will be looking for asteroids. If we look at the alignment of stars on subsequent images, they don't perfectly align due to atmospheric and instrumental effects (even ignoring proper motion). The resulting distribution is two-dimensional, and for this lab let's assume it is a 2D Gaussian with 1 arcsecond RMS. Or said another way, if I histogram how far all the (stationary) stars appear to have moved I get something like:
###Code
a = np.vstack((stats.norm.rvs( scale = 1, size = 100000), stats.norm.rvs( scale = 1, size = 100000)))
a.shape
fig, ax = plt.subplots(1, 1)
h = ax.hist2d(a[0,:],a[1,:],bins=100, density=True);
ax.set_aspect('equal', 'box')
plt.xlim([-3 , 3])
plt.ylim([-3 , 3])
plt.title("2D Histogram of positional uncertainty", fontsize = 24)
plt.ylabel("$\Delta$y arcseconds", fontsize = 18)
plt.xlabel("$\Delta$x arcseconds", fontsize = 18)
plt.colorbar(h[3], ax=ax);
###Output
_____no_output_____
###Markdown
If I have a potential asteroid, it will have some true movement between the images. We would like a '5 sigma' detection of movement. What is that distance in arcseconds?

1. We know that our 2D Gaussian is related to a Rayleigh distribution: if the two Gaussian components each have standard deviation $\sigma$, the radial distance follows a Rayleigh distribution with scale parameter $\sigma$. Let's state our statistical question in words:

_What is the distance in arcseconds such that the integral of the Rayleigh distribution from that distance to infinity corresponds to a 5-sigma probability?_

2. For a value V, Rayleigh distribution $R(x)$ and standard normal distribution $N(x)$:

$$ \int_{V}^{\infty}{ R(x)\, dx} = \int_{5}^{\infty}{ N(x)\, dx} $$

Then we apply $isf()$ to the right-hand probability to find the value of V. (Essentially, our mathematical question asks what value of V makes this equation true.)

3.
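A quick Monte Carlo sanity check of this Gaussian-to-Rayleigh relationship (a sketch, using the same per-axis sigma of 1 as above):

```python
import numpy as np
from scipy import stats

sigma = 1.0
dx = stats.norm.rvs(scale=sigma, size=100000)
dy = stats.norm.rvs(scale=sigma, size=100000)
r = np.hypot(dx, dy)   # radial offsets of the simulated stars

# empirical mean vs the Rayleigh(scale=sigma) mean, sigma*sqrt(pi/2)
print(r.mean(), stats.rayleigh(scale=sigma).mean())
```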
###Code
prob_5sigma = 1/(3.5e6)  # one-sided 5-sigma tail probability
sigma_gaussian = 1
sigma_rayleigh = sigma_gaussian  # the Rayleigh scale equals the per-axis Gaussian sigma
rayleigh = stats.rayleigh(scale = sigma_rayleigh)
det = rayleigh.isf(prob_5sigma)
det
# stats.norm.isf(rayleigh.sf(prob_5sigma))
print(f'This means that the detection of movement of 5 sigma corresponds to {det} arcseconds')
###Output
This means that the detection of movement of 5 sigma corresponds to 5.489676406940512 arcseconds
###Markdown
Problem 3

As we discussed in class, one of the key backgrounds for gamma-ray telescopes is cosmic rays. Cosmic rays are charged particles, usually protons or electrons, but they can include atomic nuclei such as alpha particles (helium) or iron. Because of their charge, cosmic rays spiral in the magnetic field of the galaxy. From the perspective of the Earth they appear to come uniformly from all directions, like a high-energy gas, and the direction a cosmic ray is travelling when it reaches the Earth tells us nothing about where it came from, because we don't know what tortured path it has taken through the galaxy to reach us. However, at trillion electron volt energies and above, the spiral loops are fairly big and the sun and the moon will block cosmic rays. This means the sun and the moon appear as holes in the cosmic-ray sky (cosmic rays from those directions are absorbed).

Assume that in a moon-sized patch of the sky we normally have a cosmic-ray rate of 1 cosmic ray per minute (arrivals are random in time). We observe where the moon is for 8 hours per night (not too close to the horizon), we observe for 15 days, and we see 6800 cosmic rays. Let's find the significance of our moon-shadow detection.

1. We assume the cosmic rays follow a Poisson distribution, since we are dealing with rates of events. In this problem we are not dealing with trials, since there is no look-elsewhere effect: we are not looking for the 'brightest' candidate among many signals. Rather, we are adding our exposures together to extend the observing time. Thus we are convolving the one-minute distribution with itself 7200 times (see the cell below). However, we know that a Poisson distribution convolved with another Poisson distribution is a Poisson distribution whose mean is the sum of the means.
###Code
(8 * u.hour * 15).to(u.min)/u.min # 8 hours and 15 days
###Output
_____no_output_____
###Markdown
We state our statistical question: _What is the probability that the normally occurring cosmic-ray background, a Poisson distribution with mean 7200, produces a count as low as 6800 cosmic rays?_

2. We will let $Y = 6800$ (the variable used in the code below). We start by showing the background for the full observation (15 days of 8-hour exposures).
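The convolution property used above (a sum of independent Poisson counts is again Poisson, with the summed mean) can be verified by simulation; this sketch is separate from the detection calculation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
minutes = 7200   # 15 nights x 8 hours x 60 minutes
reps = 1000

# totals from summing 7200 one-per-minute Poisson draws ...
totals = rng.poisson(lam=1.0, size=(reps, minutes)).sum(axis=1)

# ... behave like a single Poisson(7200): mean and variance both ~7200
print(totals.mean(), totals.var())
print(stats.poisson(7200).mean(), stats.poisson(7200).var())
```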
###Code
N = 7200
trials = 1
mu = 1
resolution = 1
background = stats.poisson(mu*N)
xmin, xmax = (6000, 8000)
x = np.arange(xmin, xmax+1, resolution)
cx = np.arange(xmin, xmax+1, resolution/N)
# cxstairs = (np.arange(xmin, xmax+1+0.5*resolution/N, resolution) - 0.5*resolution/N)/N
cxstairs = (np.arange(xmin, xmax+1+0.5*resolution, resolution) - 0.5*resolution)
fig, ax = plt.subplots(1, 1)
plt.tick_params(labelsize = lsize/2)
ax.stairs(background.pmf(x), cxstairs, fill=True)
ax.set_xlim([6500, 7900])
ax.set_xlabel('N cosmic rays', fontsize = fsize)
ax.set_ylabel('Probability Mass', fontsize = fsize)
ax.set_title('15 days of 8-hour exposures', fontsize = fsize, fontweight = 'bold')
plt.show()
fig, ax = plt.subplots(1, 1)
plt.tick_params(labelsize = lsize/2)
ax.stairs(background.pmf(x), cxstairs, fill=True)
ax.set_xlim([6500, 7900])
ax.set_ylim([1e-21, 1e-2])
ax.set_xlabel('N cosmic rays', fontsize = fsize)
ax.set_ylabel('Probability Mass', fontsize = fsize)
ax.set_title('15 days of 8-hour exposures', fontsize = fsize, fontweight = 'bold')
ax.set_yscale('log')
plt.show()
###Output
_____no_output_____
###Markdown
This is the $pmf()$ of the background (the Poisson distribution is discrete).
###Code
Y = 6800
Y
###Output
_____no_output_____
###Markdown
Let's describe the calculation we need to do for a 6800 cosmic ray detection. Since this value is smaller than the mean $\mu$ of the distribution, we accumulate probability from the left up to our value $Y = 6800$ (a sum, since the Poisson distribution is discrete). Our equation then becomes:

$$ \sum_{x=0}^{Y} P(x) = \int_{\sigma}^{\infty}{ N(x)\, dx} $$
###Code
prob_moon = (background.cdf(Y)) # We have to integrate from the left, since we are observing a deviation from the normal **less than** the mean
prob_moon
print(f'The probability of detecting 6800 cosmic rays in our observation is {prob_moon:.2e}.')
###Output
The probability of detecting 6800 cosmic rays in our observation is 1.01e-06.
###Markdown
3.
###Code
sigma_moon = abs(stats.norm.ppf(prob_moon))
print(f'The sigma of our detection is {sigma_moon:.3}.')
###Output
The sigma of our detection is 4.75.
###Markdown
Homework lab No. 3 in computational mathematics

Державин Андрей, group Б01-909

Problem __IV.12.7(г)__

$$\left\lbrace\begin{matrix}\cos{y} - x = -0.85 \\\sin{x} - y = 1.32 \\\end{matrix}\right., \: \: \varepsilon = 10^{-5}$$
###Code
import numpy as np
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Method of simple iterations (fixed-point iteration)

From the graph we can see that the solution lies in the region

$$G : \left\lbrace\begin{matrix}x \in \left[1.5, 2 \right]\\y \in \left[-0.5, 0 \right]\end{matrix}\right\rbrace$$

Choose the starting point for the iteration as the midpoint:

$$\vec{x}_0 = (1.8, -0.3)$$

Narrow the region:

$$G : \left\lbrace\begin{matrix}x \in \left[1.7, 1.9 \right]\\y \in \left[-0.4, -0.2 \right]\end{matrix}\right\rbrace$$

Rewrite the system in the form $$\vec{x}_{n+1} = \vec{\varphi}\left(\vec{x}_{n} \right)$$:

$$\left\lbrace\begin{matrix}x = 0.85 + \cos{y}\\y = -1.32 + \sin{x}\\\end{matrix}\right. \Leftrightarrow\left\lbrace\begin{matrix}\varphi_1 = x_{n+1} = 0.85 + \cos{y_n}\\\varphi_2 = y_{n+1} = -1.32 + \sin{x_n}\\\end{matrix}\right.$$

Compute the matrix:

$$M = \left( \begin{matrix}\frac{\partial \varphi_1}{\partial x} & \frac{\partial \varphi_1}{\partial y} \\\frac{\partial \varphi_2}{\partial x} & \frac{\partial \varphi_2}{\partial y} \\\end{matrix}\right)$$

$$\frac{\partial \varphi_1}{\partial x} = 0, \: \frac{\partial \varphi_1}{\partial y} = -\sin{y_n}, \qquad \left|\frac{\partial \varphi_1}{\partial y}\right| \leqslant\sin{0.4} < 0.4$$

$$\frac{\partial \varphi_2}{\partial x} = \cos{x_n}, \: \frac{\partial \varphi_2}{\partial y} = 0, \qquad \left|\frac{\partial \varphi_2}{\partial x} \right| \leqslant\left|\cos{1.9}\right| < 0.33$$

Thus

$$\left|\left| M \right|\right|_2 < 1 \Rightarrow \text{the fixed-point iteration converges}$$
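The contraction bounds claimed above can be checked numerically over the stated region (a small sketch):

```python
import numpy as np

ys = np.linspace(-0.4, -0.2, 1001)
xs = np.linspace(1.7, 1.9, 1001)
print(np.abs(np.sin(ys)).max())   # |d(phi_1)/dy| < 0.4
print(np.abs(np.cos(xs)).max())   # |d(phi_2)/dx| < 0.33 -> ||M|| < 1, the iteration converges
```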
###Code
epsilon = 1e-5
x, y = 1.8, -0.3
xprev, yprev = 0, 0
iters = 0
while 1:
iters += 1
xprev, yprev = x, y
x, y = 0.85 + np.cos(y), -1.32 + np.sin(x)
if max(abs(x - xprev), abs(y - yprev)) < epsilon:
break
print(f'(x, y) = ({x}, {y})')
print(f'iters = {iters}')
mpi_it = iters
###Output
(x, y) = (1.7913388724813675, -0.34421976363320883)
iters = 8
###Markdown
Newton's method
###Code
def norm_1(matr):
max_s = 0
rows, cols = matr.shape
for j in range(cols):
sum = 0
for i in range(rows):
sum += abs(matr[i][j])
max_s = max(sum, max_s)
return max_s
def norm_2(matr):
rows, cols = matr.shape
max_s = 0
for i in range(rows):
max_s = max(sum(abs(matr[i])), max_s)
return max_s
def norm_3(matr):
return np.sqrt(max(abs(np.linalg.eigvals(np.dot(matr, matr.transpose())))))
def vec_n1(vec):
return max(abs(vec))
def vec_n2(vec):
return sum(abs(vec))
def vec_n3(vec):
return np.sqrt(sum(vec * vec))
###Output
_____no_output_____
###Markdown
$$\vec{x}_{n+1} = \vec{x}_n - J^{-1}(\vec{x}_n) \cdot \vec{f}(\vec{x}_n)$$Где $$\left\lbrace\begin{matrix}f_1 = \cos{y} - x + 0.85\\f_2 = -y -1.32 + \sin{x}\\\end{matrix}\right.$$Будем решать СЛАУ для приращения:$$ J(⃗\vec{x}_n) \cdot \overrightarrow{\Delta x} = - \vec{f}(\vec{x}_n)$$$$J= \left(\begin{matrix}-1 & -\sin{y} \\ \cos{x} & -1 \\\end{matrix}\right)$$
###Code
def get_LUD(matr):
sz = matr.shape[0]
D = np.zeros((sz, sz))
L = np.zeros((sz, sz))
U = np.zeros((sz, sz))
for i in range(sz):
for j in range(sz):
if i == j:
D[i][j] = matr[i][j]
elif i > j:
L[i][j] = matr[i][j]
else:
U[i][j] = matr[i][j]
return L, U, D
def zeidel(matr, b, eps, norm):
    # Gauss-Seidel iteration: x_{k+1} = -(L+D)^{-1} U x_k + (L+D)^{-1} b
converged = False
x = np.zeros((matr.shape[0], 1))
L, U, D = get_LUD(matr)
LDinv = np.linalg.inv(L + D)
LDinvU = np.dot(LDinv, U)
while not converged:
x_new = -np.dot(LDinvU, x) + np.dot(LDinv, b)
converged = norm(x_new - x) < eps
x = x_new
return x.transpose()[0]
def getJ(x):
return np.array([
[-1, -np.sin(x[1])],
[np.cos(x[0]), -1]
])
def f(x):
return np.array([[
np.cos(x[1]) - x[0] + 0.85,
np.sin(x[0]) - x[1] - 1.32,
]]
).transpose()
def runNewton(x):
J = getJ(x)
Jinv = np.linalg.inv(J)
mu = norm_1(J) * norm_1(Jinv)
        assert mu < 10, f'Condition number too large: {mu}'
return zeidel(J, -f(x), 1e-12, norm_2)
x = np.array([1.8, -0.3])
converged = False
iters = 0
while not converged:
iters += 1
xprev = x
dx = runNewton(x)
x += dx
converged = vec_n2(dx) < epsilon
print(f'(x, y) = ({x[0]}, {x[1]})')
print(f'iters = {iters}')
###Output
(x, y) = (1.7913386099639974, -0.34422103640676516)
iters = 3
###Markdown
Comparison of the methods
###Code
print(f"Число итераций МПИ: {mpi_it}")
print(f"Число итераций метода Ньютона: {iters}")
###Output
Number of iterations, simple iteration method: 8
Number of iterations, Newton's method: 3
|
docs/README figure.ipynb | ###Markdown
JWST NIRCam
###Code
nc = webbpsf.NIRCam()
nc.filter = 'F210M'
psf, interm = nc.calc_psf(display=True, return_intermediates=True)
###Output
[webbpsf] NIRCam aperture name updated to NRCA1_FULL
[ poppy] No source spectrum supplied, therefore defaulting to 5700 K blackbody
[ poppy] Computing wavelength weights using synthetic photometry for F210M...
[ poppy] PSF calc using fov_arcsec = 5.000000, oversample = 4, number of wavelengths = 9
[webbpsf] Creating optical system model:
[ poppy] Initialized OpticalSystem: JWST+NIRCam
[ poppy] JWST Entrance Pupil: Loaded amplitude transmission from /Users/rgeda/project/data/webbpsf-data/jwst_pupil_RevW_npix1024.fits.gz
[ poppy] JWST Entrance Pupil: Loaded OPD from /Users/rgeda/project/data/webbpsf-data/NIRCam/OPD/OPD_RevW_ote_for_NIRCam_requirements.fits.gz
[ poppy] Added pupil plane: JWST Entrance Pupil
[ poppy] Added coordinate inversion plane: OTE exit pupil
[ poppy] Added pupil plane: NIRCamSWA internal WFE at V2V3=(2.01,-8.79)', near ISIM41
[ poppy] Added detector with pixelscale=0.0311 and oversampling=4: NIRCam detector
[ poppy] Calculating PSF with 9 wavelengths
[ poppy] User requested saving intermediate wavefronts in call to poppy.calc_psf
[ poppy] Propagating wavelength = 1.99178e-06 m
[webbpsf] Applying OPD focus adjustment based on NIRCam focus vs wavelength model
[webbpsf] Modified focus from 2.12 to 1.9917835671342685 um: -16.719 nm wfe
[webbpsf] Resulting OPD has 26.805 nm rms
[ poppy] Propagating wavelength = 2.01824e-06 m
###Markdown
Roman WFI
###Code
wfi = roman.WFI()
wfi.filter = 'F087'
wpsf, winterm = wfi.calc_psf(display=True, return_intermediates=True)
###Output
[webbpsf] Using the unmasked WFI pupil shape based on filter requested
[webbpsf] Using the unmasked WFI pupil shape based on filter requested
[ poppy] No source spectrum supplied, therefore defaulting to 5700 K blackbody
[ poppy] Computing wavelength weights using synthetic photometry for F087...
[webbpsf] Using the unmasked WFI pupil shape based on filter requested
[ poppy] PSF calc using fov_arcsec = 5.000000, oversample = 4, number of wavelengths = 10
[webbpsf] Creating optical system model:
[ poppy] Initialized OpticalSystem: Roman+WFI
[ poppy] Roman Entrance Pupil: Loaded amplitude transmission from /Users/rgeda/project/data/webbpsf-data/WFI/pupils/SCA1_rim_mask.fits.gz
[ poppy] Roman Entrance Pupil: Loaded OPD from /Users/rgeda/project/data/webbpsf-data/upscaled_HST_OPD.fits
[ poppy] Added pupil plane: Roman Entrance Pupil
[ poppy] Added coordinate inversion plane: OTE exit pupil
[ poppy] Added pupil plane: Field Dependent Aberration (SCA01)
[ poppy] Added detector with pixelscale=0.11 and oversampling=4: WFI detector
[ poppy] Calculating PSF with 10 wavelengths
[ poppy] User requested saving intermediate wavefronts in call to poppy.calc_psf
[ poppy] Propagating wavelength = 7.62706e-07 m
[ poppy] Propagating wavelength = 7.8688e-07 m
###Markdown
Make the figure
###Code
fig, axes = plt.subplots(nrows=1, ncols=6, figsize=(16, 4))
for ax in axes:
ax.patch.set_facecolor('black')
crop_arcsec = 2.0
# JWST NIRCam
phasemap = interm[0].phase.copy()
phasemap[interm[0].intensity == 0.0] = np.nan
axes[0].imshow(phasemap, cmap='RdBu_r')
webbpsf.display_psf(psf, ext='OVERSAMP', colorbar=False, ax=axes[1], imagecrop=crop_arcsec)
webbpsf.display_psf(psf, ext='DET_SAMP', colorbar=False, ax=axes[2], imagecrop=crop_arcsec)
# Roman WFI
wphasemap = winterm[2].phase.copy()
wphasemap[winterm[2].intensity == 0.0] = np.nan
axes[3].imshow(wphasemap, cmap='RdBu_r')
webbpsf.display_psf(wpsf, ext='OVERSAMP', colorbar=False, ax=axes[4], imagecrop=crop_arcsec)
webbpsf.display_psf(wpsf, ext='DET_SAMP', colorbar=False, ax=axes[5], imagecrop=crop_arcsec)
for ax in axes:
ax.set_title('')
ax.set_ylabel('')
ax.set_xlabel('')
ax.set_xticks([])
ax.set_yticks([])
plt.savefig('./readme_fig.png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
JWST NIRCam
###Code
nc = webbpsf.NIRCam()
nc.filter = 'F210M'
psf, interm = nc.calc_psf(display=True, return_intermediates=True)
###Output
[ poppy] No source spectrum supplied, therefore defaulting to 5700 K blackbody
[ poppy] Computing wavelength weights using synthetic photometry for F210M...
[ poppy] PSF calc using fov_arcsec = 5.000000, oversample = 4, number of wavelengths = 10
[webbpsf] Creating optical system model:
[ poppy] Initialized OpticalSystem: JWST+NIRCam
[ poppy] JWST Entrance Pupil: Loaded amplitude transmission from /Users/jlong/software/webbpsf-data/pupil_RevV.fits
[ poppy] JWST Entrance Pupil: Loaded OPD from /Users/jlong/software/webbpsf-data/NIRCam/OPD/OPD_RevV_nircam_155.fits
[ poppy] The supplied pupil OPD is a datacube but no slice was specified. Defaulting to use slice 0.
[ poppy] Added pupil plane: JWST Entrance Pupil
[ poppy] Added coordinate inversion plane: OTE exit pupil
[ poppy] Added detector with pixelscale=0.0311 and oversampling=4: NIRCam detector
[ poppy] Calculating PSF with 10 wavelengths
[ poppy] User requested saving intermediate wavefronts in call to poppy.calc_psf
[ poppy] Propagating wavelength = 2.0054e-06 m
[ poppy] Propagating wavelength = 2.0262e-06 m
[ poppy] Propagating wavelength = 2.047e-06 m
[ poppy] Propagating wavelength = 2.0678e-06 m
[ poppy] Propagating wavelength = 2.0886e-06 m
[ poppy] Propagating wavelength = 2.1094e-06 m
[ poppy] Propagating wavelength = 2.1302e-06 m
[ poppy] Propagating wavelength = 2.151e-06 m
[ poppy] Propagating wavelength = 2.1718e-06 m
[ poppy] Propagating wavelength = 2.1926e-06 m
[ poppy] Calculation completed in 9.743 s
[ poppy] PSF Calculation completed.
[ poppy] Adding extension with image downsampled to detector pixel scale.
###Markdown
WFIRST WFI
###Code
wfi = wfirst.WFI()
wfi.filter = 'Z087'
wpsf, winterm = wfi.calc_psf(display=True, return_intermediates=True)
###Output
[ poppy] No source spectrum supplied, therefore defaulting to 5700 K blackbody
[ poppy] Computing wavelength weights using synthetic photometry for Z087...
[webbpsf] Using the unmasked WFI pupil shape based on wavelengths requested
[ poppy] PSF calc using fov_arcsec = 5.000000, oversample = 4, number of wavelengths = 10
[webbpsf] Creating optical system model:
[ poppy] Initialized OpticalSystem: WFIRST+WFI
[ poppy] WFIRST Entrance Pupil: Loaded amplitude transmission from /Users/jlong/software/webbpsf-data/wfc_pupil_rev_mcr.fits
[ poppy] WFIRST Entrance Pupil: Loaded OPD from /Users/jlong/software/webbpsf-data/upscaled_HST_OPD.fits
[ poppy] Added pupil plane: WFIRST Entrance Pupil
[ poppy] Added coordinate inversion plane: OTE exit pupil
[ poppy] Added pupil plane: Field Dependent Aberration (SCA01)
[ poppy] Added detector with pixelscale=0.11 and oversampling=4: WFI detector
[ poppy] Calculating PSF with 10 wavelengths
[ poppy] User requested saving intermediate wavefronts in call to poppy.calc_psf
[ poppy] Propagating wavelength = 7.7085e-07 m
[ poppy] Propagating wavelength = 7.9255e-07 m
[ poppy] Propagating wavelength = 8.1425e-07 m
[ poppy] Propagating wavelength = 8.3595e-07 m
[ poppy] Propagating wavelength = 8.5765e-07 m
[ poppy] Propagating wavelength = 8.7935e-07 m
[ poppy] Propagating wavelength = 9.0105e-07 m
[ poppy] Propagating wavelength = 9.2275e-07 m
[ poppy] Propagating wavelength = 9.4445e-07 m
[ poppy] Propagating wavelength = 9.6615e-07 m
[ poppy] Calculation completed in 54.563 s
[ poppy] PSF Calculation completed.
[ poppy] Adding extension with image downsampled to detector pixel scale.
###Markdown
Make the figure
###Code
fig, axes = plt.subplots(nrows=1, ncols=6, figsize=(16, 4))
crop_arcsec = 2.0
# JWST NIRCam
phasemap = interm[0].phase.copy()
phasemap[interm[0].intensity == 0.0] = np.nan
axes[0].imshow(phasemap, cmap='RdBu_r')
webbpsf.display_psf(psf, ext='OVERSAMP', colorbar=False, ax=axes[1], imagecrop=crop_arcsec)
webbpsf.display_psf(psf, ext='DET_SAMP', colorbar=False, ax=axes[2], imagecrop=crop_arcsec)
# WFIRST WFI
wphasemap = winterm[2].phase.copy()
wphasemap[winterm[2].intensity == 0.0] = np.nan
axes[3].imshow(wphasemap, cmap='RdBu_r')
webbpsf.display_psf(wpsf, ext='OVERSAMP', colorbar=False, ax=axes[4], imagecrop=crop_arcsec)
webbpsf.display_psf(wpsf, ext='DET_SAMP', colorbar=False, ax=axes[5], imagecrop=crop_arcsec)
for ax in axes:
ax.set_title('')
ax.set_ylabel('')
ax.set_xlabel('')
ax.set_xticks([])
ax.set_yticks([])
plt.savefig('./readme_fig.png', bbox_inches='tight')
###Output
_____no_output_____ |
jupyterbooks/2019-04-17-Bayesian Linear Regression using PyMC3.ipynb | ###Markdown
Introduction

> In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters.

**Linear Regression**

In simple linear regression we get point estimates by

$$ y = \alpha\ + \beta\ *x $$

The equation says there is a linear relationship between the variables $x$ and $y$. The slope is controlled by $\beta$ and the intercept tells us the value of $y$ when $x=0$. Methods like Ordinary Least Squares optimize the parameters to minimize the error between the observed $y$ and the predicted $y$. These methods return only a single best value for the parameters.

**Bayesian Approach:**

The same problem can be stated in a probabilistic framework. We can obtain the best values of $\alpha$ and $\beta$ along with their uncertainty estimates. Probabilistically, linear regression can be expressed as:

$$ y \sim N(\mu=\alpha+\beta x, \sigma=\varepsilon) $$

$y$ is observed as a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$; unlike in OLS regression, it is treated as normally distributed. Since we do not know the values of $\alpha$, $\beta$ and $\varepsilon$, we have to set prior distributions for them:

$$ \begin{array}{l}{\alpha \sim N\left(\mu_{\alpha}, \sigma_{\alpha}\right)} \\ {\beta \sim N\left(\mu_{\beta}, \sigma_{\beta}\right)} \\ {\varepsilon \sim U\left(0, h_{s}\right)}\end{array}$$

In this post, I'm going to demonstrate a very simple linear regression problem with both the OLS and the Bayesian approach. We will use the [PyMC3 package](https://docs.pymc.io/). PyMC3 is a Python package for Bayesian statistical modeling and probabilistic machine learning.
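To make the generative story concrete, here is a minimal sketch that draws data from the model $y \sim N(\alpha + \beta x, \varepsilon)$ for fixed, made-up parameter values (this mirrors how the artificial data below is generated):

```python
import numpy as np

rng = np.random.RandomState(0)
alpha_true, beta_true, eps_true = -5.0, 2.0, 2.0   # made-up 'true' parameters

x = 10 * rng.rand(50)
mu = alpha_true + beta_true * x           # deterministic mean of the likelihood
y = rng.normal(loc=mu, scale=eps_true)    # y ~ N(mu, eps)
print(x[:3], y[:3])
```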
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
###Output
_____no_output_____
###Markdown
Generate linear artificial data
###Code
rng = np.random.RandomState(1)
x = 10 * rng.rand(50)
y = 2 * x - 5 + 2*rng.randn(50)
x = x.reshape(-1, 1)
y = y.reshape(-1, 1)
plt.scatter(x, y)
###Output
_____no_output_____
###Markdown
Create PyMC3 model
###Code
import pymc3 as pm
print('Running on the PyMC3 v{}'.format(pm.__version__))
basic_model = pm.Model()
###Output
Running on the PyMC3 v3.6
###Markdown
Define model parameters

A model context is created using the `with` statement; using the context makes it easy to assign parameters to the model. Distributions for $\alpha$, $\beta$ and $\varepsilon$ are defined. $\mu$ is a deterministic variable which is calculated using the line equation. `Ylikelihood` is the likelihood, defined by a Normal distribution with $\mu$ and $\sigma$; the observed values are passed to this distribution.
###Code
with basic_model as bm:
#Priors
alpha = pm.Normal('alpha', mu=0, sd=10)
beta = pm.Normal('beta', mu=0, sd=10)
sigma = pm.HalfNormal('sigma', sd=1)
# Deterministics
mu = alpha + beta*x
# Likelihood
Ylikelihood = pm.Normal('Ylikelihood', mu=mu, sd=sigma, observed=y)
###Output
_____no_output_____
###Markdown
Sampling from posterior

Now that the model is fully defined, we can sample from the posterior. PyMC3 automatically chooses an appropriate sampler depending on the type of variables. In our case of continuous variables, NUTS is used.
###Code
trace = pm.sample(draws=2000,model=bm)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [sigma, beta, alpha]
100%|██████████| 2500/2500 [00:03<00:00, 770.37it/s]
100%|██████████| 2500/2500 [00:03<00:00, 721.84it/s]
The acceptance probability does not match the target. It is 0.8850329863869828, but should be close to 0.8. Try to increase the number of tuning steps.
###Markdown
Trace summary and plots

The trace plot shows sample histograms and the sampled values.
###Code
pm.traceplot(trace)
print(pm.summary(trace).round(2))
###Output
mean sd mc_error hpd_2.5 hpd_97.5 n_eff Rhat
alpha -5.00 0.48 0.01 -5.93 -4.05 1619.65 1.0
beta 2.05 0.09 0.00 1.89 2.23 1583.34 1.0
sigma 1.83 0.18 0.00 1.49 2.18 2254.55 1.0
###Markdown
Checking autocorrelation

A bar plot of the autocorrelation function for a trace can be plotted using [pymc3.plots.autocorrplot](https://docs.pymc.io/api/plots.html). Autocorrelation dictates how long you have to wait for convergence: if autocorrelation is high, you will have to use a longer burn-in; low autocorrelation means good exploration.
###Code
pm.autocorrplot(trace)
###Output
_____no_output_____
###Markdown
Comparing parameters with Simple Linear Regression (OLS)Parameters can be cross checked using Simple Linear Regression.
###Code
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
ypred = lm.fit(x,y).predict(x)
plt.scatter(x,y)
plt.plot(x,ypred)
legend_title = 'Simple Linear Regression\n {} + {}*x'.format(round(lm.intercept_[0],2),round(lm.coef_[0][0],2))
legend = plt.legend(loc='upper left', frameon=False, title=legend_title)
plt.title("Simple Linear Regression")
plt.show()
###Output
No handles with labels found to put in legend.
###Markdown
Parameters are almost identical for both PyMC3 and Simple Linear Regression:

**Intercept:** OLS: -5.0, PyMC3: -5.0

**Coefficient:** OLS: 2.03, PyMC3: 2.05

Plotting traces
###Code
plt.plot(x, y, 'b.');
idx = range(0, len(trace['alpha']), 10)
alpha_m = trace['alpha'].mean()
beta_m = trace['beta'].mean()
plt.plot(x, trace['alpha'][idx] + trace['beta'][idx] *x, c='gray', alpha=0.2);
plt.plot(x, alpha_m + beta_m * x, c='k', label='y = {:.2f} + {:.2f}* x'.format(alpha_m, beta_m))
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$y$', fontsize=16, rotation=0)
plt.legend(loc=2, fontsize=14)
plt.title("Traces")
plt.show()
###Output
_____no_output_____
###Markdown
Posterior PlotsPlot Posterior densities in style of John K. Kruschke's book.
###Code
pm.plots.plot_posterior(trace)
###Output
_____no_output_____
###Markdown
Forest PlotsGenerates a “forest plot” of 100*(1-alpha)% credible intervals from a trace or list of traces.
###Code
pm.plots.forestplot(trace)
###Output
_____no_output_____
###Markdown
Plotting energy distributionsPlot energy transition distribution and marginal energy distribution in order to diagnose poor exploration by HMC algorithms.
###Code
pm.plots.energyplot(trace)
###Output
_____no_output_____
###Markdown
Density PlotsGenerates KDE plots for continuous variables. Plots are truncated at their 100*(1-alpha)% credible intervals.
###Code
pm.plots.densityplot(trace)
###Output
_____no_output_____
###Markdown
Sampling from PosteriorPosterior predictive checks (PPCs) are a great way to validate a model. The idea is to generate data from the model using parameters from draws from the posterior.We will draw samples from the observed model.
###Code
ypred = pm.sampling.sample_posterior_predictive(model=bm,trace=trace, samples=500)
y_sample_posterior_predictive = np.asarray(ypred['Ylikelihood'])
_, ax = plt.subplots()
ax.hist([n.mean() for n in y_sample_posterior_predictive], bins=19, alpha=0.5)
ax.axvline(y.mean())
ax.set(title='Posterior predictive of the mean', xlabel='mean(x)', ylabel='Frequency');
###Output
_____no_output_____
###Markdown
Plotting HPD intervals

We can plot credible intervals to see the range within which unobserved parameter values fall with a particular subjective probability. The HPD has the nice property that any point **within** the interval has a higher density than any point outside it. To calculate the highest posterior density (HPD) of an array for a given alpha, we use the function provided by PyMC3: [pymc3.stats.hpd()](https://docs.pymc.io/api/stats.html)
###Code
sig0 = pm.hpd(y_sample_posterior_predictive, alpha=0.5)
sig1 = pm.hpd(y_sample_posterior_predictive, alpha=0.05)
#removing extra dimension
sig0 = np.squeeze(sig0)
sig1 = np.squeeze(sig1)
#Reshaping and sorting
inds = x.ravel().argsort()
x_ord = x.ravel()[inds].reshape(-1)
y= y[inds]
sig0ord= sig0[inds]
sig1ord = sig1[inds]
pal = sns.color_palette('Purples')
#plt.plot(x, y, 'b.')
plt.scatter(x_ord,y)
plt.plot(x, alpha_m + beta_m * x, c='k', label='y = {:.2f} + {:.2f}* x'.format(alpha_m, beta_m))
plt.fill_between(x_ord, sig0ord[:,0], sig0ord[:,1], color='red',alpha=1,label="50% interval")
plt.fill_between(x_ord, sig1ord[:,0], sig1ord[:,1], color='gray',alpha=0.5,label="95% interval")
plt.legend()
plt.xlabel('$x$', fontsize=16)
plt.ylabel('$y$', fontsize=16, rotation=0)
plt.title("HPD Plot")
plt.show()
###Output
_____no_output_____
###Markdown
Similarly, using the 'posterior_predictive' samples, we can get various percentile values to plot.
###Code
dfp = np.percentile(y_sample_posterior_predictive,[2.5,25,50,70,97.5],axis=0)
dfp = np.squeeze(dfp)
dfp = dfp[:,inds]
ymean = y_sample_posterior_predictive.mean(axis=0)
ymean =ymean[inds]
pal = sns.color_palette('Purples')
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams["axes.linewidth"] = 1.25
plt.rcParams["axes.edgecolor"] = "0.15"
fig = plt.figure(dpi=100)
ax = fig.add_subplot(111)
plt.scatter(x_ord,y,s=10,label='observed')
ax.plot(x_ord,ymean,c=pal[5],label='posterior mean',alpha=0.5)
ax.plot(x_ord,dfp[2,:],alpha=0.75,color=pal[3],label='posterior median')
ax.fill_between(x_ord,dfp[0,:],dfp[4,:],alpha=0.5,color=pal[1],label='CR 95%')
ax.fill_between(x_ord,dfp[1,:],dfp[3,:],alpha=0.4,color=pal[2],label='CR 50%')
ax.legend()
plt.legend(frameon=True)
plt.show()
###Output
_____no_output_____ |
reader/blocksDB_analyze_notebook.ipynb | ###Markdown
analyze `____.db` data
calculates & plots
* blocktime
* TPS (transactions per second), over 1, 3, 5, 10 consecutive blocks
* block size
* gasUsed and gasLimit per second
It needs an `allblocks-....db` database (created by `blocksDB_create.py`) containing all the blocks.
---
Interactive version!
This notebook allows you to interactively manipulate the diagrams, etc. However, the same output diagrams can also be created by calling on the commandline e.g.:
```./blocksDB_diagramming.py temp.db prefix```
for the whole chain, or e.g.
```./blocksDB_diagramming.py temp.db prefix 115 230```
for just those blocks 115-230.
TODO
Now that **all** subroutines are refactored into `blocksDB_diagramming.py`, clean them out of here, and instead import them. See `def load_prepare_plot_save(...)` in `blocksDB_diagramming.py`.
Please cite this as:
> Ethereum benchmarking scripts "chainhammer" and "chainreader"
> by Dr Andreas Krueger, London 2018
> https://github.com/drandreaskrueger/chainhammer
Consider submitting your improvements & usage as a pull request --> [../other-projects.md](../other-projects.md). Thanks.
table of contents TOC
Code
* [dependencies & my own routines](dependencies)
* [simple statistics](stats)
* [generate new columns](columns)
* [4 diagrams in one](code4diagrams)
Results
* [tables of peak TPS rates](tables)
* [whole chain](allblocks)
* [zooming in](zoom)
load dependencies
###Code
#dependencies
import sqlite3; print("sqlite3 version", sqlite3.version)
import pandas; print("pandas version", pandas.__version__)
import numpy; print("numpy version", numpy.__version__)
import matplotlib; print("matplotlib version", matplotlib.__version__)
%matplotlib inline
# https://github.com/matplotlib/matplotlib/issues/5907#issuecomment-179001811
matplotlib.rcParams['agg.path.chunksize'] = 10000
# my own routines are now all in separate .py file:
from blocksDB_diagramming import DB_query, DB_tableSize, maxBlockNumber, check_whether_complete
from blocksDB_diagramming import add_blocktime, add_TPS, add_GUPS, add_GLPS
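# DBFILE appears in the printed output below; NAME_PREFIX is an assumption here -
# a short label that is only used to name the output PNG files further down
DBFILE = "dell-crux-temp.db"
NAME_PREFIX = "dell-crux"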
print ("\nReading blocks table from", DBFILE)
###Output
sqlite3 version 2.6.0
pandas version 0.23.4
numpy version 1.15.2
matplotlib version 3.0.0
Reading blocks table from dell-crux-temp.db
###Markdown
simple statistics
some simple statistics
###Code
# open database connection
conn = sqlite3.connect(DBFILE)
print ("DB table names: ", DB_query("SELECT name FROM sqlite_master WHERE type='table';", conn)[0])
# number of rows?
_=DB_tableSize("blocks", conn)
# what is the first & last block we have?
minblock, maxblock = maxBlockNumber(conn)[0]
blocknumbers = DB_query("SELECT blocknumber FROM blocks ORDER BY blocknumber", conn)
print ("len(blocknumbers)=", len(blocknumbers))
# do we have consecutive blocks, none missing?
check_whether_complete(blocknumbers)
# simple statistics
size_max = DB_query("SELECT MAX(size) FROM blocks", conn); print ("(block)size_max", size_max[0][0])
txcount_max = DB_query("SELECT MAX(txcount) FROM blocks", conn); print ("txcount_max", txcount_max[0][0])
txcount_av = DB_query("SELECT AVG(txcount) FROM blocks", conn); print ("txcount_av", txcount_av[0][0])
txcount_sum = DB_query("SELECT SUM(txcount) FROM blocks", conn); print ("txcount_sum", txcount_sum[0][0])
blocks_nonempty_count = DB_query("SELECT COUNT(blocknumber) FROM blocks WHERE txcount != 0", conn); print ("blocks_nonempty_count", blocks_nonempty_count[0][0])
print ("av tx per nonempty blocks = ", txcount_sum[0][0] / blocks_nonempty_count[0][0] )
###Output
(block)size_max 51549
txcount_max 367
txcount_av 56.5
txcount_sum 20001
blocks_nonempty_count 85
av tx per nonempty blocks = 235.30588235294118
###Markdown
create new columns
read whole table, and create new columns
###Code
# read whole table
# SQL="SELECT * FROM blocks WHERE 48500<blocknumber and blocknumber<49000 ORDER BY blocknumber"
SQL="SELECT * FROM blocks ORDER BY blocknumber"
df = pandas.read_sql(SQL, conn)
conn.close()
###Output
_____no_output_____
###Markdown
`geth` based clients have a nanosecond timestamp
not anymore?
###Code
# transform nanoseconds to seconds
# df["timestamp"]=df["timestamp"]/1000000000
df[0:5]
# blocktime = timestamp[n] - timestamp[n-1]
add_blocktime(df)
#df["TPS_1"]=df['txcount']/df['blocktime']
#df
# transactions per second
# with differently sized (rectangular) windows
add_TPS(df, numBlocks=1)
add_TPS(df, numBlocks=3)
add_TPS(df, numBlocks=5)
add_TPS(df, numBlocks=10)
# gasUsed and gasLimit per second
add_GUPS(df, numBlocks=1)
add_GUPS(df, numBlocks=3)
add_GUPS(df, numBlocks=5)
add_GLPS(df, numBlocks=1)
add_GLPS(df, numBlocks=3)
add_GLPS(df, numBlocks=5)
###Output
_____no_output_____
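###Markdown
The imported helpers hide the details; below is a minimal sketch of what such a windowed TPS column could look like (the real implementation lives in blocksDB_diagramming.add_TPS and may differ):
###Code
def add_TPS_sketch(df, numBlocks=10):
    # transactions and elapsed seconds, summed over a rolling window of blocks
    txs = df['txcount'].rolling(numBlocks).sum()
    secs = df['blocktime'].rolling(numBlocks).sum()
    df['TPS_%dblks_sketch' % numBlocks] = txs / secs
###Output
_____no_output_____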
###Markdown
tables of peak TPS rates
peak TPS rates
###Code
# peak TPS single block
df.sort_values(by=['TPS_1blk'], ascending=False)[0:10]
# peak TPS over ten blocks
df.sort_values(by=['TPS_10blks'], ascending=False)[0:10]
###Output
_____no_output_____
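###Markdown
The same peaks can be summarized in two lines with idxmax:
###Code
for col in ['TPS_1blk', 'TPS_10blks']:
    i = df[col].idxmax()
    print("%s peak: %.1f at block %d" % (col, df[col][i], df['blocknumber'][i]))
###Output
_____no_output_____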
###Markdown
code: 4 diagrams in one
all 4 diagrams in one
TODO - once this routine is also ready, move it out into `blocksDB_diagramming.py`; at the moment this is still in flux.
For the 4 separate graphs see `outdated/blocksDB_analyze.ipynb`.
###Code
def diagrams(df, blockFrom, blockTo, prefix="", gas_logy=True, bt_logy=True):
# https://github.com/matplotlib/matplotlib/issues/5907#issuecomment-179001811
matplotlib.rcParams['agg.path.chunksize'] = 10000
###################################################
# prepare 2x2 subplots
plt = matplotlib.pyplot
fig, axes = plt.subplots(nrows=2, ncols=2,figsize=(15,10))
plt.tight_layout(pad=6.0, w_pad=6.0, h_pad=7.5)
title = prefix + " blocks %d to %d" % (blockFrom, blockTo)
plt.suptitle(title, fontsize=16)
####################################
# TPS
# TPS averages --> legend
cols=['TPS_1blk', 'TPS_3blks', 'TPS_5blks', 'TPS_10blks']
averages=df[cols][blockFrom:blockTo].mean()
legend = [col + " (av %.1f)" % averages[col] for col in cols]
# print (legend)
# TPS diagram
cols = ['blocknumber'] + cols
ax=df[cols][blockFrom:blockTo].plot(x='blocknumber', rot=90, ax=axes[0,0])
ax.set_title("transactions per second")
ax.get_xaxis().get_major_formatter().set_useOffset(False)
ax.legend(legend);
###########################################
# bar charts or line charts
# bar charts are too expensive when too many blocks
numBlocks = blockTo - blockFrom
kind = 'bar' if numBlocks<2000 else 'line'
#############################################
# BT
ax=df[['blocknumber', 'blocktime']][blockFrom:blockTo].plot(x='blocknumber', kind=kind, ax=axes[0,1],
logy=bt_logy)
ax.set_title("blocktime since last block")
ax.locator_params(nbins=1, axis='x') # TODO: matplotlib's ticks - how to autoselect few? Any idea welcome
#############################################
# blocksize
ax=df[['blocknumber', 'size']][blockFrom:blockTo].plot(x='blocknumber', rot=90, kind=kind, ax=axes[1,0])
# ax.get_xaxis().get_major_formatter().set_useOffset(False)
ax.get_yaxis().get_major_formatter().set_scientific(False)
ax.set_title("blocksize in bytes")
ax.locator_params(nbins=1, axis='x') # TODO: matplotlib's ticks - how to autoselect few? Any idea welcome
####################################
# gas
ax=df[['blocknumber', 'GLPS_1blk', 'GUPS_1blk']][blockFrom:blockTo].plot(x='blocknumber',
rot=90, ax=axes[1,1],
logy=gas_logy)
ax.get_xaxis().get_major_formatter().set_useOffset(False)
if not gas_logy:
ax.get_yaxis().get_major_formatter().set_scientific(False)
ax.set_title("gasUsed and gasLimit per second")
##############################################
# save diagram to PNG file
fig.savefig("img/%s_tps-bt-bs-gas_blks%d-%d.png" % (prefix,blockFrom,blockTo))
###Output
_____no_output_____
###Markdown
whole chain
###Code
# the whole range of blocks
diagrams(df, 0, len(blocknumbers)-1, NAME_PREFIX, gas_logy=True, bt_logy=True)
###Output
/home/andreas/Documents/LiClipseWorkspace/drandreaskrueger/chainhammer/py3eth/lib/python3.5/site-packages/matplotlib/axes/_base.py:3364: UserWarning: Attempting to set identical bottom==top results
in singular transformations; automatically expanding.
bottom=1.0, top=1.0
self.set_ylim(upper, lower, auto=None)
###Markdown
zoom in on one experiment
zooming in ...
###Code
# starting only at block 110, because of the waiting time before the experiment starts
diagrams(df, 110,215, NAME_PREFIX, gas_logy=True, bt_logy=False)
###Output
_____no_output_____ |
General/2. Creating Array.ipynb | ###Markdown
Zero-dimensional Arrays
###Code
import numpy as np
x = np.array(42)
print("x: ", x)
print("The type of x: ", type(x))
print("The dimension of x:", np.ndim(x))
x.dtype
###Output
('x: ', array(42))
('The type of x: ', <type 'numpy.ndarray'>)
('The dimension of x:', 0)
###Markdown
One-dimensional Arrays
###Code
F = np.array([1, 1, 2, 3, 5, 8, 13, 21])
V = np.array([3.4, 6.9, 99.8, 12.8])
print("F: ", F)
print("V: ", V)
print("Type of F: ", F.dtype)
print("Type of V: ", V.dtype)
print("Dimension of F: ", np.ndim(F))
print("Dimension of V: ", np.ndim(V))
###Output
('F: ', array([ 1, 1, 2, 3, 5, 8, 13, 21]))
('V: ', array([ 3.4, 6.9, 99.8, 12.8]))
('Type of F: ', dtype('int32'))
('Type of V: ', dtype('float64'))
('Dimension of F: ', 1)
('Dimension of V: ', 1)
###Markdown
Two- and Multidimensional Arrays
###Code
A = np.array([ [3.4, 8.7, 9.9],
[1.1, -7.8, -0.7],
[4.1, 12.3, 4.8]])
print(A)
print(A.ndim)
print(np.ndim(A))
B = np.array([ [[111, 112], [121, 122]],
[[211, 212], [221, 222]],
[[311, 312], [321, 322]] ])
print(B)
print(B.ndim)
print(np.ndim(B))
print(np.shape(B))
###Output
[[[111 112]
[121 122]]
[[211 212]
[221 222]]
[[311 312]
[321 322]]]
3
3
(3, 2, 2)
###Markdown
Shape of an Array
The function "shape" returns the shape of an array. The shape is a tuple of integers.
###Code
x = np.array([ [67, 63, 87],
[77, 69, 59],
[85, 87, 99],
[79, 72, 71],
[63, 89, 93],
[68, 92, 78]])
print(np.shape(x))
###Output
(6, 3)
###Markdown
"shape" can also be used to change the shape of an array.
###Code
x.shape = (3, 6)
print(x)
x.shape = (2, 9)
print(x)
###Output
[[67 63 87 77 69 59 85 87 99]
[79 72 71 63 89 93 68 92 78]]
###Markdown
Indexing and Slicing
###Code
F = np.array([1, 1, 2, 3, 5, 8, 13, 21])
# print the first element of F
print(F[0])
# print the last element of F
print(F[-1])
A = np.array([ [3.4, 8.7, 9.9],
[1.1, -7.8, -0.7],
[4.1, 12.3, 4.8]])
print(A[1][0])
###Output
1.1
###Markdown
Array_Name [start:stop:step]
###Code
S = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(S[2:5])
print(S[:4])
print(S[6:])
print(S[:])
###Output
[2 3 4]
[0 1 2 3]
[6 7 8 9]
[0 1 2 3 4 5 6 7 8 9]
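###Markdown
The step slot also accepts negative values, which walk through the array backwards:
###Code
print(S[::2])    # every second element
print(S[::-1])   # the whole array, reversed
###Output
_____no_output_____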
###Markdown
multidimensional slicing
###Code
A = np.array([
[11, 12, 13, 14, 15],
[21, 22, 23, 24, 25],
[31, 32, 33, 34, 35],
[41, 42, 43, 44, 45],
[51, 52, 53, 54, 55]])
print(A[:3, 2:])
print(A[3:, :])
print(A[:, 4:])
A = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
S = A[2:6]
print(S)
S[0] = 22
S[1] = 23
print(S)
print(A)
###Output
[2 3 4 5]
[22 23 4 5]
[ 0 1 22 23 4 5 6 7 8 9]
###Markdown
Note: Whereas slicings on lists and tuples create new objects, a slicing operation on an array creates a view on the original array. So we get another possibility to access the array, or better: a part of the array. From this it follows that if we modify a view, the original array will be modified as well.
###Code
lst = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
lst2 = lst[2:6]
print(lst2)
lst2[0] = 22
lst2[1] = 23
print(lst)
print(lst2)
###Output
[2, 3, 4, 5]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[22, 23, 4, 5]
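###Markdown
If you need an independent sub-array rather than a view, you can force a copy explicitly:
###Code
A2 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
S2 = A2[2:6].copy()   # a real copy, not a view
S2[0] = 99
print(S2)
print(A2)             # A2 is unchanged
###Output
_____no_output_____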
###Markdown
If you want to check, if two array names share the same memory block, you can use the function np.may_share_memory.
###Code
print(np.may_share_memory(A, S))
A = np.arange(12)
B = A.reshape(3, 4)
print(B)
A[0] = 42
print(A)
np.may_share_memory(A, B)
###Output
_____no_output_____
###Markdown
Creating Arrays with Ones, Zeros and Empty
###Code
import numpy as np
E = np.ones((2,3))
print(E)
print(E.dtype)
F = np.ones((3,4),dtype=int)
print(F)
print(F.dtype)
Z = np.zeros((2,4))
print(Z)
###Output
[[0. 0. 0. 0.]
[0. 0. 0. 0.]]
###Markdown
There is another interesting way to create an array with ones or with zeros, if it has to have the same shape as another existing array 'a'. NumPy supplies for this purpose the methods ones_like(a) and zeros_like(a).
###Code
x = np.array([2,5,18,14,4])
E = np.ones_like(x)
print(E)
Z = np.zeros_like(x)
print(Z)
###Output
[1 1 1 1 1]
[0 0 0 0 0]
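###Markdown
In the same family, full_like creates an array of the same shape, filled with an arbitrary value:
###Code
F = np.full_like(x, 7)
print(F)
###Output
_____no_output_____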
###Markdown
Copying Arrays
numpy.copy()
###Code
import numpy as np
x = np.array([[42,22,12],[44,53,66]], order='A')
y = x.copy()
x[0,0] = 1001
print(x)
print(y)
###Output
[[1001 22 12]
[ 44 53 66]]
[[42 22 12]
[44 53 66]]
###Markdown
Parameters of numpy.copy:
- obj: array_like input data.
- order: one of {'C', 'F', 'A', 'K'}; controls the memory layout of the copy. 'C' means C-order, 'F' means Fortran-order, 'A' means 'F' if the object 'obj' is Fortran contiguous, 'C' otherwise. 'K' means match the layout of 'obj' as closely as possible.
###Code
print(x.flags['C_CONTIGUOUS'])
print(y.flags['C_CONTIGUOUS'])
###Output
True
True
###Markdown
Identity Array
In linear algebra, the identity matrix, or unit matrix, of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere.
There are two ways in NumPy to create identity arrays:
1) identity
2) eye
The identity Function
###Code
# We can create identity arrays with the function identity:
#     identity(n, dtype=None)
# Parameters:
#     n     - an integer defining the number of rows and columns of the output, i.e. 'n' x 'n'
#     dtype - an optional argument defining the data-type of the output; the default is 'float'
# The output of identity is an 'n' x 'n' array with its main diagonal set to one,
# and all other elements set to 0.
import numpy as np
np.identity(4, dtype=int)
###Output
_____no_output_____
###Code
# The eye Function
# Another way to create identity arrays is provided by the function eye,
# which also creates diagonal arrays consisting solely of ones.
# It returns a 2-D array with ones on the diagonal and zeros elsewhere:
#     eye(N, M=None, k=0, dtype=float)
###Output
_____no_output_____
###Markdown
Parameters:
- N: an integer defining the number of rows of the output array.
- M: an optional integer setting the number of columns in the output; if it is None, it defaults to 'N'.
- k: position of the diagonal; the default 0 refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal.
- dtype: optional data-type of the returned array.
eye returns an ndarray of shape (N, M): all elements of this array are equal to zero, except for the 'k'-th diagonal, whose values are equal to one.
###Code
import numpy as np
np.eye(5, 8, k=1, dtype=int)
###Output
_____no_output_____
###Markdown
Assignment
1) Create an arbitrary one-dimensional array called "v".
2) Create a new array which consists of the odd indices of the previously created array "v".
3) Create a new array in backwards ordering from v.
4) What will be the output of the following code:
   a = np.array([1, 2, 3, 4, 5])
   b = a[1:4]
   b[0] = 200
   print(a[1])
5) Create a two-dimensional array called "m".
6) Create a new array from m, in which the elements of each row are in reverse order.
7) Another one, where the rows are in reverse order.
8) Create an array from m, where columns and rows are in reverse order.
9) Cut off the first and last row and the first and last column.
###Code
v = np.array([2, 3, 4, 5, 56, 7])
print("Type of v: ", v.dtype)
print("Dimension of v: ", np.ndim(v))
B = v[1::2]
B
c = v[::-1]
c
a = np.array([1, 2, 3, 4, 5])
b = a[1:4]
b[0] = 200
print(a[1])
m = np.array([ [3.4, 8.7, 9.9],
[1.1, -7.8, -0.7],
[4.1, 12.3, 4.8]])
print(m.ndim)
print(m[:,::-1])
import numpy as np;
np.flip(m, 0)
m[::-1, ::-1]
print(m)
print(m[1:2,1:2])
###Output
[[ 3.4 8.7 9.9]
[ 1.1 -7.8 -0.7]
[ 4.1 12.3 4.8]]
[[-7.8]]
###Markdown
dtype
###Code
dt = np.dtype([('country', 'S20'), ('density', 'i4'), ('area', 'i4'), ('population', 'i4')])
population_table = np.array([
('Netherlands', 393, 41526, 16928800),
('Belgium', 337, 30510, 11007020),
('United Kingdom', 256, 243610, 62262000),
('Germany', 233, 357021, 81799600),
('Liechtenstein', 205, 160, 32842),
('Italy', 192, 301230, 59715625),
('Switzerland', 177, 41290, 7301994),
('Luxembourg', 173, 2586, 512000),
('France', 111, 547030, 63601002),
('Austria', 97, 83858, 8169929),
('Greece', 81, 131940, 11606813),
('Ireland', 65, 70280, 4581269),
('Sweden', 20, 449964, 9515744),
('Finland', 16, 338424, 5410233),
('Norway', 13, 385252, 5033675)],
dtype=dt)
print(population_table[:4])
print(population_table['density'])
print(population_table['country'])
print(population_table['area'][2:5])
###Output
[393 337 256 233 205 192 177 173 111 97 81 65 20 16 13]
['Netherlands' 'Belgium' 'United Kingdom' 'Germany' 'Liechtenstein'
'Italy' 'Switzerland' 'Luxembourg' 'France' 'Austria' 'Greece' 'Ireland'
'Sweden' 'Finland' 'Norway']
[243610 357021 160]
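###Markdown
Structured arrays can also be sorted by a field name, for example by population (using the array from above):
###Code
by_population = np.sort(population_table, order='population')
print(by_population['country'][:3])   # the three least populous countries
###Output
_____no_output_____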
###Markdown
Writing data into file using NumPy
###Code
np.savetxt("population_table.csv",
population_table,
fmt="%s;%d;%d;%d",
delimiter=",")
ls
cat population_table.csv
###Output
_____no_output_____
###Markdown
Reading data from file using numpy
###Code
dt = np.dtype([('country', np.unicode, 20), ('density', 'i4'), ('area', 'i4'), ('population', 'i4')])
x = np.genfromtxt("population_table.csv",
dtype=dt,
delimiter=";")
print(x)
dt = np.dtype([('country', np.unicode, 20), ('density', 'i4'), ('area', 'i4'), ('population', 'i4')])
x = np.loadtxt("population_table.csv",
dtype=dt,
converters={0: lambda x: x.decode('utf-8')},
delimiter=";")
print(x)
###Output
_____no_output_____
###Markdown
Assignment
###Code
# Question 1:
# Define a structured array with two columns. The first column contains the product ID,
# which can be defined as an int32. The second column shall contain the price for the
# product. How can you print out the column with the product IDs, the first row,
# and the price for the third article of this structured array?
# Question 2:
# Figure out a data type definition for time records with entries for hours, minutes and seconds.
###Output
_____no_output_____ |
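###Markdown
A possible solution sketch for Question 1 (the field names and sample values are invented for illustration):
###Code
import numpy as np
# two-column structured dtype: int32 product ID and a float price
products = np.dtype([('product_id', 'i4'), ('price', 'f4')])
stock = np.array([(1001, 9.99), (1002, 4.50), (1003, 19.95)], dtype=products)
print(stock['product_id'])   # the column with the product IDs
print(stock[0])              # the first row
print(stock['price'][2])     # the price for the third article
###Output
_____no_output_____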
MLP_Auto_Experiment.ipynb | ###Markdown
###Code
x_train10[0].shape
from keras.engine import input_layer
from keras.layers import Dropout
def trainer(X=x_train10, y=y_train10, x_val=x_train10_test, y_val=y_train10_test, list_layers=[256,128,64,10], funcs=['relu','relu','relu','softmax'], drops=[0,0,0,0], epochs=40, batch_size=64, optim='Adam', loss='categorical_crossentropy'):
    # build a simple MLP: the first Dense layer carries the input shape,
    # and each layer can optionally be followed by Dropout as configured in `drops`
    model = keras.Sequential()
    model.add(layers.Dense(list_layers[0], activation=funcs[0], input_shape=X[0].shape))
    if drops[0]:
        model.add(Dropout(drops[0]))
    for layer, func, drop in zip(list_layers[1:], funcs[1:], drops[1:]):
        if drop:
            model.add(Dropout(drop))
        model.add(layers.Dense(layer, activation=func))
    model.compile(
        optimizer=optim,
        loss=loss,
        metrics=['accuracy'])
    history = model.fit(X, y, epochs=epochs, batch_size=batch_size, validation_data=(x_val, y_val))
    return model, history
model, hist = trainer(x_train10, y_train10, x_train10_test, y_train10_test, [128,64,64,10], ['relu', 'relu', 'relu', 'softmax'],drops=[0,0,0,0], epochs=40, batch_size=64, optim='Adam', loss='categorical_crossentropy')
model.summary()
layer_dict = dict([(layer.name, layer) for layer in model.layers])
layer_dict
filters, biases = layer_dict['dense'].get_weights()
f_min, f_max = np.amin(filters), np.amax(filters)
filters = (filters - f_min) / (f_max - f_min)
filters.shape
show_filter = filters[:,0].reshape(32,32,3)
import matplotlib.pyplot as plt
plt.imshow(show_filter, cmap='viridis')
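# Optional sketch: show the first ten input-weight "filters" side by side
# (assumes `filters` was min-max normalized above; each column reshapes to a 32x32x3 image)
fig, axs = plt.subplots(1, 10, figsize=(20, 2))
for idx, ax in enumerate(axs):
    ax.imshow(filters[:, idx].reshape(32, 32, 3))
    ax.axis('off')
plt.show()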
model, hist = trainer(x_train10, y_train10, x_train10_test, y_train10_test, [10], ['softmax'],drops=[0,0,0,0], epochs=40, batch_size=64, optim='adam', loss='categorical_crossentropy')
layer_dict = dict([(layer.name, layer) for layer in model.layers])
filters, biases = layer_dict['dense_7'].get_weights()
f_min, f_max = np.amin(filters), np.amax(filters)
filters = (filters - f_min) / (f_max - f_min)
show_filter = filters[:,0].reshape(32,32,3)
import matplotlib.pyplot as plt
plt.imshow(show_filter)
layer_dict
y_train10[0]
plt.imshow(cif10[0][0][0].reshape(32,32,3) )
#X, y, x_val, y_val, list_layers, funcs, drops, epochs, batch_size, optim, loss can all be changed for experiments
EXP={'layer_exp':{'layer':([10],[64,10], [128,64,10], [256,128,64,10],[512,256,128,64,10], [1024,512,256,128,10]),
'tf_funcs':(['softmax'], ['relu','softmax'], ['relu','relu','softmax'], ['relu','relu','relu','softmax'], ['relu','relu','relu','relu','softmax'],['relu','relu','relu','relu','softmax']),
'history':[]},
'drop_exp':{'drops': ([0,0,0,0],[0,0.2,0,0.2], [0,0.3,0,0.3], [0,0.5,0,0.5]),
'history': []},
'optim_exp':{'optims':('SGD', 'RMSprop', 'Adam'),
'history':[]}}
for l, f in zip(EXP['layer_exp']['layer'], EXP['layer_exp']['tf_funcs']):
_, hist = trainer(list_layers=l, funcs=f)
EXP['layer_exp']['history'].append(hist)
for o in EXP['optim_exp']['optims']:
_, hist = trainer(optim=o)
EXP['optim_exp']['history'].append(hist)
for d in EXP['drop_exp']['drops']:
_, hist = trainer(drops=d)
EXP['drop_exp']['history'].append(hist)
EXP
import matplotlib.pyplot as plt
for i in range(len(EXP['layer_exp']['history'])):
history = EXP['layer_exp']['history'][i]
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
t = 'model accuracy for: ' + str(EXP['layer_exp']['layer'][i])
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
k = [5,6,7]
str(k)
import matplotlib.pyplot as plt
for i in range(len(EXP['optim_exp']['history'])):
history = EXP['optim_exp']['history'][i]
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
t = 'model accuracy for: ' + str(EXP['optim_exp']['optims'][i])
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
import matplotlib.pyplot as plt
for i in range(len(EXP['drop_exp']['drops'])):
history = EXP['drop_exp']['history'][i]
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
t = 'model accuracy for dropout configuration: ' + str(EXP['drop_exp']['drops'][i])
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
import matplotlib.pyplot as plt
for i in range(len(EXP['drop_exp']['drops'])):
    history = EXP['drop_exp']['history'][i]  # one recorded history per dropout configuration
    print(history.history.keys())
    # summarize history for accuracy
    plt.plot(history.history['accuracy'])
    #plt.plot(history.history['val_accuracy'])
t = 'model training accuracy for dropout configurations:'
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(EXP['drop_exp']['drops'], loc='upper left')
plt.show()
import matplotlib.pyplot as plt
for i in range(len(EXP['drop_exp']['drops'])):
    history = EXP['drop_exp']['history'][i]  # one recorded history per dropout configuration
    print(history.history.keys())
    # summarize history for accuracy
    #plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
t = 'model testing accuracy for dropout configurations:'
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(EXP['drop_exp']['drops'], loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
import matplotlib.pyplot as plt
for i in range(len(EXP['layer_exp']['layer'])):
history = EXP['layer_exp']['history'][i]
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
#plt.plot(history.history['val_accuracy'])
t = 'model training accuracy for layer configurations'
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(EXP['layer_exp']['layer'], loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
import matplotlib.pyplot as plt
for i in range(len(EXP['layer_exp']['layer'])):
history = EXP['layer_exp']['history'][i]
print(history.history.keys())
# summarize history for accuracy
#plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
t = 'model test accuracy for layer configurations'
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(EXP['layer_exp']['layer'], loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
import matplotlib.pyplot as plt
for i in range(len(EXP['optim_exp']['optims'])):
history = EXP['optim_exp']['history'][i]
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
#plt.plot(history.history['val_accuracy'])
t = 'model training accuracy for optimizer configurations:'
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(EXP['optim_exp']['optims'], loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
import matplotlib.pyplot as plt
for i in range(len(EXP['optim_exp']['optims'])):
history = EXP['optim_exp']['history'][i]
print(history.history.keys())
# summarize history for accuracy
#plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
t = 'model testing accuracy for optimizer configurations:'
plt.title(t)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(EXP['optim_exp']['optims'], loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
EXP
model100, hist100 = trainer(x_train100, y_train100, x_train100_test, y_train100_test, [2056, 1024, 512, 256, 128, 128, 100], ['relu','relu','relu','relu','relu','relu','softmax'],drops=[0,0.2,0,0.2, 0, 0.2,0], epochs=60, batch_size=64, optim='adam', loss='categorical_crossentropy')
model100.summary()
###Output
_____no_output_____ |
testnotebooks/demo-material.ipynb | ###Markdown
TODO: Take some material from here and add to demo
###Code
import time
import numpy as np
from unray import *
from ipydatawidgets import NDArrayWidget
import pythreejs as three
import ipywidgets
from ipywidgets import jslink
filename = "../data/heart.npz"
#filename = "../data/brain.npz"
#filename = "../data/aneurysm.npz"
mesh_data = np.load(filename)
cells_array = mesh_data["cells"].astype(np.int32)
points_array = mesh_data["points"].astype(np.float32)
# Coordinates of all vertices in mesh
x = list(points_array.T) # x[2] = z coordinate array for all vertices
# Model center 3d vector
center = list(map(lambda x: x.mean(), x))
# Model min/max coordinates
bbox = list(map(lambda x: (x.min(), x.max()), x))
# Coordinates with origo shifted to center of model
xm = list(map(lambda x, mp: x - mp, x, center))
# Distance from model center
xd = np.sqrt(sum(map(lambda x: x**2, xm)))
radius = xd.max()
# Distance from center, normalized to max 1.0
func_dist = xd / radius
# A constant for all vertices
func_const = np.ones(x[0].shape)
# x coordinate
func_x = x[0]
# A wave pattern from the center of the model
freq = 4
func_wave = 2.0 + np.sin((freq * 2 * np.pi / radius) * xd)
# Data widgets
cells = NDArrayWidget(cells_array)
points = NDArrayWidget(points_array)
mesh = Mesh(cells=cells, points=points)
field_values = NDArrayWidget((func_x - func_x.min()) / (func_x.max() - func_x.min()))
field = Field(mesh=mesh, values=field_values)
color_lut = ArrayColorMap(values=[[0,0,0], [1,1,1]])
scalar_lut = ArrayScalarMap(values=[0.2, 0.8])
color_field = ColorField(field=field, lut=color_lut)
scalar_field = ScalarField(field=field, lut=scalar_lut)
# pythreejs setup
width = 960
height = 480
camera = three.PerspectiveCamera(position=[10, 10, 10], aspect=width/height)
key_light = three.DirectionalLight(position=[0, 10, 10])
ambient = three.AmbientLight(intensity=0.5)
scene = three.Scene(children=[key_light, ambient, camera], background='#dddddd')
controls = three.OrbitControls(camera)
renderer = three.Renderer(scene, camera, [controls],
width=width, height=height)
display(renderer)
cell_midpoints = sum(points_array[cells_array[:,i],:] for i in range(3))
cell_midpoints *= (1.0/3.0)
left_half, = np.where(cell_midpoints[:,0] < 0.5 * (cell_midpoints[:,0].max() - cell_midpoints[:,0].min()))
cell_indicators_array1 = np.zeros(cells_array.shape[0], dtype="int32")
cell_indicators_array1[left_half] = 1
cell_indicators_array2 = 1 - cell_indicators_array1
assert 0 == cells_array.shape[0] - (sum(cell_indicators_array1)+sum(cell_indicators_array2))
print(sum(cell_indicators_array1), sum(cell_indicators_array2))
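# Hedged sanity check: the two indicator fields partition all cells,
# so their fractions must sum to exactly 1.0
frac1 = cell_indicators_array1.mean()
frac2 = cell_indicators_array2.mean()
print("left/right cell fractions: %.3f + %.3f = %.3f" % (frac1, frac2, frac1 + frac2))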
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
cell_indicators1 = IndicatorField(mesh=mesh, values=cell_indicators_array1, space="I3")
cell_indicators2 = IndicatorField(mesh=mesh, values=cell_indicators_array2, space="I3")
restrict1 = ScalarIndicators(field=cell_indicators1)
restrict2 = ScalarIndicators(field=cell_indicators2)
plots = [
SurfacePlot(mesh=mesh, restrict=restrict1),
XrayPlot(mesh=mesh, restrict=restrict2),
]
scene.add(plots[0])
scene.add(plots[1])
ip = IsovalueParams()
ip.dashboard??
plot = IsosurfacePlot(mesh=mesh, color=color_field, values=ip)
scene.add(plot)
scene.remove(plot)
camera.type
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
# This doesn't work after construction of a plot because the plot doesn't listen to the color_constant changes
color_constant = ColorConstant(color="#ff0000")
picker = color_constant.dashboard()
picker
# This works!
plot1 = SumPlot(mesh=mesh, color=color_field)
plot2 = SumPlot(mesh=mesh, color=color_constant)
plot1.position = [-6, 0, 0]
plot2.position = [+6, 0, 0]
scene.add(plot1)
scene.add(plot2)
slider = ipywidgets.FloatSlider(value=0.0, min=-10.0, max=+10.0, description="Exposure")
jslink((slider, "value"), (plot1, "exposure"))
jslink((slider, "value"), (plot2, "exposure"))
slider
wp = WireframeParams()
wp.dashboard()
plot = SurfacePlot(mesh=mesh, wireframe=wp, color=color_constant)
scene.add(plot)
scene.remove(plot)
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
# Testing default parameters with minimal arguments for all plots
if 1:
plots = [
SurfacePlot(mesh=mesh),
MinPlot(mesh=mesh, color=color_field),
MaxPlot(mesh=mesh, color=color_field),
XrayPlot(mesh=mesh, density=ScalarConstant(value=0.1)),
XrayPlot(mesh=mesh, density=scalar_field),
SumPlot(mesh=mesh, color=color_field),
VolumePlot(mesh=mesh, color=color_field, density=scalar_field),
]
if 0:
plots = [
SurfacePlot(mesh=mesh),
]
if 0:
plots = [
SurfacePlot(mesh=mesh),
SurfacePlot(mesh=mesh, color=ColorConstant(color="hsl(45,30%,70%)")),
SurfacePlot(mesh=mesh, color=color_field),
#MinPlot(mesh=mesh, color=color_field),
#MaxPlot(mesh=mesh, color=color_field),
XrayPlot(mesh=mesh, density=ScalarConstant(value=0.1)),
SumPlot(mesh=mesh, color=color_field),
SumPlot(mesh=mesh, color=color_field, exposure=-2.3),
SumPlot(mesh=mesh, color=ColorConstant(color="hsl(70,30%,70%)")),
#VolumePlot(mesh=mesh, color=color_field, density=scalar_field), # Doesn't work
]
if 0:
plots = [
XrayPlot(mesh=mesh, density=ScalarConstant(value=0.0), extinction=1.0),
XrayPlot(mesh=mesh, density=ScalarConstant(value=0.05), extinction=1.0),
XrayPlot(mesh=mesh, density=ScalarConstant(value=0.1), extinction=1.0),
XrayPlot(mesh=mesh, density=ScalarConstant(value=1.0), extinction=1.0),
]
sliders = [ipywidgets.FloatSlider(value=p.density.value, min=0.0, max=2.0, step=0.01, description="extinction") for p in plots]
links = [jslink((sl, "value"), (pl, "extinction")) for (sl, pl) in zip(sliders, plots)]
box = ipywidgets.VBox(sliders)
display(box)
for n, p in enumerate(plots):
i = n % 2
j = (n // 2) % 2
k = n // 4
p.position = [i*12, j*11, k*11]
scene.add(p)
n = 30
for i in range(n):
for j, p in enumerate(plots):
p.rotateX(j*2*3.14159/n)
time.sleep(1.0/n)
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
for i in range(2):
for j in range(2):
for k in range(2):
I = ((i+1) * (j+1) * (k+1)) / 8.0
p = SurfacePlot(mesh=mesh, color=ColorConstant(intensity=I))
p.position = [i*12, j*11, k*11]
scene.add(p)
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
for i in range(2):
for j in range(2):
for k in range(2):
I = 0.5 + k*0.5
C = "rgb(%d, %d, %d)" % (i*255, j*255, k*255)
print(C)
p = SurfacePlot(mesh=mesh, color=ColorConstant(color=C, intensity=I))
p.position = [i*12, j*11, k*11]
scene.add(p)
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
for i in range(2):
for j in range(2):
for k in range(2):
C = "hsl(%d, %d%%, %d%%)" % (i*180 + j*90 + k*45, 30 + i*30 + j*30, 50)
print(C)
p = SurfacePlot(mesh=mesh, color=ColorConstant(color=C))
p.position = [i*12, j*11, k*11]
scene.add(p)
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
for i in range(2):
for j in range(2):
for k in range(2):
hsl = (i*180 + j*90 + k*45, 30 + i*30 + j*30, 50)
hslw = ((hsl[0] + 180) % 360, 100, 50)
C = "hsl(%d, %d%%, %d%%)" % hsl
Cw = "hsl(%d, %d%%, %d%%)" % hslw
print(C, Cw)
wp = WireframeParams(color=Cw, opacity=0.3*i + 0.3*j + 0.3*k)
p = SurfacePlot(mesh=mesh, color=ColorConstant(color=C), wireframe=wp)
p.position = [i*12, j*11, k*11]
scene.add(p)
scene.children = tuple(filter(lambda c: c._model_module != "unray", scene.children))
xplot = XrayPlot(mesh=mesh)
scene.add(xplot)
plot = SurfacePlot(mesh=mesh)
plot.position = [0, 9, 0]
scene.add(plot)
splot = SurfacePlot(
mesh=mesh,
color=ColorField(field=field, lut=color_lut),  # color_lut is defined further up
wireframe=False,
)
splot.position = [0, 9, 0]
scene.add(splot)
xplot = XrayPlot(
mesh=mesh,              # mesh is passed in every other plot constructor above
density=scalar_field,
)
xplot.position = [0, 9, 0]
scene.add(xplot)
box = three.Mesh(three.BoxGeometry(1, 1, 1), three.MeshLambertMaterial(), position=[5, 0, 0])
scene.add(box)
plot0 = scene.children[3]
plot1 = scene.children[4]
plot2 = scene.children[6]
plot0.visible = False
plot1.visible = False
[(i, type(c)) for i, c in enumerate(scene.children)]
###Output
_____no_output_____ |
cloud/notebooks/rest_api/curl/experiments/deep_learning/Use Keras to recognize hand-written digits.ipynb | ###Markdown
Use Keras to recognize hand-written digits with Watson Machine Learning REST API
This notebook contains steps and code to demonstrate support of Keras Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring.
Some familiarity with cURL is helpful. This notebook uses cURL examples.
Learning goals
The learning goals of this notebook are:
- Working with Watson Machine Learning experiments to train Deep Learning models.
- Downloading computed models to local storage.
- Online deployment and scoring of trained models.
Contents
This notebook contains the following parts:
1. [Setup](setup)
2. [Experiment definition](experiment_definition)
3. [Model definition](model_definition)
4. [Experiment Run](run)
5. [Historical runs](runs)
6. [Deploy and Score](deploy_and_score)
7. [Cleaning](cleaning)
8. [Summary and next steps](summary)
1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance can be found here).
- Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage can be found here). **Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.**
You can find your COS credentials in the COS instance dashboard under the **Service credentials** tab.
Go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information.
Authenticate the Watson Machine Learning service on IBM Cloud.
Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below.
**NOTE:** You can also get a service-specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
###Code
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
###Output
_____no_output_____
###Markdown
Getting WML authorization token for further cURL calls
Example of cURL call to get WML token
###Code
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
###Output
env: TOKEN=eyJraWQiOiIyMDIwMDgyMzE4MzIiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC01NTAwMDA5MVZDIiwiaWQiOiJJQk1pZC01NTAwMDA5MVZDIiwicmVhbG1pZCI6IklCTWlkIiwiaWRlbnRpZmllciI6IjU1MDAwMDkxVkMiLCJnaXZlbl9uYW1lIjoiV01MIiwiZmFtaWx5X25hbWUiOiJXTUwtQmV0YSIsIm5hbWUiOiJXTUwgV01MLUJldGEiLCJlbWFpbCI6IldNTC1CZXRhQHBsLmlibS5jb20iLCJzdWIiOiJXTUwtQmV0YUBwbC5pYm0uY29tIiwiYWNjb3VudCI6eyJ2YWxpZCI6dHJ1ZSwiYnNzIjoiZTBmN2VjM2FjMWIyNGVjOWFlNzcxZWZkNzcyNTM4YTIiLCJpbXNfdXNlcl9pZCI6IjcxNzY5NDMiLCJmcm96ZW4iOnRydWUsImltcyI6IjE2ODQ3ODMifSwiaWF0IjoxNjAwMDkxNTA0LCJleHAiOjE2MDAwOTUxMDQsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.tuOQzIM3IDTItzspumnC4PR73tlTZ-uz-mP1RoD1y-dXJojGr_rnyxhYXnWsTeD7-OmHOg1XePBd0fnjTZ3-fp7nMMMAeAGxW_GUOFfgrfRzvcugtOHsIUqr6zdvZl6KyLL0t3d9gnvE7mWzsqsAFOQ8O5885oI-jVok3c5exuptC5BeTleRVOdnbY1jq5U52l0g9Loej3sR9BG909fx4nOusBcSXPWKyQnIn0XOFvtbT-3R53OROTAwDnr7PPsCGBg-CS69r2ZD3YEDC1tdn9l0_BcXenHsOQpgxMaH24aFuAKuCWDWPUsyvbFPchw_OMdvq0j_2zRB0l5JEa1qUQ
###Markdown
Space creation
**Tip:** If you do not have a `space` already created, please convert the three cells below to `code` and run them.
First of all, you need to create a `space` that will be used in all of your further cURL calls. If you do not have a `space` already created, below is the cURL call to create one.
<a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_create" target="_blank" rel="noopener no referrer">Space creation</a>
###Code
%%bash --out space_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"name": "curl_DL", "storage": {"type": "bmcos_object_storage", "resource_crn": "'"$COS_CRN"'"}, "compute": [{"name": "'"$WML_INSTANCE_NAME"'", "crn": "'"$WML_INSTANCE_CRN"'", "type": "machine_learning"}]}' \
"$DATAPLATFORM_URL/v2/spaces" \
| grep '"id": ' | awk -F '"' '{ print $4 }'
space_id = space_id.split('\n')[1]
%env SPACE_ID=$space_id
###Output
_____no_output_____
###Markdown
Space creation is asynchronous. This means that you need to check the space creation status after the creation call.
Make sure that your newly created space is `active`.
<a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_get" target="_blank" rel="noopener no referrer">Get space information</a>
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID"
###Output
_____no_output_____
###Markdown
2. Experiment / optimizer configuration
Training data connection
Define the connection information to the COS bucket and the training data npz file. This example uses the MNIST dataset. The dataset can be downloaded from [here](https://s3.amazonaws.com/img-datasets/mnist.npz). You can also download it to the local filesystem by running the cell below.
**Action**: Upload the training data to the COS bucket and enter the location information in the next cURL examples.
###Code
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
###Output
_____no_output_____
###Markdown
Get COS token
Retrieve COS token for further authentication calls.
<a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-token" target="_blank" rel="noopener no referrer">Retrieve COS authentication token</a>
###Code
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
###Output
env: COS_TOKEN=eyJraWQiOiIyMDIwMDgyMzE4MzIiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJpYW0tU2VydmljZUlkLTU3MzMwOTM3LWJkNTQtNDYyYi04ODUxLWY3OTFhZDYyYzY1OSIsImlkIjoiaWFtLVNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJyZWFsbWlkIjoiaWFtIiwiaWRlbnRpZmllciI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJuYW1lIjoiY3JlZGVudGlhbHNfMSIsInN1YiI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJzdWJfdHlwZSI6IlNlcnZpY2VJZCIsImFjY291bnQiOnsidmFsaWQiOnRydWUsImJzcyI6ImUwZjdlYzNhYzFiMjRlYzlhZTc3MWVmZDc3MjUzOGEyIiwiZnJvemVuIjp0cnVlfSwiaWF0IjoxNjAwMDkxNTEyLCJleHAiOjE2MDAwOTUxMTIsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.wdTfdoe9lqN_8-Y1-N_AcX6qopqnfVwLVFg5e-7kIvWIhEO_LvtfeMS8XuNEp8NXR6b4xWNl7aWbLiYMm5M7rIlhsPoij7OoryXv09Q1UTl8mwJ7i0PpnbUVK7Qt0PNuNmIj9BW-0ONYZsHB9SWC1ZnodD9k1SzUqe76RvOkiTU2cM4MQqghZp25RCLlGsBg4nCmD2_5wi72_acTW2z-8qz7ZaLXk8bYBCLfkLw-vK83Nwd9nUpRsre50rbmCS6KVatOjPxGtevEJp3AznrlO-9PDNkXtPyIvGhy72k2S2gYCGcV7AAEw96cUp6DH8VqOy0oWJwQXcmeqrE3Z8xEdw
###Markdown
Upload file to COS
Upload your local dataset into your COS bucket.
<a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-put-object" target="_blank" rel="noopener no referrer">Upload file to COS</a>
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____
###Markdown
There should be an empty response when the upload finished successfully.
3. Model definition
This section provides samples of how to store a model definition via cURL calls.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_create" target="_blank" rel="noopener no referrer">Store a model definition for Deep Learning experiment</a>
###Code
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.7"]}, "command": "python3 mnist_mlp.py"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
###Output
env: MODEL_DEFINITION_ID=7883edea-3751-47bd-9c47-f8fae0b837da
###Markdown
Model preparation
Download the files with the Keras code. You can either download them via the link below or run the cell below the link.
<a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip" target="_blank" rel="noopener no referrer">Download MNIST.zip</a>
###Code
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
###Output
--2020-09-14 15:52:14-- https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving github.com... 140.82.121.4
Connecting to github.com|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip [following]
--2020-09-14 15:52:15-- https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving raw.githubusercontent.com... 151.101.112.133
Connecting to raw.githubusercontent.com|151.101.112.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3836 (3.7K) [application/zip]
Saving to: 'MNIST.zip'
0K ... 100% 7.83M=0s
2020-09-14 15:52:15 (7.83 MB/s) - 'MNIST.zip' saved [3836/3836]
###Markdown
**Tip**: Convert the cell below to code and run it to see the model definition's code.
###Code
!unzip -oqd . MNIST.zip && cat mnist_mlp.py
###Output
_____no_output_____
###Markdown
Upload model for the model definition <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_upload_model" target="_blank" rel="noopener no referrer">Upload model for the model definition
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
###Output
{
"attachment_id": "189ef773-bc16-4a35-bca3-569020476c49",
"content_format": "native",
"persisted": true
}
###Markdown
4. Experiment run
This section provides samples of how to trigger a Deep Learning experiment via cURL calls.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_create" target="_blank" rel="noopener no referrer">Schedule a training job for Deep Learning experiment</a>
###Code
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "s3", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "type": "s3"}, "tags": [{"value": "tags_mnist", "description": "dome MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.1-py3.7"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
###Output
env: TRAINING_ID=7709e2d9-1d6b-4185-91d0-c926cb4b8185
###Markdown
Get training details
Training is an asynchronous endpoint. If you want to monitor the training status and details, you need to use a GET method and specify which training you want to monitor by its training ID.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_get" target="_blank" rel="noopener no referrer">Get information about training job</a>
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Get training status
###Code
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
###Output
completed
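###Markdown
If you prefer to poll from Python instead of re-running the bash cell above, here is a minimal sketch (the entity.status.state field layout is an assumption based on the raw JSON shown earlier):
###Code
import os, time, requests

url = "{}/ml/v4/trainings/{}".format(os.environ["WML_ENDPOINT_URL"], os.environ["TRAINING_ID"])
headers = {"Authorization": "Bearer " + os.environ["TOKEN"]}
params = {"space_id": os.environ["SPACE_ID"], "version": "2020-08-01"}
while True:
    # field layout assumed: {"entity": {"status": {"state": ...}}}
    state = requests.get(url, headers=headers, params=params, verify=False).json()["entity"]["status"]["state"]
    print(state)
    if state in ("completed", "failed", "canceled"):
        break
    time.sleep(30)
###Output
_____no_output_____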
###Markdown
Please make sure that training is completed before you go to the next sections.
Monitor the `state` of your training by running the above cell a couple of times.
Get selected model
Get the Keras saved model location in COS from the Deep Learning training job.
###Code
%%bash --out model_name
# keep the JSON response in DETAILS (not in PATH, which would clobber the shell's search path)
DETAILS=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
# cut out the value that follows 'logs":"' - the directory name of the saved model
DETAILS=${DETAILS#*logs\":\"}
MODEL_NAME=${DETAILS%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
###Output
env: MODEL_NAME=training-RuDTVvdMR
###Markdown
Use Keras to recognize hand-written digits with Watson Machine Learning REST API This notebook contains steps and code to demonstrate support of Keras Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring.Some familiarity with cURL is helpful. This notebook uses cURL examples. Learning goalsThe learning goals of this notebook are:- Working with Watson Machine Learning experiments to train Deep Learning models.- Downloading computed models to local storage.- Online deployment and scoring of trained model. ContentsThis notebook contains the following parts:1. [Setup](setup) 2. [Experiment definition](experiment_definition) 3. [Model definition](model_definition) 4. [Experiment Run](run) 5. [Historical runs](runs) 6. [Deploy and Score](deploy_and_score) 7. [Cleaning](cleaning) 8. [Summary and next steps](summary) 1. Set up the environmentBefore you use the sample code in this notebook, you must perform the following setup tasks:- Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance can be found here).- Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage can be found here). **Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.** You can find your COS credentials in COS instance dashboard under the **Service credentials** tab.Go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information.Authenticate the Watson Machine Learning service on IBM Cloud.Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. **NOTE:** You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
###Code
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
###Output
_____no_output_____
###Markdown
Getting WML authorization token for further cURL calls Example of cURL call to get WML token
###Code
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
###Output
env: TOKEN=eyJraWQiOiIyMDIwMDgyMzE4MzIiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC01NTAwMDA5MVZDIiwiaWQiOiJJQk1pZC01NTAwMDA5MVZDIiwicmVhbG1pZCI6IklCTWlkIiwiaWRlbnRpZmllciI6IjU1MDAwMDkxVkMiLCJnaXZlbl9uYW1lIjoiV01MIiwiZmFtaWx5X25hbWUiOiJXTUwtQmV0YSIsIm5hbWUiOiJXTUwgV01MLUJldGEiLCJlbWFpbCI6IldNTC1CZXRhQHBsLmlibS5jb20iLCJzdWIiOiJXTUwtQmV0YUBwbC5pYm0uY29tIiwiYWNjb3VudCI6eyJ2YWxpZCI6dHJ1ZSwiYnNzIjoiZTBmN2VjM2FjMWIyNGVjOWFlNzcxZWZkNzcyNTM4YTIiLCJpbXNfdXNlcl9pZCI6IjcxNzY5NDMiLCJmcm96ZW4iOnRydWUsImltcyI6IjE2ODQ3ODMifSwiaWF0IjoxNjAwMDkxNTA0LCJleHAiOjE2MDAwOTUxMDQsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.tuOQzIM3IDTItzspumnC4PR73tlTZ-uz-mP1RoD1y-dXJojGr_rnyxhYXnWsTeD7-OmHOg1XePBd0fnjTZ3-fp7nMMMAeAGxW_GUOFfgrfRzvcugtOHsIUqr6zdvZl6KyLL0t3d9gnvE7mWzsqsAFOQ8O5885oI-jVok3c5exuptC5BeTleRVOdnbY1jq5U52l0g9Loej3sR9BG909fx4nOusBcSXPWKyQnIn0XOFvtbT-3R53OROTAwDnr7PPsCGBg-CS69r2ZD3YEDC1tdn9l0_BcXenHsOQpgxMaH24aFuAKuCWDWPUsyvbFPchw_OMdvq0j_2zRB0l5JEa1qUQ
###Markdown
Space creation**Tip:** If you do not have `space` already created, please convert below three cells to `code` and run them.First of all, you need to create a `space` that will be used in all of your further cURL calls. If you do not have `space` already created, below is the cURL call to create one. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_create" target="_blank" rel="noopener no referrer">Space creation
###Code
%%bash --out space_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"name": "curl_DL", "storage": {"type": "bmcos_object_storage", "resource_crn": "'"$COS_CRN"'"}, "compute": [{"name": "'"$WML_INSTANCE_NAME"'", "crn": "'"$WML_INSTANCE_CRN"'", "type": "machine_learning"}]}' \
"$DATAPLATFORM_URL/v2/spaces" \
| grep '"id": ' | awk -F '"' '{ print $4 }'
space_id = space_id.split('\n')[1]
%env SPACE_ID=$space_id
###Output
_____no_output_____
###Markdown
Space creation is asynchronous. This means that you need to check space creation status after creation call.Make sure that your newly created space is `active`. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_get" target="_blank" rel="noopener no referrer">Get space information
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID"
###Output
_____no_output_____
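###Markdown
**Tip:** Rather than re-running the GET call manually, a small polling loop can wait until the space becomes `active`. This is a sketch; the `entity.status.state` path is an assumption based on the Spaces API response shape:
###Code
%%bash
# poll the space status every 5 seconds, up to 10 times (sketch)
for i in $(seq 1 10); do
STATE=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID" \
| python3 -c 'import json, sys; print(json.load(sys.stdin)["entity"]["status"]["state"])')
echo "space state: $STATE"
[ "$STATE" = "active" ] && break
sleep 5
done
###Output
_____no_output_____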
###Markdown
2. Experiment / optimizer configuration Training data connectionDefine the connection information to the COS bucket and the training-data `npz` file. This example uses the MNIST dataset, which can be downloaded from [here](https://s3.amazonaws.com/img-datasets/mnist.npz). You can also download it to the local filesystem by running the cell below.**Action**: Upload the training data to your COS bucket and enter the location information in the next cURL examples.
###Code
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
###Output
_____no_output_____
###Markdown
Get COS tokenRetrieve COS token for further authentication calls. <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-token" target="_blank" rel="noopener no referrer">Retrieve COS authentication token
###Code
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
###Output
env: COS_TOKEN=eyJraWQiOiIyMDIwMDgyMzE4MzIiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJpYW0tU2VydmljZUlkLTU3MzMwOTM3LWJkNTQtNDYyYi04ODUxLWY3OTFhZDYyYzY1OSIsImlkIjoiaWFtLVNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJyZWFsbWlkIjoiaWFtIiwiaWRlbnRpZmllciI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJuYW1lIjoiY3JlZGVudGlhbHNfMSIsInN1YiI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJzdWJfdHlwZSI6IlNlcnZpY2VJZCIsImFjY291bnQiOnsidmFsaWQiOnRydWUsImJzcyI6ImUwZjdlYzNhYzFiMjRlYzlhZTc3MWVmZDc3MjUzOGEyIiwiZnJvemVuIjp0cnVlfSwiaWF0IjoxNjAwMDkxNTEyLCJleHAiOjE2MDAwOTUxMTIsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.wdTfdoe9lqN_8-Y1-N_AcX6qopqnfVwLVFg5e-7kIvWIhEO_LvtfeMS8XuNEp8NXR6b4xWNl7aWbLiYMm5M7rIlhsPoij7OoryXv09Q1UTl8mwJ7i0PpnbUVK7Qt0PNuNmIj9BW-0ONYZsHB9SWC1ZnodD9k1SzUqe76RvOkiTU2cM4MQqghZp25RCLlGsBg4nCmD2_5wi72_acTW2z-8qz7ZaLXk8bYBCLfkLw-vK83Nwd9nUpRsre50rbmCS6KVatOjPxGtevEJp3AznrlO-9PDNkXtPyIvGhy72k2S2gYCGcV7AAEw96cUp6DH8VqOy0oWJwQXcmeqrE3Z8xEdw
###Markdown
Upload file to COSUpload your local dataset into your COS bucket <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-put-object" target="_blank" rel="noopener no referrer">Upload file to COS
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____
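###Markdown
A successful upload returns an empty body. To double-check that the object landed in the bucket, a `HEAD` request can be issued against the same object URL (a sketch reusing the COS bearer token):
###Code
%%bash
# -I sends a HEAD request; a 200 status line and a Content-Length header confirm the upload
curl -sk -I \
--header "Authorization: Bearer $COS_TOKEN" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____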
###Markdown
3. Model definition This section provides samples showing how to store a model definition via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_create" target="_blank" rel="noopener no referrer">Store a model definition for Deep Learning experiment
###Code
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.7"]}, "command": "python3 mnist_mlp.py"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
###Output
env: MODEL_DEFINITION_ID=7883edea-3751-47bd-9c47-f8fae0b837da
###Markdown
Model preparationDownload the files with Keras code. You can either download them via the link below or run the cell below the link. <a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip" target="_blank" rel="noopener no referrer">Download MNIST.zip
###Code
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
###Output
--2020-09-14 15:52:14-- https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving github.com... 140.82.121.4
Connecting to github.com|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip [following]
--2020-09-14 15:52:15-- https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving raw.githubusercontent.com... 151.101.112.133
Connecting to raw.githubusercontent.com|151.101.112.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3836 (3.7K) [application/zip]
Saving to: 'MNIST.zip'
0K ... 100% 7.83M=0s
2020-09-14 15:52:15 (7.83 MB/s) - 'MNIST.zip' saved [3836/3836]
###Markdown
**Tip**: Convert the cell below to `code` and run it to see the model definition's code.
###Code
!unzip -oqd . MNIST.zip && cat mnist_mlp.py
###Output
_____no_output_____
###Markdown
Upload model for the model definition <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_upload_model" target="_blank" rel="noopener no referrer">Upload model for the model definition
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
###Output
{
"attachment_id": "189ef773-bc16-4a35-bca3-569020476c49",
"content_format": "native",
"persisted": true
}
###Markdown
4. Experiment runThis section provides samples showing how to trigger a Deep Learning experiment via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_create" target="_blank" rel="noopener no referrer">Schedule a training job for Deep Learning experiment
###Code
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "s3", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "type": "s3"}, "tags": [{"value": "tags_mnist", "description": "some MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.1-py3.7"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
###Output
env: TRAINING_ID=7709e2d9-1d6b-4185-91d0-c926cb4b8185
###Markdown
Get training detailsTraining is an asynchronous endpoint. If you want to monitor the training status and details, use a GET method and specify the training you want to monitor by its training ID. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_get" target="_blank" rel="noopener no referrer">Get information about training job
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Get training status
###Code
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
###Output
completed
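###Markdown
**Tip:** Instead of re-running the status cell by hand, a polling loop can wait for a terminal state. This is a sketch reusing the extraction idiom above; treating `completed`, `failed` and `canceled` as the terminal states is an assumption:
###Code
%%bash
# poll the training state every 30 seconds until it reaches a terminal state (sketch)
while true; do
STATUS=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo "training state: $STATUS"
case $STATUS in completed|failed|canceled) break;; esac
sleep 30
done
###Output
_____no_output_____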
###Markdown
Please make sure that the training is completed before you go to the next sections. Monitor the `state` of your training by re-running the cell above a couple of times, or with the polling sketch above. Get selected modelGet the Keras saved model location in COS from the Deep Learning training job.
###Code
%%bash --out model_name
# use a scratch variable so that the shell's PATH is not clobbered
DETAILS=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
# the saved model name follows the "logs" key in the training details
DETAILS=${DETAILS#*logs\":\"}
MODEL_NAME=${DETAILS%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
###Output
env: MODEL_NAME=training-RuDTVvdMR
###Markdown
5. Historical runsIn this section you will see cURL examples describing how to get information about historical training runs. The output should be similar to the output from training creation, but you should see more training entries. Listing trainings: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_list" target="_blank" rel="noopener no referrer">Get list of historical training jobs information
###Code
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Cancel training run**Tip:** If you want to cancel your training, please convert the cell below to `code`, specify the training ID, and run it. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Canceling training
###Code
%%bash
TRAINING_ID_TO_CANCEL=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
--- 6. Deploy and ScoreIn this section you will learn how to deploy and score the trained model as a web service using your WML instance. Before creating the deployment, you need to store your model in the WML repository. The cURL call example below shows how to do it; remember that you need to specify where your chosen model is stored in COS. Store Deep Learning modelStore information about your model in the WML repository. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_create" target="_blank" rel="noopener no referrer">Model storing
###Code
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.1", "software_spec": {"name": "default_py3.7"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
###Output
env: MODEL_ID=e5417dbf-77b4-4e72-a47e-0be8aa2ecd83
###Markdown
Download model contentIf you want to download your saved model, please make the following call. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_filtered_download" target="_blank" rel="noopener no referrer">Download model content
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
###Output
-rw-r--r-- 1 jansoltysik staff 4765639 Sep 14 16:00 mnist_cnn.h5.tar.gz
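###Markdown
The downloaded archive can be inspected before use. Listing its contents (a sketch; `tar -tzf` only lists, it does not extract):
###Code
!tar -tzf mnist_cnn.h5.tar.gz
###Output
_____no_output_____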
###Markdown
Deployment creationA Deep Learning online deployment creation example is presented below. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_create" target="_blank" rel="noopener no referrer">Create deployment
###Code
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
###Output
env: DEPLOYMENT_ID=c7568bf5-9c70-41bf-83c1-ed23fbda2ccc
###Markdown
Get deployment detailsAs the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going to the next points. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_get" target="_blank" rel="noopener no referrer">Get deployment details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
{
"entity": {
"asset": {
"id": "e5417dbf-77b4-4e72-a47e-0be8aa2ecd83"
},
"custom": {},
"deployed_asset_type": "model",
"description": "This is description",
"hardware_spec": {
"id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b",
"name": "S",
"num_nodes": 1
},
"name": "MNIST deployment",
"online": {},
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527",
"status": {
"online_url": {
"url": "https://yp-qa.ml.cloud.ibm.com/ml/v4/deployments/c7568bf5-9c70-41bf-83c1-ed23fbda2ccc/predictions"
},
"state": "initializing"
}
},
"metadata": {
"created_at": "2020-09-14T14:00:55.027Z",
"description": "This is description",
"id": "c7568bf5-9c70-41bf-83c1-ed23fbda2ccc",
"modified_at": "2020-09-14T14:00:55.027Z",
"name": "MNIST deployment",
"owner": "IBMid-55000091VC",
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527"
}
}
###Markdown
Prepare scoring input data**Hint:** You may need to install numpy using the following command: `!pip install numpy`
###Code
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
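# scale pixel values from [0, 255] to [0, 1] and flatten each 28x28 image into a 784-element list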
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Scoring of a webserviceIf you want to make a `score` call on your deployment, use the method below: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployment%20Jobs/deployment_jobs_create" target="_blank" rel="noopener no referrer">Create deployment job
###Code
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
###Output
{
"predictions": [
{
"fields": [
"prediction",
"prediction_classes",
"probability"
],
"values": [
[
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
],
7,
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
]
],
[
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
],
2,
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
]
]
]
}
]
}
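###Markdown
The same scoring request can also be issued from Python with `requests`. This is a sketch mirroring the cURL call above; it assumes the environment variables set earlier in this notebook and the `image_1`/`image_2` lists from the previous cell:
###Code
import os
import requests

# sketch: call the online deployment's predictions endpoint from Python
url = "{}/ml/v4/deployments/{}/predictions".format(
    os.environ["WML_ENDPOINT_URL"], os.environ["DEPLOYMENT_ID"])
headers = {"Authorization": "Bearer " + os.environ["TOKEN"]}
payload = {"space_id": os.environ["SPACE_ID"],
           "input_data": [{"values": [image_1, image_2]}]}
r = requests.post(url, params={"version": "2020-08-01"}, headers=headers, json=payload)
# each returned row is [probability_vector, predicted_class, probability_vector]
print([row[1] for row in r.json()["predictions"][0]["values"]])
###Output
_____no_output_____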
###Markdown
Listing all deployments <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_list" target="_blank" rel="noopener no referrer">List deployments details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
7. Cleaning sectionThe section below is useful when you want to clean up all of your previous work within this notebook. Just convert the cells below into `code` and run them. Delete training run**Tip:** You can completely delete a training run together with its metadata. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Deleting training
###Code
%%bash
# use a plain shell assignment; the %env magic does not work inside a %%bash cell
TRAINING_ID_TO_DELETE=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01&hard_delete=true"
###Output
_____no_output_____
###Markdown
Deleting deployment**Tip:** You can delete existing deployment by calling DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_delete" target="_blank" rel="noopener no referrer">Delete deployment
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model from repository**Tip:** If you want to completely remove your stored model and model metadata, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_delete" target="_blank" rel="noopener no referrer">Delete model from repository
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model definition**Tip:** If you want to completely remove your model definition, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_delete" target="_blank" rel="noopener no referrer">Delete model definition
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Use Keras to recognize hand-written digits with Watson Machine Learning REST API This notebook contains steps and code to demonstrate support of Keras Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring.Some familiarity with cURL is helpful. This notebook uses cURL examples. Learning goalsThe learning goals of this notebook are:- Working with Watson Machine Learning experiments to train Deep Learning models.- Downloading computed models to local storage.- Online deployment and scoring of trained model. ContentsThis notebook contains the following parts:1. [Setup](setup) 2. [Experiment definition](experiment_definition) 3. [Model definition](model_definition) 4. [Experiment Run](run) 5. [Historical runs](runs) 6. [Deploy and Score](deploy_and_score) 7. [Cleaning](cleaning) 8. [Summary and next steps](summary) 1. Set up the environmentBefore you use the sample code in this notebook, you must perform the following setup tasks:- Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance can be found here).- Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage can be found here). **Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.** You can find your COS credentials in COS instance dashboard under the **Service credentials** tab.Go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information.Authenticate the Watson Machine Learning service on IBM Cloud.Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. **NOTE:** You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
###Code
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
###Output
_____no_output_____
###Markdown
Getting WML authorization token for further cURL calls Example of cURL call to get WML token
###Code
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
###Output
env: TOKEN=eyJraWQiOiIyMDIwMDgyMzE4MzIiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC01NTAwMDA5MVZDIiwiaWQiOiJJQk1pZC01NTAwMDA5MVZDIiwicmVhbG1pZCI6IklCTWlkIiwiaWRlbnRpZmllciI6IjU1MDAwMDkxVkMiLCJnaXZlbl9uYW1lIjoiV01MIiwiZmFtaWx5X25hbWUiOiJXTUwtQmV0YSIsIm5hbWUiOiJXTUwgV01MLUJldGEiLCJlbWFpbCI6IldNTC1CZXRhQHBsLmlibS5jb20iLCJzdWIiOiJXTUwtQmV0YUBwbC5pYm0uY29tIiwiYWNjb3VudCI6eyJ2YWxpZCI6dHJ1ZSwiYnNzIjoiZTBmN2VjM2FjMWIyNGVjOWFlNzcxZWZkNzcyNTM4YTIiLCJpbXNfdXNlcl9pZCI6IjcxNzY5NDMiLCJmcm96ZW4iOnRydWUsImltcyI6IjE2ODQ3ODMifSwiaWF0IjoxNjAwMDkxNTA0LCJleHAiOjE2MDAwOTUxMDQsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.tuOQzIM3IDTItzspumnC4PR73tlTZ-uz-mP1RoD1y-dXJojGr_rnyxhYXnWsTeD7-OmHOg1XePBd0fnjTZ3-fp7nMMMAeAGxW_GUOFfgrfRzvcugtOHsIUqr6zdvZl6KyLL0t3d9gnvE7mWzsqsAFOQ8O5885oI-jVok3c5exuptC5BeTleRVOdnbY1jq5U52l0g9Loej3sR9BG909fx4nOusBcSXPWKyQnIn0XOFvtbT-3R53OROTAwDnr7PPsCGBg-CS69r2ZD3YEDC1tdn9l0_BcXenHsOQpgxMaH24aFuAKuCWDWPUsyvbFPchw_OMdvq0j_2zRB0l5JEa1qUQ
###Markdown
Space creation**Tip:** If you do not have a `space` already created, please convert the three cells below to `code` and run them. First of all, you need to create a `space` that will be used in all of your further cURL calls; the cURL call below creates one. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_create" target="_blank" rel="noopener no referrer">Space creation
###Code
%%bash --out space_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"name": "curl_DL", "storage": {"type": "bmcos_object_storage", "resource_crn": "'"$COS_CRN"'"}, "compute": [{"name": "'"$WML_INSTANCE_NAME"'", "crn": "'"$WML_INSTANCE_CRN"'", "type": "machine_learning"}]}' \
"$DATAPLATFORM_URL/v2/spaces" \
| grep '"id": ' | awk -F '"' '{ print $4 }'
space_id = space_id.split('\n')[1]
%env SPACE_ID=$space_id
###Output
_____no_output_____
###Markdown
Space creation is asynchronous, so you need to check the space creation status after the creation call. Make sure that your newly created space is `active`. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_get" target="_blank" rel="noopener no referrer">Get space information
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID"
###Output
_____no_output_____
###Markdown
2. Experiment / optimizer configuration Training data connectionDefine the connection information to the COS bucket and the training-data `npz` file. This example uses the MNIST dataset, which can be downloaded from [here](https://s3.amazonaws.com/img-datasets/mnist.npz). You can also download it to the local filesystem by running the cell below.**Action**: Upload the training data to your COS bucket and enter the location information in the next cURL examples.
###Code
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
###Output
_____no_output_____
###Markdown
Get COS tokenRetrieve COS token for further authentication calls. <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-token" target="_blank" rel="noopener no referrer">Retrieve COS authentication token
###Code
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
###Output
env: COS_TOKEN=eyJraWQiOiIyMDIwMDgyMzE4MzIiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJpYW0tU2VydmljZUlkLTU3MzMwOTM3LWJkNTQtNDYyYi04ODUxLWY3OTFhZDYyYzY1OSIsImlkIjoiaWFtLVNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJyZWFsbWlkIjoiaWFtIiwiaWRlbnRpZmllciI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJuYW1lIjoiY3JlZGVudGlhbHNfMSIsInN1YiI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJzdWJfdHlwZSI6IlNlcnZpY2VJZCIsImFjY291bnQiOnsidmFsaWQiOnRydWUsImJzcyI6ImUwZjdlYzNhYzFiMjRlYzlhZTc3MWVmZDc3MjUzOGEyIiwiZnJvemVuIjp0cnVlfSwiaWF0IjoxNjAwMDkxNTEyLCJleHAiOjE2MDAwOTUxMTIsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.wdTfdoe9lqN_8-Y1-N_AcX6qopqnfVwLVFg5e-7kIvWIhEO_LvtfeMS8XuNEp8NXR6b4xWNl7aWbLiYMm5M7rIlhsPoij7OoryXv09Q1UTl8mwJ7i0PpnbUVK7Qt0PNuNmIj9BW-0ONYZsHB9SWC1ZnodD9k1SzUqe76RvOkiTU2cM4MQqghZp25RCLlGsBg4nCmD2_5wi72_acTW2z-8qz7ZaLXk8bYBCLfkLw-vK83Nwd9nUpRsre50rbmCS6KVatOjPxGtevEJp3AznrlO-9PDNkXtPyIvGhy72k2S2gYCGcV7AAEw96cUp6DH8VqOy0oWJwQXcmeqrE3Z8xEdw
###Markdown
Upload file to COSUpload your local dataset into your COS bucket <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-put-object" target="_blank" rel="noopener no referrer">Upload file to COS
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____
###Markdown
There should be an empty response when the upload finishes successfully. 3. Model definition This section provides samples showing how to store a model definition via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_create" target="_blank" rel="noopener no referrer">Store a model definition for Deep Learning experiment
###Code
%%bash --out model_definition_payload
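# the command below passes the dataset file names and hyperparameters as CLI arguments;
# ${DATA_DIR} is assumed to be resolved by the training runtime to the mounted training-data location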
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.7"]}, "command": "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
###Output
env: MODEL_DEFINITION_ID=7883edea-3751-47bd-9c47-f8fae0b837da
###Markdown
Model preparationDownload the files with Keras code. You can either download them via the link below or run the cell below the link. <a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip" target="_blank" rel="noopener no referrer">Download MNIST.zip
###Code
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
###Output
--2020-09-14 15:52:14-- https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving github.com... 140.82.121.4
Connecting to github.com|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip [following]
--2020-09-14 15:52:15-- https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving raw.githubusercontent.com... 151.101.112.133
Connecting to raw.githubusercontent.com|151.101.112.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3836 (3.7K) [application/zip]
Saving to: 'MNIST.zip'
0K ... 100% 7.83M=0s
2020-09-14 15:52:15 (7.83 MB/s) - 'MNIST.zip' saved [3836/3836]
###Markdown
**Tip**: Convert the cell below to `code` and run it to see the model definition's code.
###Code
!unzip -oqd . MNIST.zip && cat mnist_mlp.py
###Output
_____no_output_____
###Markdown
Upload model for the model definition <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_upload_model" target="_blank" rel="noopener no referrer">Upload model for the model definition
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
###Output
{
"attachment_id": "189ef773-bc16-4a35-bca3-569020476c49",
"content_format": "native",
"persisted": true
}
###Markdown
4. Experiment runThis section provides samples showing how to trigger a Deep Learning experiment via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_create" target="_blank" rel="noopener no referrer">Schedule a training job for Deep Learning experiment
###Code
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "s3", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "type": "s3"}, "tags": [{"value": "tags_mnist", "description": "some MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "command": "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.1-py3.7"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
###Output
env: TRAINING_ID=7709e2d9-1d6b-4185-91d0-c926cb4b8185
###Markdown
Get training detailsTraining is an asynchronous endpoint. If you want to monitor the training status and details, use a GET method and specify the training you want to monitor by its training ID. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_get" target="_blank" rel="noopener no referrer">Get information about training job
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Get training status
###Code
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
###Output
completed
###Markdown
Please make sure that the training is completed before you go to the next sections. Monitor the `state` of your training by re-running the cell above a couple of times. Get selected modelGet the Keras saved model location in COS from the Deep Learning training job.
###Code
%%bash --out model_name
# use a scratch variable so that the shell's PATH is not clobbered
DETAILS=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
# the saved model name follows the "logs" key in the training details
DETAILS=${DETAILS#*logs\":\"}
MODEL_NAME=${DETAILS%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
###Output
env: MODEL_NAME=training-RuDTVvdMR
###Markdown
5. Historical runsIn this section you will see cURL examples describing how to get information about historical training runs. The output should be similar to the output from training creation, but you should see more training entries. Listing trainings: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_list" target="_blank" rel="noopener no referrer">Get list of historical training jobs information
###Code
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Cancel training run**Tip:** If you want to cancel your training, please convert below cell to `code`, specify training ID and run. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Canceling training
###Code
%%bash
TRAINING_ID_TO_CANCEL=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
--- 6. Deploy and ScoreIn this section you will learn how to deploy and score the trained model as a web service using your WML instance. Before creating the deployment, you need to store your model in the WML repository. The cURL call example below shows how to do it; remember that you need to specify where your chosen model is stored in COS. Store Deep Learning modelStore information about your model in the WML repository. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_create" target="_blank" rel="noopener no referrer">Model storing
###Code
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.1", "software_spec": {"name": "default_py3.7"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
###Output
env: MODEL_ID=e5417dbf-77b4-4e72-a47e-0be8aa2ecd83
###Markdown
Download model contentIf you want to download your saved model, please make the following call. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_filtered_download" target="_blank" rel="noopener no referrer">Download model content
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
###Output
-rw-r--r-- 1 jansoltysik staff 4765639 Sep 14 16:00 mnist_cnn.h5.tar.gz
###Markdown
Deployment creationA Deep Learning online deployment creation example is presented below. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_create" target="_blank" rel="noopener no referrer">Create deployment
###Code
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
###Output
env: DEPLOYMENT_ID=c7568bf5-9c70-41bf-83c1-ed23fbda2ccc
###Markdown
Get deployment detailsAs the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going to the next points. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_get" target="_blank" rel="noopener no referrer">Get deployment details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
{
"entity": {
"asset": {
"id": "e5417dbf-77b4-4e72-a47e-0be8aa2ecd83"
},
"custom": {},
"deployed_asset_type": "model",
"description": "This is description",
"hardware_spec": {
"id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b",
"name": "S",
"num_nodes": 1
},
"name": "MNIST deployment",
"online": {},
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527",
"status": {
"online_url": {
"url": "https://yp-qa.ml.cloud.ibm.com/ml/v4/deployments/c7568bf5-9c70-41bf-83c1-ed23fbda2ccc/predictions"
},
"state": "initializing"
}
},
"metadata": {
"created_at": "2020-09-14T14:00:55.027Z",
"description": "This is description",
"id": "c7568bf5-9c70-41bf-83c1-ed23fbda2ccc",
"modified_at": "2020-09-14T14:00:55.027Z",
"name": "MNIST deployment",
"owner": "IBMid-55000091VC",
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527"
}
}
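###Markdown
**Tip:** The `state` above starts as `initializing`. A polling sketch, analogous to the training one, can wait until the deployment reports `ready` (the state values are an assumption based on the output above):
###Code
%%bash
# poll the deployment state every 10 seconds until it is ready (sketch)
while true; do
STATE=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01")
STATE=${STATE#*state\":\"}
STATE=${STATE%%\"*}
echo "deployment state: $STATE"
case $STATE in ready|failed) break;; esac
sleep 10
done
###Output
_____no_output_____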
###Markdown
Prepare scoring input data**Hint:** You may need to install numpy using the following command: `!pip install numpy`
###Code
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
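# preview the two test digits that are sent for scoring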
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Scoring of a webserviceIf you want to make a `score` call on your deployment, use the method below: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployment%20Jobs/deployment_jobs_create" target="_blank" rel="noopener no referrer">Create deployment job
###Code
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
###Output
{
"predictions": [
{
"fields": [
"prediction",
"prediction_classes",
"probability"
],
"values": [
[
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
],
7,
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
]
],
[
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
],
2,
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
]
]
]
}
]
}
###Markdown
Listing all deployments <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_list" target="_blank" rel="noopener no referrer">List deployments details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
7. Cleaning sectionThe section below is useful when you want to clean up all of your previous work within this notebook. Just convert the cells below into `code` and run them. Delete training run**Tip:** You can completely delete a training run together with its metadata. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Deleting training
###Code
%%bash
# use a plain shell assignment; the %env magic does not work inside a %%bash cell
TRAINING_ID_TO_DELETE=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01&hard_delete=true"
###Output
_____no_output_____
###Markdown
Deleting deployment**Tip:** You can delete existing deployment by calling DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_delete" target="_blank" rel="noopener no referrer">Delete deployment
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model from repository**Tip:** If you want to completely remove your stored model and model metadata, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_delete" target="_blank" rel="noopener no referrer">Delete model from repository
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model definition**Tip:** If you want to completely remove your model definition, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_delete" target="_blank" rel="noopener no referrer">Delete model definition
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Use Keras to recognize hand-written digits with Watson Machine Learning REST API This notebook contains steps and code to demonstrate support of Keras Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring.Some familiarity with cURL is helpful. This notebook uses cURL examples. Learning goalsThe learning goals of this notebook are:- Working with Watson Machine Learning experiments to train Deep Learning models.- Downloading computed models to local storage.- Online deployment and scoring of trained model. ContentsThis notebook contains the following parts:1. [Setup](setup) 2. [Experiment definition](experiment_definition) 3. [Model definition](model_definition) 4. [Experiment Run](run) 5. [Historical runs](runs) 6. [Deploy and Score](deploy_and_score) 7. [Cleaning](cleaning) 8. [Summary and next steps](summary) 1. Set up the environmentBefore you use the sample code in this notebook, you must perform the following setup tasks:- Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance can be found here).- Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage can be found here). **Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.** You can find your COS credentials in COS instance dashboard under the **Service credentials** tab.Go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information.Authenticate the Watson Machine Learning service on IBM Cloud.Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. **NOTE:** You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
###Code
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
###Output
_____no_output_____
###Markdown
Getting WML authorization token for further cURL calls Example of cURL call to get WML token
###Code
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
###Output
env: TOKEN=eyJraWQiOiIyMDIxMDUyMDE4MzYiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC01NTAwMDA5MVZDIiwiaWQiOiJJQk1pZC01NTAwMDA5MVZDIiwicmVhbG1pZCI6IklCTWlkIiwianRpIjoiMzllMmRiMjQtZjFiNy00MWYyLTg2ODItZGRiZjRlZTlmZjM1IiwiaWRlbnRpZmllciI6IjU1MDAwMDkxVkMiLCJnaXZlbl9uYW1lIjoiV01MIiwiZmFtaWx5X25hbWUiOiJXTUwtQmV0YSIsIm5hbWUiOiJXTUwgV01MLUJldGEiLCJlbWFpbCI6IldNTC1CZXRhQHBsLmlibS5jb20iLCJzdWIiOiJXTUwtQmV0YUBwbC5pYm0uY29tIiwiYXV0aG4iOnsic3ViIjoiV01MLUJldGFAcGwuaWJtLmNvbSIsImlhbV9pZCI6ImlhbS01NTAwMDA5MVZDIiwibmFtZSI6IldNTCBXTUwtQmV0YSIsImdpdmVuX25hbWUiOiJXTUwiLCJmYW1pbHlfbmFtZSI6IldNTC1CZXRhIiwiZW1haWwiOiJXTUwtQmV0YUBwbC5pYm0uY29tIn0sImFjY291bnQiOnsiYm91bmRhcnkiOiJnbG9iYWwiLCJ2YWxpZCI6dHJ1ZSwiYnNzIjoiZTBmN2VjM2FjMWIyNGVjOWFlNzcxZWZkNzcyNTM4YTIiLCJpbXNfdXNlcl9pZCI6IjcxNzY5NDMiLCJmcm96ZW4iOnRydWUsImltcyI6IjE2ODQ3ODMifSwiaWF0IjoxNjIzODM3MTc1LCJleHAiOjE2MjM4NDA3NzUsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.ddBw_llqrcMW51a-iL6IyexaWGOsnkfH8RucO4gLhCebp6DPOQYD-zMT_iiebMFSxqq-gJ-72oDDQQclEzjp79iHQxrlmRRastkSrRlFYKqiVi-y0l_4Ka7PpCzYz9p5HdnHjfDQsOZIMGcKz-Y4nDZvqeTAnbYOshbgcUJuzO7GyFKKEFtbmi-ttRDu1Ex6fgbLCYTYTkUvRdxcoGIXQwMYnoU75yqr3CHSCAGnwhGDgKu5eoqVKZFV9WQSZMYYQuotxFzyaqXDcJfE71R1W764ajlHYcYykHCL6kSG7ksNNZJGyAjDks_WaU1mF8UBen_iC8_k-axJmx5AJFDQmQ
###Markdown
Space creation**Tip:** If you do not have a `space` already created, please convert the three cells below to `code` and run them. First of all, you need to create a `space` that will be used in all of your further cURL calls; the cURL call below creates one. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_create" target="_blank" rel="noopener no referrer">Space creation
###Code
%%bash --out space_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"name": "curl_DL", "storage": {"type": "bmcos_object_storage", "resource_crn": "'"$COS_CRN"'"}, "compute": [{"name": "'"$WML_INSTANCE_NAME"'", "crn": "'"$WML_INSTANCE_CRN"'", "type": "machine_learning"}]}' \
"$DATAPLATFORM_URL/v2/spaces" \
| grep '"id": ' | awk -F '"' '{ print $4 }'
space_id = space_id.split('\n')[1]
%env SPACE_ID=$space_id
###Output
_____no_output_____
###Markdown
Space creation is asynchronous, so you need to check the space creation status after the creation call. Make sure that your newly created space is `active`. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_get" target="_blank" rel="noopener no referrer">Get space information
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID"
###Output
_____no_output_____
###Markdown
2. Experiment / optimizer configuration Training data connectionDefine the connection information to the COS bucket and the training-data `npz` file. This example uses the MNIST dataset, which can be downloaded from [here](https://s3.amazonaws.com/img-datasets/mnist.npz). You can also download it to the local filesystem by running the cell below.**Action**: Upload the training data to your COS bucket and enter the location information in the next cURL examples.
###Code
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
###Output
_____no_output_____
###Markdown
Get COS tokenRetrieve COS token for further authentication calls. <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-token" target="_blank" rel="noopener no referrer">Retrieve COS authentication token
###Code
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
###Output
env: COS_TOKEN=eyJraWQiOiIyMDIxMDUyMDE4MzYiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJpYW0tU2VydmljZUlkLTU3MzMwOTM3LWJkNTQtNDYyYi04ODUxLWY3OTFhZDYyYzY1OSIsImlkIjoiaWFtLVNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJyZWFsbWlkIjoiaWFtIiwianRpIjoiZTZhOWJjY2UtM2EzMS00MjhmLThiYjktNjgwNTgxYzAyOGQ5IiwiaWRlbnRpZmllciI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJuYW1lIjoiY3JlZGVudGlhbHNfMSIsInN1YiI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJzdWJfdHlwZSI6IlNlcnZpY2VJZCIsImF1dGhuIjp7InN1YiI6ImlhbS1TZXJ2aWNlSWQtNTczMzA5MzctYmQ1NC00NjJiLTg4NTEtZjc5MWFkNjJjNjU5IiwiaWFtX2lkIjoiaWFtLWlhbS1TZXJ2aWNlSWQtNTczMzA5MzctYmQ1NC00NjJiLTg4NTEtZjc5MWFkNjJjNjU5Iiwic3ViX3R5cGUiOiIxIiwibmFtZSI6ImNyZWRlbnRpYWxzXzEifSwiYWNjb3VudCI6eyJib3VuZGFyeSI6Imdsb2JhbCIsInZhbGlkIjp0cnVlLCJic3MiOiJlMGY3ZWMzYWMxYjI0ZWM5YWU3NzFlZmQ3NzI1MzhhMiIsImZyb3plbiI6dHJ1ZX0sImlhdCI6MTYyMzgzNzE4MSwiZXhwIjoxNjIzODQwNzgxLCJpc3MiOiJodHRwczovL2lhbS5jbG91ZC5pYm0uY29tL2lkZW50aXR5IiwiZ3JhbnRfdHlwZSI6InVybjppYm06cGFyYW1zOm9hdXRoOmdyYW50LXR5cGU6YXBpa2V5Iiwic2NvcGUiOiJpYm0gb3BlbmlkIiwiY2xpZW50X2lkIjoiZGVmYXVsdCIsImFjciI6MSwiYW1yIjpbInB3ZCJdfQ.aN1pGaXxDcCGsoT6AtIsD-IRWSew2rc5S4JNFRT-KI-axDy2P834ZNdqwjwaguXz0tL4N6CQsy-NN8OSUs7Uy0bo8l1wPJUL42tDdKS5Wq5F8redL90wt1RQ2feRzbX18YvcnIrcygZRPZ3HRSSoBhiN0lm3OxXihwCotEf1dPu1z9Vfp1NEv6k1BCHMoHvbtwkxYXaoCZfbEm3Nvss0MKnDzr9-YNcpgLcVjvPpBpMUt1fuKzjNnjHQzy7dGAOlL6nRgqasxx8k7VC4JQ5DaYH7JSPSphbGhkB1Ut46oL4svsXpN7Jx3BUzp67y2pJ5AVXnlImRlhD7R9YKOiPSJw
###Markdown
Upload file to COSUpload your local dataset into your COS bucket <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-put-object" target="_blank" rel="noopener no referrer">Upload file to COS
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____
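###Markdown
Optionally, you can sanity-check the upload with a HEAD request before moving on. This is an illustrative sketch, not part of the original flow; a `200` status code means `mnist.npz` is present in the bucket.
###Code
%%bash
# Optional check (sketch): HEAD the object that was just uploaded.
# Assumes COS_TOKEN, COS_ENDPOINT and COS_BUCKET are set as in the cells above.
curl -sk -o /dev/null -w "%{http_code}\n" -I \
--header "Authorization: Bearer $COS_TOKEN" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____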
###Markdown
There should be an empty response when the upload finishes successfully. Create connection to COSCreated connections will be used in training to point to the given COS location.
###Code
%%bash --out connection_payload
CONNECTION_PAYLOAD='{"name": "REST COS connection", "datasource_type": "193a97c1-4475-4a19-b90c-295c4fdc6517", "properties": {"bucket": "'"$COS_BUCKET"'", "access_key": "'"$COS_ACCESS_KEY_ID"'", "secret_key": "'"$COS_SECRET_ACCESS_KEY"'", "iam_url": "'"$AUTH_ENDPOINT"'", "url": "'"$COS_ENDPOINT"'"}, "origin_country": "US"}'
echo $CONNECTION_PAYLOAD | python -m json.tool
%env CONNECTION_PAYLOAD=$connection_payload
%%bash --out connection_id
CONNECTION_ID=$(curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$CONNECTION_PAYLOAD" \
"$DATAPLATFORM_URL/v2/connections?space_id=$SPACE_ID")
CONNECTION_ID=${CONNECTION_ID#*asset_id\":\"}
CONNECTION_ID=${CONNECTION_ID%%\"*}
echo $CONNECTION_ID
%env CONNECTION_ID=$connection_id
###Output
env: CONNECTION_ID=65c9a953-8816-413b-9af7-a264bb5c610e
###Markdown
3. Model definition This section provides samples showing how to store a model definition via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_create" target="_blank" rel="noopener no referrer">Store a model definition for a Deep Learning experiment
###Code
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.8"]}, "command": "python3 mnist_mlp.py"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
###Output
env: MODEL_DEFINITION_ID=7883edea-3751-47bd-9c47-f8fae0b837da
###Markdown
Model preparationDownload the files with the Keras code. You can either download them via the link below or run the cell below the link. <a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip" target="_blank" rel="noopener no referrer">Download MNIST.zip
###Code
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
###Output
--2020-09-14 15:52:14-- https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving github.com... 140.82.121.4
Connecting to github.com|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip [following]
--2020-09-14 15:52:15-- https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving raw.githubusercontent.com... 151.101.112.133
Connecting to raw.githubusercontent.com|151.101.112.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3836 (3.7K) [application/zip]
Saving to: 'MNIST.zip'
0K ... 100% 7.83M=0s
2020-09-14 15:52:15 (7.83 MB/s) - 'MNIST.zip' saved [3836/3836]
###Markdown
**Tip**: Convert the cell below to code and run it to see the model definition's code.
###Code
!unzip -oqd . MNIST.zip && cat mnist_mlp.py
###Output
_____no_output_____
###Markdown
Upload model for the model definition <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_upload_model" target="_blank" rel="noopener no referrer">Upload model for the model definition
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
###Output
{
"attachment_id": "189ef773-bc16-4a35-bca3-569020476c49",
"content_format": "native",
"persisted": true
}
###Markdown
4. Experiment runThis section provides samples showing how to trigger a Deep Learning experiment via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_create" target="_blank" rel="noopener no referrer">Schedule a training job for a Deep Learning experiment
###Code
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "connection_asset", "connection": {"id": "'"$CONNECTION_ID"'"}, "location": {"bucket": "'"$COS_BUCKET"'", "file_name": "."}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"id": "'"$CONNECTION_ID"'"}, "location": {"bucket": "'"$COS_BUCKET"'", "file_name": "."}, "type": "connection_asset"}, "tags": [{"value": "tags_mnist", "description": "dome MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.4-py3.8"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
###Output
env: TRAINING_ID=7709e2d9-1d6b-4185-91d0-c926cb4b8185
###Markdown
Get training detailsTraining is an asynchronous endpoint. If you want to monitor the training status and details, you need to use a GET method and specify which training to monitor by its training ID. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_get" target="_blank" rel="noopener no referrer">Get information about training job
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Get training status
###Code
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
###Output
completed
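###Markdown
Instead of re-running the status cell by hand, you can poll until the job reaches a terminal state. The loop below is a minimal sketch (not part of the original notebook) that reuses the same GET call and string extraction as above; the terminal state names (`completed`, `failed`, `canceled`) are assumptions about the job states, not a verified list.
###Code
%%bash
# Illustrative polling loop (sketch); assumes TOKEN, WML_ENDPOINT_URL, TRAINING_ID and SPACE_ID are set.
while true; do
    STATUS=$(curl -sk -X GET \
        --header "Authorization: Bearer $TOKEN" \
        --header "Accept: application/json" \
        "$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
    # Same string extraction as the cell above: pull the value after "state":"
    STATUS=${STATUS#*state\":\"}
    STATUS=${STATUS%%\"*}
    echo "state: $STATUS"
    case "$STATUS" in
        completed|failed|canceled) break ;;
    esac
    sleep 30
done
###Output
_____no_output_____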
###Markdown
Please make sure that the training is completed before you go to the next sections. Monitor the `state` of your training by re-running the status cell a couple of times, or with the polling sketch above. Get selected modelGet the Keras saved model location in COS from the Deep Learning training job.
###Code
%%bash --out model_name
PATH=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
PATH=${PATH#*logs\":\"}
MODEL_NAME=${PATH%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
###Output
env: MODEL_NAME=training-RuDTVvdMR
###Markdown
5. Historical runsIn this section you will see cURL examples showing how to get information about historical training runs. The output should be similar to the output from training creation, but you should see more training entries. Listing trainings: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_list" target="_blank" rel="noopener no referrer">Get list of historical training jobs information
###Code
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Cancel training run**Tip:** If you want to cancel your training, please convert the cell below to `code`, specify the training ID, and run it. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Canceling training
###Code
%%bash
TRAINING_ID_TO_CANCEL=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
--- 6. Deploy and ScoreIn this section you will learn how to deploy and score a pipeline model as a web service using your WML instance. Before creating a deployment, you need to store your model in the WML repository. The cURL example below shows how to do it. Remember that you need to specify where your chosen model is stored in COS. Store Deep Learning modelStore information about your model in the WML repository. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_create" target="_blank" rel="noopener no referrer">Model storing
###Code
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.4", "software_spec": {"name": "tensorflow_2.4-py3.8"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
###Output
env: MODEL_ID=e5417dbf-77b4-4e72-a47e-0be8aa2ecd83
###Markdown
Download model contentIf you want to download your saved model, please make the following call. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_filtered_download" target="_blank" rel="noopener no referrer">Download model content
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
###Output
-rw-r--r-- 1 jansoltysik staff 4765639 Sep 14 16:00 mnist_cnn.h5.tar.gz
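###Markdown
The downloaded content is a gzipped tar archive. As an optional sketch, you can list its contents and unpack it locally:
###Code
%%bash
# Optional (sketch): inspect, then extract, the downloaded model archive.
tar -tzf mnist_cnn.h5.tar.gz
tar -xzf mnist_cnn.h5.tar.gz
###Output
_____no_output_____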
###Markdown
Deployment creationThe creation of a Deep Learning online deployment is presented below. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_create" target="_blank" rel="noopener no referrer">Create deployment
###Code
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
###Output
env: DEPLOYMENT_ID=c7568bf5-9c70-41bf-83c1-ed23fbda2ccc
###Markdown
Get deployment detailsAs the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going to the next steps. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_get" target="_blank" rel="noopener no referrer">Get deployment details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
{
"entity": {
"asset": {
"id": "e5417dbf-77b4-4e72-a47e-0be8aa2ecd83"
},
"custom": {},
"deployed_asset_type": "model",
"description": "This is description",
"hardware_spec": {
"id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b",
"name": "S",
"num_nodes": 1
},
"name": "MNIST deployment",
"online": {},
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527",
"status": {
"online_url": {
"url": "https://yp-qa.ml.cloud.ibm.com/ml/v4/deployments/c7568bf5-9c70-41bf-83c1-ed23fbda2ccc/predictions"
},
"state": "initializing"
}
},
"metadata": {
"created_at": "2020-09-14T14:00:55.027Z",
"description": "This is description",
"id": "c7568bf5-9c70-41bf-83c1-ed23fbda2ccc",
"modified_at": "2020-09-14T14:00:55.027Z",
"name": "MNIST deployment",
"owner": "IBMid-55000091VC",
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527"
}
}
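###Markdown
Since the deployment above reports `initializing`, a small wait-until-`ready` loop can save manual re-checking. This is an illustrative sketch built from the same GET call; the retry count and sleep interval are arbitrary choices.
###Code
%%bash
# Illustrative readiness loop (sketch); assumes TOKEN, WML_ENDPOINT_URL, DEPLOYMENT_ID and SPACE_ID are set.
for i in 1 2 3 4 5 6 7 8 9 10; do
    STATE=$(curl -sk -X GET \
        --header "Authorization: Bearer $TOKEN" \
        "$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01")
    # Extract the value after "state":" from the raw JSON response.
    STATE=${STATE#*state\":\"}
    STATE=${STATE%%\"*}
    echo "attempt $i: $STATE"
    if [ "$STATE" = "ready" ]; then break; fi
    sleep 15
done
###Output
_____no_output_____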
###Markdown
Prepare scoring input data**Hint:** You may need to install numpy using the following command: `!pip install numpy`
###Code
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Scoring of a webserviceIf you want to make a `score` call on your deployment, please follow the method below: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployment%20Jobs/deployment_jobs_create" target="_blank" rel="noopener no referrer">Create deployment job
###Code
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
###Output
{
"predictions": [
{
"fields": [
"prediction",
"prediction_classes",
"probability"
],
"values": [
[
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
],
7,
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
]
],
[
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
],
2,
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
]
]
]
}
]
}
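###Markdown
Passing two 784-element arrays as shell arguments works, but it is fragile for larger inputs. A file-based variant (a sketch, not from the original notebook) writes the payload from Python first, e.g. `import json; json.dump({"input_data": [{"values": [image_1, image_2]}]}, open("scoring_payload.json", "w"))`, and then posts the file:
###Code
%%bash
# Illustrative alternative (sketch): POST a payload file instead of inlining the arrays.
# Assumes scoring_payload.json was written beforehand and TOKEN, WML_ENDPOINT_URL, DEPLOYMENT_ID are set.
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--data @scoring_payload.json \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____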
###Markdown
Listing all deployments <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_list" target="_blank" rel="noopener no referrer">List deployments details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
7. Cleaning sectionThe section below is useful when you want to clean up all of your previous work within this notebook. Just convert the cells below to `code` and run them. Delete training run**Tip:** You can completely delete a training run together with its metadata. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Deleting training
###Code
%%bash
TRAINING_ID_TO_DELETE=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01&hard_delete=true"
###Output
_____no_output_____
###Markdown
Deleting deployment**Tip:** You can delete an existing deployment by calling the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_delete" target="_blank" rel="noopener no referrer">Delete deployment
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model from repository**Tip:** If you want to completely remove your stored model and model metadata, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_delete" target="_blank" rel="noopener no referrer">Delete model from repository
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model definition**Tip:** If you want to completely remove your model definition, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_delete" target="_blank" rel="noopener no referrer">Delete model definition
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Use Keras to recognize hand-written digits with Watson Machine Learning REST API This notebook contains steps and code to demonstrate support of Keras Deep Learning experiments in the Watson Machine Learning service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models, and scoring. Some familiarity with cURL is helpful. This notebook uses cURL examples. Learning goalsThe learning goals of this notebook are:- Working with Watson Machine Learning experiments to train Deep Learning models.- Downloading computed models to local storage.- Online deployment and scoring of a trained model. ContentsThis notebook contains the following parts:1. [Setup](setup) 2. [Experiment definition](experiment_definition) 3. [Model definition](model_definition) 4. [Experiment Run](run) 5. [Historical runs](runs) 6. [Deploy and Score](deploy_and_score) 7. [Cleaning](cleaning) 8. [Summary and next steps](summary) 1. Set up the environmentBefore you use the sample code in this notebook, you must perform the following setup tasks:- Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance can be found here).- Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage can be found here). **Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.** You can find your COS credentials in the COS instance dashboard under the **Service credentials** tab. Go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information. Authenticate the Watson Machine Learning service on IBM Cloud.Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. **NOTE:** You can also get a service-specific API key by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
###Code
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
###Output
_____no_output_____
###Markdown
Getting a WML authorization token for further cURL calls Example of a cURL call to get a WML token
###Code
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
###Output
env: TOKEN=eyJraWQiOiIyMDIxMDUyMDE4MzYiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC01NTAwMDA5MVZDIiwiaWQiOiJJQk1pZC01NTAwMDA5MVZDIiwicmVhbG1pZCI6IklCTWlkIiwianRpIjoiMzllMmRiMjQtZjFiNy00MWYyLTg2ODItZGRiZjRlZTlmZjM1IiwiaWRlbnRpZmllciI6IjU1MDAwMDkxVkMiLCJnaXZlbl9uYW1lIjoiV01MIiwiZmFtaWx5X25hbWUiOiJXTUwtQmV0YSIsIm5hbWUiOiJXTUwgV01MLUJldGEiLCJlbWFpbCI6IldNTC1CZXRhQHBsLmlibS5jb20iLCJzdWIiOiJXTUwtQmV0YUBwbC5pYm0uY29tIiwiYXV0aG4iOnsic3ViIjoiV01MLUJldGFAcGwuaWJtLmNvbSIsImlhbV9pZCI6ImlhbS01NTAwMDA5MVZDIiwibmFtZSI6IldNTCBXTUwtQmV0YSIsImdpdmVuX25hbWUiOiJXTUwiLCJmYW1pbHlfbmFtZSI6IldNTC1CZXRhIiwiZW1haWwiOiJXTUwtQmV0YUBwbC5pYm0uY29tIn0sImFjY291bnQiOnsiYm91bmRhcnkiOiJnbG9iYWwiLCJ2YWxpZCI6dHJ1ZSwiYnNzIjoiZTBmN2VjM2FjMWIyNGVjOWFlNzcxZWZkNzcyNTM4YTIiLCJpbXNfdXNlcl9pZCI6IjcxNzY5NDMiLCJmcm96ZW4iOnRydWUsImltcyI6IjE2ODQ3ODMifSwiaWF0IjoxNjIzODM3MTc1LCJleHAiOjE2MjM4NDA3NzUsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.ddBw_llqrcMW51a-iL6IyexaWGOsnkfH8RucO4gLhCebp6DPOQYD-zMT_iiebMFSxqq-gJ-72oDDQQclEzjp79iHQxrlmRRastkSrRlFYKqiVi-y0l_4Ka7PpCzYz9p5HdnHjfDQsOZIMGcKz-Y4nDZvqeTAnbYOshbgcUJuzO7GyFKKEFtbmi-ttRDu1Ex6fgbLCYTYTkUvRdxcoGIXQwMYnoU75yqr3CHSCAGnwhGDgKu5eoqVKZFV9WQSZMYYQuotxFzyaqXDcJfE71R1W764ajlHYcYykHCL6kSG7ksNNZJGyAjDks_WaU1mF8UBen_iC8_k-axJmx5AJFDQmQ
###Markdown
Space creation**Tip:** If you do not have `space` already created, please convert below three cells to `code` and run them.First of all, you need to create a `space` that will be used in all of your further cURL calls. If you do not have `space` already created, below is the cURL call to create one. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_create" target="_blank" rel="noopener no referrer">Space creation
###Code
%%bash --out space_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"name": "curl_DL", "storage": {"type": "bmcos_object_storage", "resource_crn": "'"$COS_CRN"'"}, "compute": [{"name": "'"$WML_INSTANCE_NAME"'", "crn": "'"$WML_INSTANCE_CRN"'", "type": "machine_learning"}]}' \
"$DATAPLATFORM_URL/v2/spaces" \
| grep '"id": ' | awk -F '"' '{ print $4 }'
space_id = space_id.split('\n')[1]
%env SPACE_ID=$space_id
###Output
_____no_output_____
###Markdown
Space creation is asynchronous. This means that you need to check the space creation status after the creation call. Make sure that your newly created space is `active`. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_get" target="_blank" rel="noopener no referrer">Get space information
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID"
###Output
_____no_output_____
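###Markdown
If the space is still being provisioned, a short polling sketch avoids re-running the GET cell manually. This is illustrative only and assumes the space response exposes its state as `"state":"active"` once ready.
###Code
%%bash
# Illustrative loop (sketch): wait until the space reports the active state.
# Assumes TOKEN, DATAPLATFORM_URL and SPACE_ID are set as above.
for i in 1 2 3 4 5 6; do
    RESP=$(curl -sk -X GET \
        --header "Authorization: Bearer $TOKEN" \
        "$DATAPLATFORM_URL/v2/spaces/$SPACE_ID")
    # Extract the value after "state":" from the raw JSON response (assumed layout).
    STATE=${RESP#*state\":\"}
    STATE=${STATE%%\"*}
    echo "attempt $i: $STATE"
    if [ "$STATE" = "active" ]; then break; fi
    sleep 10
done
###Output
_____no_output_____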
###Markdown
2. Experiment / optimizer configuration Training data connectionDefine the connection information for the COS bucket and the training data npz file. This example uses the MNIST dataset. The dataset can be downloaded from [here](https://s3.amazonaws.com/img-datasets/mnist.npz). You can also download it to the local filesystem by running the cell below. **Action**: Upload the training data to the COS bucket and enter the location information in the next cURL examples.
###Code
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
###Output
_____no_output_____
###Markdown
Get COS tokenRetrieve a COS token for further authentication calls. <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-token" target="_blank" rel="noopener no referrer">Retrieve COS authentication token
###Code
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
###Output
env: COS_TOKEN=eyJraWQiOiIyMDIxMDUyMDE4MzYiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJpYW0tU2VydmljZUlkLTU3MzMwOTM3LWJkNTQtNDYyYi04ODUxLWY3OTFhZDYyYzY1OSIsImlkIjoiaWFtLVNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJyZWFsbWlkIjoiaWFtIiwianRpIjoiZTZhOWJjY2UtM2EzMS00MjhmLThiYjktNjgwNTgxYzAyOGQ5IiwiaWRlbnRpZmllciI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJuYW1lIjoiY3JlZGVudGlhbHNfMSIsInN1YiI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJzdWJfdHlwZSI6IlNlcnZpY2VJZCIsImF1dGhuIjp7InN1YiI6ImlhbS1TZXJ2aWNlSWQtNTczMzA5MzctYmQ1NC00NjJiLTg4NTEtZjc5MWFkNjJjNjU5IiwiaWFtX2lkIjoiaWFtLWlhbS1TZXJ2aWNlSWQtNTczMzA5MzctYmQ1NC00NjJiLTg4NTEtZjc5MWFkNjJjNjU5Iiwic3ViX3R5cGUiOiIxIiwibmFtZSI6ImNyZWRlbnRpYWxzXzEifSwiYWNjb3VudCI6eyJib3VuZGFyeSI6Imdsb2JhbCIsInZhbGlkIjp0cnVlLCJic3MiOiJlMGY3ZWMzYWMxYjI0ZWM5YWU3NzFlZmQ3NzI1MzhhMiIsImZyb3plbiI6dHJ1ZX0sImlhdCI6MTYyMzgzNzE4MSwiZXhwIjoxNjIzODQwNzgxLCJpc3MiOiJodHRwczovL2lhbS5jbG91ZC5pYm0uY29tL2lkZW50aXR5IiwiZ3JhbnRfdHlwZSI6InVybjppYm06cGFyYW1zOm9hdXRoOmdyYW50LXR5cGU6YXBpa2V5Iiwic2NvcGUiOiJpYm0gb3BlbmlkIiwiY2xpZW50X2lkIjoiZGVmYXVsdCIsImFjciI6MSwiYW1yIjpbInB3ZCJdfQ.aN1pGaXxDcCGsoT6AtIsD-IRWSew2rc5S4JNFRT-KI-axDy2P834ZNdqwjwaguXz0tL4N6CQsy-NN8OSUs7Uy0bo8l1wPJUL42tDdKS5Wq5F8redL90wt1RQ2feRzbX18YvcnIrcygZRPZ3HRSSoBhiN0lm3OxXihwCotEf1dPu1z9Vfp1NEv6k1BCHMoHvbtwkxYXaoCZfbEm3Nvss0MKnDzr9-YNcpgLcVjvPpBpMUt1fuKzjNnjHQzy7dGAOlL6nRgqasxx8k7VC4JQ5DaYH7JSPSphbGhkB1Ut46oL4svsXpN7Jx3BUzp67y2pJ5AVXnlImRlhD7R9YKOiPSJw
###Markdown
Upload file to COSUpload your local dataset into your COS bucket <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-put-object" target="_blank" rel="noopener no referrer">Upload file to COS
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____
###Markdown
There should be an empty response when the upload finishes successfully. Create connection to COSCreated connections will be used in training to point to the given COS location.
###Code
%%bash --out connection_payload
CONNECTION_PAYLOAD='{"name": "REST COS connection", "datasource_type": "193a97c1-4475-4a19-b90c-295c4fdc6517", "properties": {"bucket": "'"$COS_BUCKET"'", "access_key": "'"$COS_ACCESS_KEY_ID"'", "secret_key": "'"$COS_SECRET_ACCESS_KEY"'", "iam_url": "'"$AUTH_ENDPOINT"'", "url": "'"$COS_ENDPOINT"'"}, "origin_country": "US"}'
echo $CONNECTION_PAYLOAD | python -m json.tool
%env CONNECTION_PAYLOAD=$connection_payload
%%bash --out connection_id
CONNECTION_ID=$(curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$CONNECTION_PAYLOAD" \
"$DATAPLATFORM_URL/v2/connections?space_id=$SPACE_ID")
CONNECTION_ID=${CONNECTION_ID#*asset_id\":\"}
CONNECTION_ID=${CONNECTION_ID%%\"*}
echo $CONNECTION_ID
%env CONNECTION_ID=$connection_id
###Output
env: CONNECTION_ID=65c9a953-8816-413b-9af7-a264bb5c610e
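###Markdown
To confirm the connection asset was created, you can fetch it back by id. The GET call below simply mirrors the POST endpoint used above; treat it as a sketch rather than a verified API reference.
###Code
%%bash
# Optional check (sketch): read back the newly created connection asset.
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/connections/$CONNECTION_ID?space_id=$SPACE_ID" \
| python -m json.tool
###Output
_____no_output_____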
###Markdown
3. Model definition This section provides samples showing how to store a model definition via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_create" target="_blank" rel="noopener no referrer">Store a model definition for a Deep Learning experiment
###Code
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.8"]}, "command": "python3 mnist_mlp.py"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
###Output
env: MODEL_DEFINITION_ID=7883edea-3751-47bd-9c47-f8fae0b837da
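###Markdown
You can read the stored model definition back to verify its metadata. The GET variant below is a sketch that mirrors the POST endpoint used above.
###Code
%%bash
# Optional check (sketch): fetch the stored model definition by id.
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____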
###Markdown
Model preparationDownload the files with the Keras code. You can either download them via the link below or run the cell below the link. <a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip" target="_blank" rel="noopener no referrer">Download MNIST.zip
###Code
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
###Output
--2020-09-14 15:52:14-- https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving github.com... 140.82.121.4
Connecting to github.com|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip [following]
--2020-09-14 15:52:15-- https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving raw.githubusercontent.com... 151.101.112.133
Connecting to raw.githubusercontent.com|151.101.112.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3836 (3.7K) [application/zip]
Saving to: 'MNIST.zip'
0K ... 100% 7.83M=0s
2020-09-14 15:52:15 (7.83 MB/s) - 'MNIST.zip' saved [3836/3836]
###Markdown
**Tip**: Convert the cell below to code and run it to see the model definition's code.
###Code
!unzip -oqd . MNIST.zip && cat mnist_mlp.py
###Output
_____no_output_____
###Markdown
Upload model for the model definition <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_upload_model" target="_blank" rel="noopener no referrer">Upload model for the model definition
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
###Output
{
"attachment_id": "189ef773-bc16-4a35-bca3-569020476c49",
"content_format": "native",
"persisted": true
}
###Markdown
4. Experiment runThis section provides samples showing how to trigger a Deep Learning experiment via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_create" target="_blank" rel="noopener no referrer">Schedule a training job for a Deep Learning experiment
###Code
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "connection_asset", "connection": {"id": "'"$CONNECTION_ID"'"}, "location": {"bucket": "'"$COS_BUCKET"'", "file_name": "."}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"id": "'"$CONNECTION_ID"'"}, "location": {"bucket": "'"$COS_BUCKET"'", "file_name": "."}, "type": "connection_asset"}, "tags": [{"value": "tags_mnist", "description": "dome MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.4-py3.8"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
###Output
env: TRAINING_ID=7709e2d9-1d6b-4185-91d0-c926cb4b8185
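###Markdown
The `awk`/`cut` pipeline above slices the id out by byte position, which breaks if the response layout changes. A more robust sketch (assuming the create response is JSON with the id under `metadata.id`, as in the responses shown elsewhere in this notebook) parses it instead:
###Code
%%bash
# Illustrative only: extract the id from a JSON response body.
# RESPONSE here is a hard-coded stand-in for the POST response above.
RESPONSE='{"metadata": {"id": "7709e2d9-1d6b-4185-91d0-c926cb4b8185"}, "entity": {}}'
echo "$RESPONSE" | python -c 'import json, sys; print(json.load(sys.stdin)["metadata"]["id"])'
###Output
_____no_output_____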
###Markdown
Get training detailsTraining is an asynchronous endpoint. If you want to monitor the training status and details, you need to use a GET method and specify which training to monitor by its training ID. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_get" target="_blank" rel="noopener no referrer">Get information about training job
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Get training status
###Code
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
###Output
completed
###Markdown
Please make sure that the training is completed before you go to the next sections. Monitor the `state` of your training by running the cell above a couple of times. Get selected modelGet the Keras saved model location in COS from the Deep Learning training job.
###Code
%%bash --out model_name
PATH=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
PATH=${PATH#*logs\":\"}
MODEL_NAME=${PATH%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
###Output
env: MODEL_NAME=training-RuDTVvdMR
###Markdown
5. Historical runsIn this section you will see cURL examples showing how to get information about historical training runs. The output should be similar to the output from training creation, but you should see more training entries. Listing trainings: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_list" target="_blank" rel="noopener no referrer">Get list of historical training jobs information
###Code
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Cancel training run**Tip:** If you want to cancel your training, please convert the cell below to `code`, specify the training ID, and run it. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Canceling training
###Code
%%bash
TRAINING_ID_TO_CANCEL=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
--- 6. Deploy and ScoreIn this section you will learn how to deploy and score a pipeline model as a web service using your WML instance. Before creating a deployment, you need to store your model in the WML repository. The cURL example below shows how to do it. Remember that you need to specify where your chosen model is stored in COS. Store Deep Learning modelStore information about your model in the WML repository. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_create" target="_blank" rel="noopener no referrer">Model storing
###Code
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.4", "software_spec": {"name": "tensorflow_2.4-py3.8"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
###Output
env: MODEL_ID=e5417dbf-77b4-4e72-a47e-0be8aa2ecd83
###Markdown
Download model contentIf you want to download your saved model, please make the following call. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_filtered_download" target="_blank" rel="noopener no referrer">Download model content
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
###Output
-rw-r--r-- 1 jansoltysik staff 4765639 Sep 14 16:00 mnist_cnn.h5.tar.gz
###Markdown
Deployment creationThe creation of a Deep Learning online deployment is presented below. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_create" target="_blank" rel="noopener no referrer">Create deployment
###Code
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
###Output
env: DEPLOYMENT_ID=c7568bf5-9c70-41bf-83c1-ed23fbda2ccc
###Markdown
Get deployment detailsAs the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going to the next steps. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_get" target="_blank" rel="noopener no referrer">Get deployment details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
{
"entity": {
"asset": {
"id": "e5417dbf-77b4-4e72-a47e-0be8aa2ecd83"
},
"custom": {},
"deployed_asset_type": "model",
"description": "This is description",
"hardware_spec": {
"id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b",
"name": "S",
"num_nodes": 1
},
"name": "MNIST deployment",
"online": {},
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527",
"status": {
"online_url": {
"url": "https://yp-qa.ml.cloud.ibm.com/ml/v4/deployments/c7568bf5-9c70-41bf-83c1-ed23fbda2ccc/predictions"
},
"state": "initializing"
}
},
"metadata": {
"created_at": "2020-09-14T14:00:55.027Z",
"description": "This is description",
"id": "c7568bf5-9c70-41bf-83c1-ed23fbda2ccc",
"modified_at": "2020-09-14T14:00:55.027Z",
"name": "MNIST deployment",
"owner": "IBMid-55000091VC",
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527"
}
}
###Markdown
Prepare scoring input data**Hint:** You may need to install numpy using the following command: `!pip install numpy`
###Code
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Scoring of a webserviceIf you want to make a `score` call on your deployment, please follow the method below: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployment%20Jobs/deployment_jobs_create" target="_blank" rel="noopener no referrer">Create deployment job
###Code
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
###Output
{
"predictions": [
{
"fields": [
"prediction",
"prediction_classes",
"probability"
],
"values": [
[
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
],
7,
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
]
],
[
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
],
2,
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
]
]
]
}
]
}
###Markdown
Listing all deployments <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_list" target="_blank" rel="noopener no referrer">List deployments details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
7. Cleaning sectionThe section below is useful when you want to clean up all of your previous work within this notebook. Just convert the cells below to `code` and run them. Delete training run**Tip:** You can completely delete a training run together with its metadata. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Deleting training
###Code
%%bash
TRAINING_ID_TO_DELETE=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01&hard_delete=true"
###Output
_____no_output_____
###Markdown
Deleting deployment**Tip:** You can delete an existing deployment by calling the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_delete" target="_blank" rel="noopener no referrer">Delete deployment
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model from repository**Tip:** If you want to completely remove your stored model and model metadata, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_delete" target="_blank" rel="noopener no referrer">Delete model from repository
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model definition**Tip:** If you want to completely remove your model definition, just use a DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_delete" target="_blank" rel="noopener no referrer">Delete model definition
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
5. Historical runsIn this section you will see cURL examples showing how to get information about historical training runs. The output should be similar to the output from training creation, but you should see more training entries. Listing trainings: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_list" target="_blank" rel="noopener no referrer">Get list of historical training jobs information
###Code
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Cancel training run**Tip:** If you want to cancel your training, please convert the cell below to `code`, specify the training ID, and run it. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Canceling training
###Code
%%bash
TRAINING_ID_TO_CANCEL=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
--- 6. Deploy and ScoreIn this section you will learn how to deploy and score a pipeline model as a web service using your WML instance. Before creating a deployment, you need to store your model in the WML repository. The cURL example below shows how to do it. Remember that you need to specify where your chosen model is stored in COS. Store Deep Learning modelStore information about your model in the WML repository. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_create" target="_blank" rel="noopener no referrer">Model storing
###Code
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.1", "software_spec": {"name": "default_py3.7"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
###Output
env: MODEL_ID=e5417dbf-77b4-4e72-a47e-0be8aa2ecd83
###Markdown
Download model contentIf you want to download your saved model, please make the following call. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_filtered_download" target="_blank" rel="noopener no referrer">Download model content
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
###Output
-rw-r--r-- 1 jansoltysik staff 4765639 Sep 14 16:00 mnist_cnn.h5.tar.gz
###Markdown
Deployment creationThe creation of a Deep Learning online deployment is presented below. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_create" target="_blank" rel="noopener no referrer">Create deployment
###Code
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
###Output
env: DEPLOYMENT_ID=c7568bf5-9c70-41bf-83c1-ed23fbda2ccc
###Markdown
Get deployment detailsAs the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going to the next steps. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_get" target="_blank" rel="noopener no referrer">Get deployment details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
{
"entity": {
"asset": {
"id": "e5417dbf-77b4-4e72-a47e-0be8aa2ecd83"
},
"custom": {},
"deployed_asset_type": "model",
"description": "This is description",
"hardware_spec": {
"id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b",
"name": "S",
"num_nodes": 1
},
"name": "MNIST deployment",
"online": {},
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527",
"status": {
"online_url": {
"url": "https://yp-qa.ml.cloud.ibm.com/ml/v4/deployments/c7568bf5-9c70-41bf-83c1-ed23fbda2ccc/predictions"
},
"state": "initializing"
}
},
"metadata": {
"created_at": "2020-09-14T14:00:55.027Z",
"description": "This is description",
"id": "c7568bf5-9c70-41bf-83c1-ed23fbda2ccc",
"modified_at": "2020-09-14T14:00:55.027Z",
"name": "MNIST deployment",
"owner": "IBMid-55000091VC",
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527"
}
}
###Markdown
Prepare scoring input data**Hint:** You may need to install numpy using the following command: `!pip install numpy`
###Code
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Scoring of a webserviceIf you want to make a `score` call on your deployment, please follow the method below: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployment%20Jobs/deployment_jobs_create" target="_blank" rel="noopener no referrer">Create deployment job
###Code
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
###Output
{
"predictions": [
{
"fields": [
"prediction",
"prediction_classes",
"probability"
],
"values": [
[
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
],
7,
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
]
],
[
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
],
2,
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
]
]
]
}
]
}
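###Markdown
Each row in `values` is `[probabilities, predicted_class, probabilities]`, so the predicted digits can be pulled out with a one-liner. The sketch below assumes the response shown above was saved to a hypothetical `response.json` (for example by adding `| tee response.json` before `python -m json.tool`):
###Code
%%bash
# Illustrative (sketch): print only the predicted classes from a saved response file.
python -c 'import json; r = json.load(open("response.json")); print([row[1] for row in r["predictions"][0]["values"]])'
###Output
_____no_output_____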
###Markdown
Listing all deployments <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_list" target="_blank" rel="noopener no referrer">List deployments details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
7. Cleaning sectionThe section below is useful when you want to clean up all of your previous work within this notebook. Just convert the cells below to `code` and run them. Delete training run**Tip:** You can completely delete a training run together with its metadata. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Deleting training
###Code
%%bash
TRAINING_ID_TO_DELETE=...  # plain bash assignment; IPython's %env magic is not available inside a %%bash cell
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01&hard_delete=true"
###Output
_____no_output_____
###Markdown
Deleting deployment**Tip:** You can delete an existing deployment by calling the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_delete" target="_blank" rel="noopener no referrer">Delete deployment
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model from repository**Tip:** If you want to completely remove your stored model and its metadata, just use the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_delete" target="_blank" rel="noopener no referrer">Delete model from repository
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model definition**Tip:** If you want to completely remove your model definition, just use the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_delete" target="_blank" rel="noopener no referrer">Delete model definition
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Use Keras to recognize hand-written digits with Watson Machine Learning REST API This notebook contains steps and code to demonstrate support of Keras Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring. Some familiarity with cURL is helpful. This notebook uses cURL examples. Learning goals The learning goals of this notebook are:- Working with Watson Machine Learning experiments to train Deep Learning models.- Downloading computed models to local storage.- Online deployment and scoring of a trained model. Contents This notebook contains the following parts:1. [Setup](setup) 2. [Experiment definition](experiment_definition) 3. [Model definition](model_definition) 4. [Experiment Run](run) 5. [Historical runs](runs) 6. [Deploy and Score](deploy_and_score) 7. [Cleaning](cleaning) 8. [Summary and next steps](summary) 1. Set up the environment Before you use the sample code in this notebook, you must perform the following setup tasks:- Create a Watson Machine Learning (WML) Service instance (a free plan is offered and information about how to create the instance can be found here).- Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage can be found here). **Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.** You can find your COS credentials in the COS instance dashboard under the **Service credentials** tab. Go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information. Authenticate the Watson Machine Learning service on IBM Cloud. Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. **NOTE:** You can also get a service-specific API key by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
###Code
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
###Output
_____no_output_____
###Markdown
Getting a WML authorization token for further cURL calls. Example of a cURL call to get a WML token:
###Code
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
###Output
env: TOKEN=eyJraWQiOiIyMDIxMDUyMDE4MzYiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC01NTAwMDA5MVZDIiwiaWQiOiJJQk1pZC01NTAwMDA5MVZDIiwicmVhbG1pZCI6IklCTWlkIiwianRpIjoiMzllMmRiMjQtZjFiNy00MWYyLTg2ODItZGRiZjRlZTlmZjM1IiwiaWRlbnRpZmllciI6IjU1MDAwMDkxVkMiLCJnaXZlbl9uYW1lIjoiV01MIiwiZmFtaWx5X25hbWUiOiJXTUwtQmV0YSIsIm5hbWUiOiJXTUwgV01MLUJldGEiLCJlbWFpbCI6IldNTC1CZXRhQHBsLmlibS5jb20iLCJzdWIiOiJXTUwtQmV0YUBwbC5pYm0uY29tIiwiYXV0aG4iOnsic3ViIjoiV01MLUJldGFAcGwuaWJtLmNvbSIsImlhbV9pZCI6ImlhbS01NTAwMDA5MVZDIiwibmFtZSI6IldNTCBXTUwtQmV0YSIsImdpdmVuX25hbWUiOiJXTUwiLCJmYW1pbHlfbmFtZSI6IldNTC1CZXRhIiwiZW1haWwiOiJXTUwtQmV0YUBwbC5pYm0uY29tIn0sImFjY291bnQiOnsiYm91bmRhcnkiOiJnbG9iYWwiLCJ2YWxpZCI6dHJ1ZSwiYnNzIjoiZTBmN2VjM2FjMWIyNGVjOWFlNzcxZWZkNzcyNTM4YTIiLCJpbXNfdXNlcl9pZCI6IjcxNzY5NDMiLCJmcm96ZW4iOnRydWUsImltcyI6IjE2ODQ3ODMifSwiaWF0IjoxNjIzODM3MTc1LCJleHAiOjE2MjM4NDA3NzUsImlzcyI6Imh0dHBzOi8vaWFtLmNsb3VkLmlibS5jb20vaWRlbnRpdHkiLCJncmFudF90eXBlIjoidXJuOmlibTpwYXJhbXM6b2F1dGg6Z3JhbnQtdHlwZTphcGlrZXkiLCJzY29wZSI6ImlibSBvcGVuaWQiLCJjbGllbnRfaWQiOiJkZWZhdWx0IiwiYWNyIjoxLCJhbXIiOlsicHdkIl19.ddBw_llqrcMW51a-iL6IyexaWGOsnkfH8RucO4gLhCebp6DPOQYD-zMT_iiebMFSxqq-gJ-72oDDQQclEzjp79iHQxrlmRRastkSrRlFYKqiVi-y0l_4Ka7PpCzYz9p5HdnHjfDQsOZIMGcKz-Y4nDZvqeTAnbYOshbgcUJuzO7GyFKKEFtbmi-ttRDu1Ex6fgbLCYTYTkUvRdxcoGIXQwMYnoU75yqr3CHSCAGnwhGDgKu5eoqVKZFV9WQSZMYYQuotxFzyaqXDcJfE71R1W764ajlHYcYykHCL6kSG7ksNNZJGyAjDks_WaU1mF8UBen_iC8_k-axJmx5AJFDQmQ
###Markdown
Space creation**Tip:** If you do not have a `space` created already, please convert the three cells below to `code` and run them. First of all, you need to create a `space` that will be used in all of your further cURL calls; the cURL call below creates one. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_create" target="_blank" rel="noopener no referrer">Space creation
###Code
%%bash --out space_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"name": "curl_DL", "storage": {"type": "bmcos_object_storage", "resource_crn": "'"$COS_CRN"'"}, "compute": [{"name": "'"$WML_INSTANCE_NAME"'", "crn": "'"$WML_INSTANCE_CRN"'", "type": "machine_learning"}]}' \
"$DATAPLATFORM_URL/v2/spaces" \
| grep '"id": ' | awk -F '"' '{ print $4 }'
space_id = space_id.split('\n')[1]
%env SPACE_ID=$space_id
###Output
_____no_output_____
###Markdown
Space creation is asynchronous. This means that you need to check the space creation status after the creation call. Make sure that your newly created space is `active`. <a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud//Spaces/spaces_get" target="_blank" rel="noopener no referrer">Get space information
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID"
###Output
_____no_output_____
###Markdown
2. Experiment / optimizer configuration Training data connection Define the connection information to the COS bucket and the training data npz file. This example uses the MNIST dataset. The dataset can be downloaded from [here](https://s3.amazonaws.com/img-datasets/mnist.npz). You can also download it to the local filesystem by running the cell below. **Action**: Upload the training data to the COS bucket and enter the location information in the next cURL examples.
###Code
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
###Output
_____no_output_____
###Markdown
Get COS token Retrieve a COS token for further authentication calls. <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-token" target="_blank" rel="noopener no referrer">Retrieve COS authentication token
###Code
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
###Output
env: COS_TOKEN=eyJraWQiOiIyMDIxMDUyMDE4MzYiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJpYW0tU2VydmljZUlkLTU3MzMwOTM3LWJkNTQtNDYyYi04ODUxLWY3OTFhZDYyYzY1OSIsImlkIjoiaWFtLVNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJyZWFsbWlkIjoiaWFtIiwianRpIjoiZTZhOWJjY2UtM2EzMS00MjhmLThiYjktNjgwNTgxYzAyOGQ5IiwiaWRlbnRpZmllciI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJuYW1lIjoiY3JlZGVudGlhbHNfMSIsInN1YiI6IlNlcnZpY2VJZC01NzMzMDkzNy1iZDU0LTQ2MmItODg1MS1mNzkxYWQ2MmM2NTkiLCJzdWJfdHlwZSI6IlNlcnZpY2VJZCIsImF1dGhuIjp7InN1YiI6ImlhbS1TZXJ2aWNlSWQtNTczMzA5MzctYmQ1NC00NjJiLTg4NTEtZjc5MWFkNjJjNjU5IiwiaWFtX2lkIjoiaWFtLWlhbS1TZXJ2aWNlSWQtNTczMzA5MzctYmQ1NC00NjJiLTg4NTEtZjc5MWFkNjJjNjU5Iiwic3ViX3R5cGUiOiIxIiwibmFtZSI6ImNyZWRlbnRpYWxzXzEifSwiYWNjb3VudCI6eyJib3VuZGFyeSI6Imdsb2JhbCIsInZhbGlkIjp0cnVlLCJic3MiOiJlMGY3ZWMzYWMxYjI0ZWM5YWU3NzFlZmQ3NzI1MzhhMiIsImZyb3plbiI6dHJ1ZX0sImlhdCI6MTYyMzgzNzE4MSwiZXhwIjoxNjIzODQwNzgxLCJpc3MiOiJodHRwczovL2lhbS5jbG91ZC5pYm0uY29tL2lkZW50aXR5IiwiZ3JhbnRfdHlwZSI6InVybjppYm06cGFyYW1zOm9hdXRoOmdyYW50LXR5cGU6YXBpa2V5Iiwic2NvcGUiOiJpYm0gb3BlbmlkIiwiY2xpZW50X2lkIjoiZGVmYXVsdCIsImFjciI6MSwiYW1yIjpbInB3ZCJdfQ.aN1pGaXxDcCGsoT6AtIsD-IRWSew2rc5S4JNFRT-KI-axDy2P834ZNdqwjwaguXz0tL4N6CQsy-NN8OSUs7Uy0bo8l1wPJUL42tDdKS5Wq5F8redL90wt1RQ2feRzbX18YvcnIrcygZRPZ3HRSSoBhiN0lm3OxXihwCotEf1dPu1z9Vfp1NEv6k1BCHMoHvbtwkxYXaoCZfbEm3Nvss0MKnDzr9-YNcpgLcVjvPpBpMUt1fuKzjNnjHQzy7dGAOlL6nRgqasxx8k7VC4JQ5DaYH7JSPSphbGhkB1Ut46oL4svsXpN7Jx3BUzp67y2pJ5AVXnlImRlhD7R9YKOiPSJw
###Markdown
Upload file to COS Upload your local dataset into your COS bucket. <a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curlcurl-put-object" target="_blank" rel="noopener no referrer">Upload file to COS
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
###Output
_____no_output_____
###Markdown
There should be an empty response when the upload finishes successfully. Create connection to COS The created connection will be used in training to point to the given COS location.
###Code
%%bash --out connection_payload
CONNECTION_PAYLOAD='{"name": "REST COS connection", "datasource_type": "193a97c1-4475-4a19-b90c-295c4fdc6517", "properties": {"bucket": "'"$COS_BUCKET"'", "access_key": "'"$COS_ACCESS_KEY_ID"'", "secret_key": "'"$COS_SECRET_ACCESS_KEY"'", "iam_url": "'"$AUTH_ENDPOINT"'", "url": "'"$COS_ENDPOINT"'"}, "origin_country": "US"}'
echo $CONNECTION_PAYLOAD | python -m json.tool
%env CONNECTION_PAYLOAD=$connection_payload
%%bash --out connection_id
CONNECTION_ID=$(curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$CONNECTION_PAYLOAD" \
"$DATAPLATFORM_URL/v2/connections?space_id=$SPACE_ID")
CONNECTION_ID=${CONNECTION_ID#*asset_id\":\"}
CONNECTION_ID=${CONNECTION_ID%%\"*}
echo $CONNECTION_ID
%env CONNECTION_ID=$connection_id
###Output
env: CONNECTION_ID=65c9a953-8816-413b-9af7-a264bb5c610e
###Markdown
3. Model definition This section shows how to store a model definition via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_create" target="_blank" rel="noopener no referrer">Store a model definition for Deep Learning experiment
###Code
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.7"]}, "command": "python3 mnist_mlp.py"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
###Output
env: MODEL_DEFINITION_ID=7883edea-3751-47bd-9c47-f8fae0b837da
###Markdown
Model preparation Download the files with the Keras code. You can either download them via the link below or run the cell below it. <a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip" target="_blank" rel="noopener no referrer">Download MNIST.zip
###Code
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
###Output
--2020-09-14 15:52:14-- https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving github.com... 140.82.121.4
Connecting to github.com|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip [following]
--2020-09-14 15:52:15-- https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/definitions/keras/mnist/MNIST.zip
Resolving raw.githubusercontent.com... 151.101.112.133
Connecting to raw.githubusercontent.com|151.101.112.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3836 (3.7K) [application/zip]
Saving to: 'MNIST.zip'
0K ... 100% 7.83M=0s
2020-09-14 15:52:15 (7.83 MB/s) - 'MNIST.zip' saved [3836/3836]
###Markdown
**Tip**: Convert the cell below to `code` and run it to see the model definition's code.
###Code
!unzip -oqd . MNIST.zip && cat mnist_mlp.py
###Output
_____no_output_____
###Markdown
Upload model for the model definition <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_upload_model" target="_blank" rel="noopener no referrer">Upload model for the model definition
###Code
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
###Output
{
"attachment_id": "189ef773-bc16-4a35-bca3-569020476c49",
"content_format": "native",
"persisted": true
}
###Markdown
4. Experiment run This section shows how to trigger a Deep Learning experiment via cURL calls. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_create" target="_blank" rel="noopener no referrer">Schedule a training job for Deep Learning experiment
###Code
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "connection_asset", "connection": {"id": "'"$CONNECTION_ID"'"}, "location": {"bucket": "'"$COS_BUCKET"'", "file_name": "."}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"id": "'"$CONNECTION_ID"'"}, "location": {"bucket": "'"$COS_BUCKET"'", "file_name": "."}, "type": "connection_asset"}, "tags": [{"value": "tags_mnist", "description": "dome MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.4-py3.7"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
###Output
env: TRAINING_ID=7709e2d9-1d6b-4185-91d0-c926cb4b8185
###Markdown
Get training details Training is an asynchronous endpoint. If you want to monitor training status and details, you need to use a GET method and specify which training to monitor by its training ID. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_get" target="_blank" rel="noopener no referrer">Get information about training job
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Get training status
###Code
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
###Output
completed
###Markdown
Please make sure that training is completed before you go to the next sections. Monitor the `state` of your training by running the above cell a couple of times. Get selected model Get the Keras saved model location in COS from the Deep Learning training job.
###Code
%%bash --out model_name
PATH=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
PATH=${PATH#*logs\":\"}
MODEL_NAME=${PATH%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
###Output
env: MODEL_NAME=training-RuDTVvdMR
###Markdown
5. Historical runs In this section you will see cURL examples describing how to get information about historical training runs. The output should be similar to the output from training creation, but you should see more training entries. Listing trainings: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_list" target="_blank" rel="noopener no referrer">Get list of historical training jobs information
###Code
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
Cancel training run**Tip:** If you want to cancel your training, please convert the cell below to `code`, specify the training ID, and run it. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Canceling training
###Code
%%bash
TRAINING_ID_TO_CANCEL=...
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
--- 6. Deploy and Score In this section you will learn how to deploy and score the pipeline model as a webservice using your WML instance. Before creating the deployment, you need to store your model in the WML repository. Please see the cURL call example below for how to do it. Remember that you need to specify where your chosen model is stored in COS. Store Deep Learning model Store information about your model in the WML repository. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_create" target="_blank" rel="noopener no referrer">Model storing
###Code
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.4", "software_spec": {"name": "default_py3.7_opence"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
###Output
env: MODEL_ID=e5417dbf-77b4-4e72-a47e-0be8aa2ecd83
###Markdown
Download model content If you want to download your saved model, please make the following call. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_filtered_download" target="_blank" rel="noopener no referrer">Download model content
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
###Output
-rw-r--r-- 1 jansoltysik staff 4765639 Sep 14 16:00 mnist_cnn.h5.tar.gz
###Markdown
Deployment creation The creation of a Deep Learning online deployment is presented below. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_create" target="_blank" rel="noopener no referrer">Create deployment
###Code
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
###Output
env: DEPLOYMENT_ID=c7568bf5-9c70-41bf-83c1-ed23fbda2ccc
###Markdown
Get deployment details As the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going to the next sections. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_get" target="_blank" rel="noopener no referrer">Get deployment details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
{
"entity": {
"asset": {
"id": "e5417dbf-77b4-4e72-a47e-0be8aa2ecd83"
},
"custom": {},
"deployed_asset_type": "model",
"description": "This is description",
"hardware_spec": {
"id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b",
"name": "S",
"num_nodes": 1
},
"name": "MNIST deployment",
"online": {},
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527",
"status": {
"online_url": {
"url": "https://yp-qa.ml.cloud.ibm.com/ml/v4/deployments/c7568bf5-9c70-41bf-83c1-ed23fbda2ccc/predictions"
},
"state": "initializing"
}
},
"metadata": {
"created_at": "2020-09-14T14:00:55.027Z",
"description": "This is description",
"id": "c7568bf5-9c70-41bf-83c1-ed23fbda2ccc",
"modified_at": "2020-09-14T14:00:55.027Z",
"name": "MNIST deployment",
"owner": "IBMid-55000091VC",
"space_id": "d70a423e-bab5-4b24-943a-3b0b29ad7527"
}
}
###Markdown
Prepare scoring input data**Hint:** You may need to install numpy using the following command: `!pip install numpy`
###Code
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Scoring of a webservice If you want to make a `score` call on your deployment, please follow the method below: <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployment%20Jobs/deployment_jobs_create" target="_blank" rel="noopener no referrer">Create deployment job
###Code
%%bash -s "$image_1" "$image_2"
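# the -s flag passes the Python variables image_1 and image_2 into this bash cell as positional arguments $1 and $2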
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
###Output
{
"predictions": [
{
"fields": [
"prediction",
"prediction_classes",
"probability"
],
"values": [
[
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
],
7,
[
2.8162902240315424e-13,
2.634996214279095e-11,
1.940126725941127e-09,
6.535556984488267e-09,
4.630183633421228e-15,
2.0969738532411464e-12,
2.5965000387704643e-19,
1.0,
2.8750441697505957e-12,
4.534635333897086e-09
]
],
[
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
],
2,
[
9.832916815846748e-13,
1.2868907788288197e-06,
0.9999985694885254,
1.0709795361663055e-07,
1.9135524797852587e-17,
3.4615290633865925e-09,
3.791623265358979e-12,
1.2983511835096273e-12,
1.8498602649685836e-09,
2.8920135803820433e-15
]
]
]
}
]
}
###Markdown
Listing all deployments <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_list" target="_blank" rel="noopener no referrer">List deployments details
###Code
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
###Output
_____no_output_____
###Markdown
7. Cleaning section This section is useful when you want to clean up all of your previous work within this notebook. Just convert the cells below into `code` cells and run them. Delete training run**Tip:** You can completely delete a training run together with its metadata. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Trainings/trainings_delete" target="_blank" rel="noopener no referrer">Deleting training
###Code
%%bash
TRAINING_ID_TO_DELETE=...  # plain bash assignment; IPython's %env magic is not available inside a %%bash cell
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01&hard_delete=true"
###Output
_____no_output_____
###Markdown
Deleting deployment**Tip:** You can delete an existing deployment by calling the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Deployments/deployments_delete" target="_blank" rel="noopener no referrer">Delete deployment
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model from repository**Tip:** If you want to completely remove your stored model and its metadata, just use the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Models/models_delete" target="_blank" rel="noopener no referrer">Delete model from repository
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____
###Markdown
Delete model definition**Tip:** If you want to completely remove your model definition, just use the DELETE method. <a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html/Model%20Definitions/model_definitions_delete" target="_blank" rel="noopener no referrer">Delete model definition
###Code
%%bash
curl -sk -X DELETE \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01"
###Output
_____no_output_____ |
video inference/get_keypoint_json-Copy1.ipynb | ###Markdown
Import all Libraries
###Code
# AWS Rekognition to get bbox
import numpy as np
import boto3
from PIL import Image, ImageDraw, ExifTags, ImageColor, ImageFont
from matplotlib import pyplot as plt
from utils.rekognition import determine_color, draw_animal_count
import cv2
import time
import math
import os
import io
import json
from utils.config import *
from utils.fix_annotation import *
# process whole image.py to get key points
import mmcv
from mmcv.parallel import collate, scatter
from mmcv.runner import load_checkpoint
import torch as tr
#from torchvision import transforms
from mmpose.apis import (inference, inference_top_down_pose_model, init_pose_model,
vis_pose_result)
from mmpose.models import build_posenet
from mmpose.datasets.pipelines import Compose
FNT = ImageFont.truetype('/usr/share/fonts/default/Type1/n019004l.pfb', 23)
###Output
_____no_output_____
###Markdown
Get Bounding Boxes from Video Frames
###Code
class LoadImage:
"""A simple pipeline to load image."""
def __init__(self, color_type='color', channel_order='rgb'):
self.color_type = color_type
self.channel_order = channel_order
def __call__(self, results):
"""Call function to load images into results.
Args:
results (dict): A result dict contains the img_or_path.
Returns:
dict: ``results`` will be returned containing loaded image.
"""
if isinstance(results['img_or_path'], str):
results['image_file'] = results['img_or_path']
img = mmcv.imread(results['img_or_path'], self.color_type,
self.channel_order)
elif isinstance(results['img_or_path'], np.ndarray):
results['image_file'] = ''
if self.color_type == 'color' and self.channel_order == 'rgb':
img = cv2.cvtColor(results['img_or_path'], cv2.COLOR_BGR2RGB)
else:
raise TypeError('"img_or_path" must be a numpy array or a str or '
'a pathlib.Path object')
results['img'] = img
return results
def init_pose_model(config, checkpoint=None, device='cuda:0'):
"""Initialize a pose model from config file.
Args:
config (str or :obj:`mmcv.Config`): Config file path or the config
object.
checkpoint (str, optional): Checkpoint path. If left as None, the model
will not load any weights.
Returns:
nn.Module: The constructed detector.
"""
if isinstance(config, str):
config = mmcv.Config.fromfile(config)
elif not isinstance(config, mmcv.Config):
raise TypeError('config must be a filename or Config object, '
f'but got {type(config)}')
config.model.pretrained = None
model = build_posenet(config.model)
if checkpoint is not None:
# load model checkpoint
load_checkpoint(model, checkpoint, map_location=device)
# save the config in the model for convenience
model.cfg = config
model.to(device)
model.eval()
return model
def _box2cs(cfg, box):
"""This encodes bbox(x,y,w,h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- np.ndarray[float32](2,): Center of the bbox (x, y).
- np.ndarray[float32](2,): Scale of the bbox w & h.
"""
x, y, w, h = box[:4]
input_size = cfg.data_cfg['image_size']
aspect_ratio = input_size[0] / input_size[1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
scale = scale * 1.25
return center, scale
def process_model(model, dataset, person_results, img_or_path):
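    # Top-down pose inference: encode each bbox as (center, scale), run it through the
    # model's test pipeline, collate into a batch and score in a single forward pass.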
bboxes = np.array([box['bbox'] for box in person_results])
cfg = model.cfg
flip_pairs = None
device = next(model.parameters()).device
channel_order = cfg.test_pipeline[0].get('channel_order', 'rgb')
test_pipeline = [LoadImage(channel_order=channel_order)] + cfg.test_pipeline[1:]
test_pipeline = Compose(test_pipeline)
if dataset == 'AnimalHorse10Dataset':
flip_pairs = []
else:
raise NotImplementedError()
batch_data = []
for bbox in bboxes:
center, scale = _box2cs(cfg, bbox)
# prepare data
data = {
'img_or_path':
img_or_path,
'center':
center,
'scale':
scale,
'bbox_score':
bbox[4] if len(bbox) == 5 else 1,
'bbox_id':
0, # need to be assigned if batch_size > 1
'dataset':
dataset,
'joints_3d':
np.zeros((cfg.data_cfg.num_joints, 3), dtype=np.float32),
'joints_3d_visible':
np.zeros((cfg.data_cfg.num_joints, 3), dtype=np.float32),
'rotation':
0,
'ann_info': {
'image_size': np.array(cfg.data_cfg['image_size']),
'num_joints': cfg.data_cfg['num_joints'],
'flip_pairs': flip_pairs
}
}
data = test_pipeline(data)
batch_data.append(data)
batch_data = collate(batch_data, samples_per_gpu=1)
if next(model.parameters()).is_cuda:
# scatter not work so just move image to cuda device
batch_data['img'] = batch_data['img'].to(device)
# get all img_metas of each bounding box
batch_data['img_metas'] = [
img_metas[0] for img_metas in batch_data['img_metas'].data
]
with tr.no_grad():
result = model(
img=batch_data['img'],
#img = torch_data,
img_metas=batch_data['img_metas'],
return_loss=False,
return_heatmap=False)
return result['preds'], result['output_heatmap']
# device = tr.device("cuda:0" if tr.cuda.is_available() else "cpu")
# model_head = init_pose_model(config='../myConfigs/train_head_resnet.py', checkpoint='../temp_logs/cattle_head/resnet/best.pth', device = device)
# model_spine = init_pose_model(config='../myConfigs/train_spine_hrnet.py', checkpoint='../temp_logs/cattle_spine/hrnet/best.pth', device = device)
# model_tail = init_pose_model(config='../myConfigs/train_tail_ori_hrnet.py', checkpoint='../temp_logs/cattle_tail_ori/hrnet/best.pth', device = device)
# model_leg_front = init_pose_model(config='../myConfigs/train_leg_front_hrnet.py', checkpoint='../temp_logs/cattle_leg_front/hrnet/best.pth', device = device)
# model_leg_back = init_pose_model(config='../myConfigs/train_leg_back_hrnet.py', checkpoint='../temp_logs/cattle_leg_back/hrnet/best.pth', device = device)
# dataset_head = model_head.cfg.data['test']['type']
# dataset_spine = model_spine.cfg.data['test']['type']
# dataset_tail = model_tail.cfg.data['test']['type']
# dataset_leg_front = model_leg_front.cfg.data['test']['type']
# dataset_leg_back = model_leg_back.cfg.data['test']['type']
device = tr.device("cuda:0" if tr.cuda.is_available() else "cpu")
model_head = init_pose_model(config='../myConfigs/horse/train_head_resnet.py', checkpoint='../temp_logs/cattle_head/resnet/best.pth', device = device)
model_spine = init_pose_model(config='../myConfigs/horse/train_spine_resnet.py', checkpoint='../temp_logs/cattle_spine/hrnet/best.pth', device = device)
model_tail = init_pose_model(config='../myConfigs/horse/train_tail_ori_resnet.py', checkpoint='../temp_logs/cattle_tail_ori/hrnet/best.pth', device = device)
model_leg_front = init_pose_model(config='../myConfigs/horse/train_leg_front_resnet.py', checkpoint='../temp_logs/cattle_leg_front/hrnet/best.pth', device = device)
model_leg_back = init_pose_model(config='../myConfigs/horse/train_leg_back_resnet.py', checkpoint='../temp_logs/cattle_leg_back/hrnet/best.pth', device = device)
dataset_head = model_head.cfg.data['test']['type']
dataset_spine = model_spine.cfg.data['test']['type']
dataset_tail = model_tail.cfg.data['test']['type']
dataset_leg_front = model_leg_front.cfg.data['test']['type']
dataset_leg_back = model_leg_back.cfg.data['test']['type']
def get_kp_color(label):
    # fallback color as an RGB tuple; the per-part palettes below are hex color strings used by PIL
    color = (0, 0, 255)
if label == 'Head':
color = ['#EC51F8', '#74F54B',
'#EC51F8', '#74F54B',
'#4394F9', '#F49736',
'#F49736', '#FFFB56',
'#FFFB56', '#4394F9',
'#07178D']
elif label == 'Spine':
color = ['#4394F9', '#4394F9', '#4394F9',
'#4394F9', '#4394F9', '#4394F9',
'#4394F9', '#4394F9', '#24518D']
elif label == 'Tail':
color = ['#EC51F8', '#EC51F8',
'#EC51F8', '#EC51F8',
'#EC51F8', '#892B8E']
elif label == 'Leg_front':
color = ['#F49736', '#F49736',
'#F49736', '#F49736',
'#F49736', '#F49736',
'#F49736', '#F49736',
'#F49736', '#8C551E']
elif label == 'Leg_back':
color = ['#74F54B', '#74F54B',
'#74F54B', '#74F54B',
'#74F54B', '#74F54B',
'#74F54B', '#74F54B',
'#74F54B', '#3F8D28',]
return color
def get_skeleton(label):
skeleton_list = []
if label == 'Head':
skeleton_list = [[4, 0], [4, 2], [0, 2], [1, 3],
[5, 6], [7, 8], [0, 1], [1, 5],
[5, 7], [7, 9], [2, 3], [3, 6],
[6, 8], [8, 9], [4, 9]]
elif label == 'Spine':
skeleton_list = [[0, 1], [1, 2], [2, 3], [3, 4],
[4, 5], [5, 6], [6, 7]]
elif label == 'Tail':
skeleton_list = [[0, 1], [1, 2], [2, 3], [3, 4]]
elif label == 'Leg_front':
skeleton_list = [[0, 1], [1, 2], [2, 3], [3, 4],
[4, 5], [5, 6], [6, 7], [7, 8]]
elif label == 'Leg_back':
skeleton_list = [[0, 1], [1, 2], [2, 3], [3, 4],
[4, 5], [5, 6], [6, 7], [7, 8]]
return skeleton_list
def rgb_to_bgr(color):
color = list(color)
temp_r = color[0]
color[0] = color[2]
color[2] = temp_r
return tuple(color)
def vis_pose(points, draw, label):
points = points[0]
# if label == 'Tail' or label == 'Leg_front' or label == 'Leg_back':
# print(label)
# print(points)
# if label == 'Leg_front' or label == 'Leg_back':
# return draw
CS_THR = 0.4
# keypoints
kp_color = get_kp_color(label)
# connect line
skeleton_list = get_skeleton(label)
for ske in skeleton_list:
#print(points)
fir_pt_x, fir_pt_y, fir_pt_p = points[ske[0]]
sec_pt_x, sec_pt_y, sec_pt_p = points[ske[1]]
if fir_pt_p > CS_THR and sec_pt_p > CS_THR:
shape = [(fir_pt_x, fir_pt_y), (sec_pt_x, sec_pt_y)]
draw.line(shape, fill=kp_color[-1], width=8)
for i, point in enumerate(points):
x, y, p = point
if p > CS_THR:
x = int(x)
y = int(y)
draw.ellipse([(x-11, y-11), (x+11, y+11)], fill=kp_color[-1], outline=None)
draw.ellipse([(x-6, y-6), (x+6, y+6)], fill=kp_color[i], outline=None)
#draw.text((x-40, y-40), '{}%'.format(int(p*100)), font=FNT, fill=(255, 255, 255))
return draw
def extend_bbox(left, top, width, height, extend_rate):
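    # Grow a (left, top, width, height) box by extend_rate; as written, the corner is
    # shifted by a fraction of its own coordinate and the size is scaled up accordingly.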
temp_left = left - left * extend_rate
temp_top = top - top * extend_rate
temp_width = width * extend_rate + width
temp_height = height * extend_rate + height
return temp_left, temp_top, temp_width, temp_height
def get_color(name):
if name == 'Cow':
color = '#FF9300'
elif name == 'Head' or name == "Head Left" or name == "Head Right":
color = '#0096FF'
elif name == 'Tag':
color = '#00FFFF'
elif name == 'Knee':
color = '#FFFB00'
elif name == 'Hoof':
color = '#00F900'
elif name == 'Tail':
color = '#FF40FF'
elif name == 'Side Left' or name == 'Side Right':
color = '#FF2600'
elif name == 'Udder':
color = '#9437FF'
elif name == 'Teat':
color = '#FF2F92'
else:
color = '#000000'
return color
def get_opacity(name):
if name == 'Cow':
opacity = 0.3
elif name == 'Tag':
opacity = 0.3
elif name == 'Head' or name == "Head Left" or name == "Head Right":
opacity = 0.45
elif name == 'Knee':
opacity = 0.3
elif name == 'Hoof':
opacity = 0.35
elif name == 'Tail':
opacity = 0.3
elif name == 'Side Left' or name == 'Side Right':
opacity = 0.3
elif name == 'Udder':
opacity = 0.35
elif name == 'Teat':
opacity = 0.3
else:
opacity = 0.0
return opacity
def get_confidence_cut_off(name):
if name == 'Cow':
confidence = 79.4
elif name == 'Tag':
confidence = 86.9
elif name == 'Head':
confidence = 92.5
elif name == 'Knee':
confidence = 78.0
elif name == 'Hoof':
confidence = 92.9
elif name == 'Tail':
confidence = 73.5
elif name == 'Udder':
confidence = 35.0
elif name == 'Teat':
confidence = 73.0
else:
confidence = 80.0
return confidence
tail_count = 0
#draw response
def draw_response(image, response, animal_target, draw_boundary=True, fill=True, draw_btn=True):
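    # Overlay detections on one frame: draw every custom label whose confidence passes
    # its cut-off, then run the matching part-specific pose model (head, spine, legs,
    # tail) inside that box and draw its keypoints; returns the drawn frame as an array.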
global tail_count
tail_check = False
temp_image = image.copy()
b, g, r = image.split()
image = Image.merge("RGB", (r, g, b))
# original image size
draw = ImageDraw.Draw(image, mode='RGBA')
# bbox
for customLabel in response['CustomLabels']:
if 'Geometry' in customLabel:
box = customLabel['Geometry']['BoundingBox']
left, top, width, height = extend_bbox(box['Left'], box['Top'], box['Width'], box['Height'], 0)
label = customLabel['Name']
if label == 'Udder':
print('Udder')
elif label == 'Teat':
print('Teat')
conf_cut = get_confidence_cut_off(label)
# skip current label
if customLabel['Confidence'] < conf_cut:
continue
#draw bbox
color = get_color(label)
opacity = round(get_opacity(label) * 255)
if draw_boundary and fill:
draw.rectangle(xy=[(left, top), (left+width, top+height)], outline=color, fill=color+f'{opacity:0>2X}', width=3)
elif fill:
draw.rectangle(xy=[(left, top), (left+width, top+height)], outline=None, fill=color+f'{opacity:0>2X}', width=3)
elif draw_boundary:
draw.rectangle(xy=[(left, top), (left+width, top+height)], outline=color, fill=None, width=3)
if draw_btn:
text_width, text_height = FNT.getsize(label)
draw.rectangle(xy=[(left, top), (left+text_width, top+text_height)], outline=None, fill=color, width=3)
draw.text((left, top), label, fill='#000000', font=FNT)
#keypoints
for customLabel in response['CustomLabels']:
if 'Geometry' in customLabel:
box = customLabel['Geometry']['BoundingBox']
left, top, width, height = extend_bbox(box['Left'], box['Top'], box['Width'], box['Height'], 0)
label = customLabel['Name']
conf_cut = get_confidence_cut_off(label)
# skip current label
if customLabel['Confidence'] < conf_cut:
continue
#***** Keypoints
if label == 'Head':
extend_rate = 0.01
np_image = np.array(temp_image)
head_bbox = list(extend_bbox(box['Left'], box['Top'], box['Width'], box['Height'], extend_rate))
head_result = []
head_result.append({'bbox': head_bbox})
preds, _ = process_model(model_head, dataset_head, head_result, np_image)
draw = vis_pose(preds, draw, 'Head')
elif label == 'Cow':
extend_rate = 0.01
np_image = np.array(temp_image)
cow_bbox = list(extend_bbox(box['Left'], box['Top'], box['Width'], box['Height'], extend_rate))
cow_result = []
cow_result.append({'bbox': cow_bbox})
# spine
preds, _ = process_model(model_spine, dataset_spine, cow_result, np_image)
draw = vis_pose(preds, draw, 'Spine')
# leg front
preds, _ = process_model(model_leg_front, dataset_leg_front, cow_result, np_image)
draw = vis_pose(preds, draw, 'Leg_front')
# leg back
preds, _ = process_model(model_leg_back, dataset_leg_back, cow_result, np_image)
draw = vis_pose(preds, draw, 'Leg_back')
elif label == 'Tail':
extend_rate = 0.20
np_image = np.array(temp_image)
tail_bbox = list(extend_bbox(box['Left'], box['Top'], box['Width'], box['Height'], extend_rate))
tail_result = []
tail_result.append({'bbox': tail_bbox})
preds, _ = process_model(model_tail, dataset_tail, tail_result, np_image)
draw = vis_pose(preds, draw, 'Tail')
tail_check = True
#*****
img = np.asarray(image)[:,:,::-1].copy()
inferred_frame = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
return inferred_frame
def analyzeVideo(src_video, src_bbox_json, src_img_dir, output_file, fps=5):
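    # Read the per-frame bounding-box JSON and the pre-extracted frame images, draw the
    # detections and keypoints on every frame, and write the result out as an mp4 video.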
start = time.time()
#imgWidth, imgHeight = image.size
with Image.open(src_img_dir+'0.jpg') as img:
imgWidth, imgHeight = img.size
imgSize = (imgWidth, imgHeight)
img.close()
cap = cv2.VideoCapture(src_video)
    frameRate = cap.get(cv2.CAP_PROP_FPS)  # frame rate; the fps argument's default (5) is the numeric value of cv2.CAP_PROP_FPS
print('FrameRate:', frameRate)
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
videoWriter = cv2.VideoWriter(output_file, fourcc, frameRate, imgSize)
with open(src_bbox_json) as bbox_json:
bbox_frames = json.load(bbox_json)
for frameId, bbox_data in enumerate(bbox_frames['Frames']):
# get each image frame
with Image.open(src_img_dir+str(frameId)+'.jpg') as img:
inferred_frame = draw_response(img, bbox_data, animal_target='cow')
inferred_frame = cv2.cvtColor(inferred_frame, cv2.COLOR_BGR2RGB)
# check each 50 frame
if frameId % 50 == 0:
print("Finish Processing {} frame".format(frameId))
plt.imshow(inferred_frame)
plt.title("Frame {}".format(int(frameId)))
plt.savefig('debug_imgs/check_{}.jpg'.format(frameId), dpi=200)
lap = time.time()
print('lap time: ', lap - start)
videoWriter.write(inferred_frame)
img.close()
videoWriter.release()
cv2.destroyAllWindows()
bbox_json.close()
#end time
end = time.time()
print('total time lapse', end - start)
#, 'cattle_multi_1'
video_name_list = ['cattle_multi_1']
video_format = ['.mov']
for v_idx, video in enumerate(video_name_list):
src_video = 'video_data/input_video/'+video+video_format[v_idx]
src_bbox_json = 'json_data/'+video+'_new_bbox.json'
src_img_dir = 'frame_img/'+video+'/'
output_video = 'video_data/inferred_video/inferred_fixed_'+video+'.mp4'
print(output_video)
analyzeVideo(src_video, src_bbox_json, src_img_dir, output_video)
print('finished analyzing the video '+video)
print()
###Output
video_data/inferred_video/inferred_fixed_cattle_multi_1.mp4
FrameRate: 30.006466910972193
Finish Processing 0 frame
lap time: 3.503347635269165
Finish Processing 50 frame
lap time: 109.85330510139465
Finish Processing 100 frame
lap time: 226.45562505722046
Finish Processing 150 frame
lap time: 333.6754539012909
Finish Processing 200 frame
lap time: 435.8921248912811
Finish Processing 250 frame
lap time: 540.8767602443695
Finish Processing 300 frame
lap time: 666.1351451873779
Finish Processing 350 frame
lap time: 785.4227914810181
Finish Processing 400 frame
lap time: 889.8876004219055
Finish Processing 450 frame
lap time: 1017.3974685668945
Finish Processing 500 frame
lap time: 1160.2743360996246
Finish Processing 550 frame
lap time: 1297.996387720108
Finish Processing 600 frame
lap time: 1404.5304102897644
Finish Processing 650 frame
lap time: 1485.1052141189575
Finish Processing 700 frame
lap time: 1569.8113861083984
Finish Processing 750 frame
lap time: 1669.618066072464
Finish Processing 800 frame
lap time: 1800.6987073421478
Finish Processing 850 frame
lap time: 1930.0983774662018
Finish Processing 900 frame
lap time: 2078.31814289093
Finish Processing 950 frame
lap time: 2175.210619211197
Finish Processing 1000 frame
lap time: 2239.294077396393
Finish Processing 1050 frame
lap time: 2356.8199532032013
Finish Processing 1100 frame
lap time: 2534.4868001937866
Finish Processing 1150 frame
lap time: 2717.6881663799286
Finish Processing 1200 frame
lap time: 2875.3263347148895
Finish Processing 1250 frame
lap time: 3054.7248158454895
Finish Processing 1300 frame
lap time: 3200.637379169464
Finish Processing 1350 frame
lap time: 3325.262978076935
Finish Processing 1400 frame
lap time: 3514.033369779587
Finish Processing 1450 frame
lap time: 3701.9489555358887
Finish Processing 1500 frame
lap time: 3859.1659972667694
Finish Processing 1550 frame
lap time: 4017.1473603248596
Udder
Finish Processing 1600 frame
lap time: 4154.535991430283
Finish Processing 1650 frame
lap time: 4294.26210641861
Finish Processing 1700 frame
lap time: 4415.1077399253845
Finish Processing 1750 frame
lap time: 4619.905319452286
Finish Processing 1800 frame
lap time: 4859.329822778702
Finish Processing 1850 frame
lap time: 5039.599325895309
total time lapse 5055.3730454444885
finished analyzing the video cattle_multi_1
|
docs/examples/notebooks/example-tensorflow.ipynb | ###Markdown
TensorFlow
###Code
%pylab inline
import pyhf
from pyhf import Model
from pyhf.simplemodels import uncorrelated_background
import tensorflow as tf
source = {
"binning": [2, -0.5, 1.5],
"bindata": {
"data": [120.0, 180.0],
"bkg": [100.0, 150.0],
"bkgerr": [10.0, 10.0],
"sig": [30.0, 95.0],
},
}
pdf = uncorrelated_background(
source['bindata']['sig'], source['bindata']['bkg'], source['bindata']['bkgerr']
)
data = source['bindata']['data'] + pdf.config.auxdata
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
print('---\nas tensorflow\n-----')
import tensorflow as tf
pyhf.tensorlib = pyhf.tensorflow_backend()
v = pdf.logpdf(init_pars, data)
pyhf.tensorlib.session = tf.Session()
print(type(v), pyhf.tensorlib.tolist(v))
from pathlib import Path
tf.summary.FileWriter(Path.cwd(), pyhf.tensorlib.session.graph)
x = tf.Variable([1.0])
y = tf.Variable([2.0])
z = x**2 * y + y**3 * x
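# tf.hessians returns one Hessian (matrix of second partial derivatives) of z per listed tensor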
hessian = tf.hessians(z, [x, y])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(hessian)
x = tf.cast([1.0], tf.float64)
y = tf.cast([2.0], tf.float64)
z = x**2 * y + y**3 * x
hessian = tf.hessians(z, [x, y])
sess = tf.Session()
sess.run(hessian)
###Output
_____no_output_____
###Markdown
TensorFlow
###Code
%pylab inline
import pyhf
from pyhf import Model
from pyhf.simplemodels import hepdata_like
import tensorflow as tf
source = {
"binning": [2, -0.5, 1.5],
"bindata": {
"data": [120.0, 180.0],
"bkg": [100.0, 150.0],
"bkgerr": [10.0, 10.0],
"sig": [30.0, 95.0],
},
}
pdf = hepdata_like(
source['bindata']['sig'], source['bindata']['bkg'], source['bindata']['bkgerr']
)
data = source['bindata']['data'] + pdf.config.auxdata
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
print('---\nas tensorflow\n-----')
import tensorflow as tf
pyhf.tensorlib = pyhf.tensorflow_backend()
v = pdf.logpdf(init_pars, data)
pyhf.tensorlib.session = tf.Session()
print(type(v), pyhf.tensorlib.tolist(v))
from pathlib import Path
tf.summary.FileWriter(Path.cwd(), pyhf.tensorlib.session.graph)
x = tf.Variable([1.0])
y = tf.Variable([2.0])
z = x ** 2 * y + y ** 3 * x
hessian = tf.hessians(z, [x, y])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(hessian)
x = tf.cast([1.0], tf.float64)
y = tf.cast([2.0], tf.float64)
z = x ** 2 * y + y ** 3 * x
hessian = tf.hessians(z, [x, y])
sess = tf.Session()
sess.run(hessian)
###Output
_____no_output_____
###Markdown
TensorFlow
###Code
%pylab inline
import pyhf
from pyhf import Model
from pyhf.simplemodels import uncorrelated_background
import tensorflow as tf
source = {
"binning": [2, -0.5, 1.5],
"bindata": {
"data": [120.0, 180.0],
"bkg": [100.0, 150.0],
"bkgerr": [10.0, 10.0],
"sig": [30.0, 95.0],
},
}
pdf = uncorrelated_background(
source['bindata']['sig'], source['bindata']['bkg'], source['bindata']['bkgerr']
)
data = source['bindata']['data'] + pdf.config.auxdata
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
print('---\nas tensorflow\n-----')
import tensorflow as tf
pyhf.tensorlib = pyhf.tensorflow_backend()
v = pdf.logpdf(init_pars, data)
pyhf.tensorlib.session = tf.Session()
print(type(v), pyhf.tensorlib.tolist(v))
from pathlib import Path
tf.summary.FileWriter(Path.cwd(), pyhf.tensorlib.session.graph)
x = tf.Variable([1.0])
y = tf.Variable([2.0])
z = x ** 2 * y + y ** 3 * x
hessian = tf.hessians(z, [x, y])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(hessian)
x = tf.cast([1.0], tf.float64)
y = tf.cast([2.0], tf.float64)
z = x ** 2 * y + y ** 3 * x
hessian = tf.hessians(z, [x, y])
sess = tf.Session()
sess.run(hessian)
###Output
_____no_output_____
###Markdown
TensorFlow
###Code
%pylab inline
import pyhf
from pyhf import Model
from pyhf.simplemodels import hepdata_like
import tensorflow as tf
source = {
"binning": [2,-0.5,1.5],
"bindata": {
"data": [120.0, 180.0],
"bkg": [100.0, 150.0],
"bkgerr": [10.0, 10.0],
"sig": [30.0, 95.0]
}
}
pdf = hepdata_like(source['bindata']['sig'], source['bindata']['bkg'], source['bindata']['bkgerr'])
data = source['bindata']['data'] + pdf.config.auxdata
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
print '---\nas tensorflow\n-----'
import tensorflow as tf
pyhf.tensorlib = pyhf.tensorflow_backend()
v = pdf.logpdf(init_pars,data)
pyhf.tensorlib.session = tf.Session()
print type(v),pyhf.tensorlib.tolist(v)
from pathlib import Path
tf.summary.FileWriter(Path.cwd(), pyhf.tensorlib.session.graph)
x = tf.Variable([1.])
y = tf.Variable([2.])
z = x**2*y + y**3*x
hessian = tf.hessians(z,[x,y])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(hessian)
x = tf.cast([1.], tf.float64)
y = tf.cast([2.], tf.float64)
z = x**2*y + y**3*x
hessian = tf.hessians(z,[x,y])
sess = tf.Session()
sess.run(hessian)
###Output
_____no_output_____
###Markdown
TensorFlow
###Code
%pylab inline
import pyhf
from pyhf import Model
from pyhf.simplemodels import hepdata_like
import tensorflow as tf
source = {
"binning": [2,-0.5,1.5],
"bindata": {
"data": [120.0, 180.0],
"bkg": [100.0, 150.0],
"bkgerr": [10.0, 10.0],
"sig": [30.0, 95.0]
}
}
pdf = hepdata_like(source['bindata']['sig'], source['bindata']['bkg'], source['bindata']['bkgerr'])
data = source['bindata']['data'] + pdf.config.auxdata
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
print '---\nas tensorflow\n-----'
import tensorflow as tf
pyhf.tensorlib = pyhf.tensorflow_backend()
v = pdf.logpdf(init_pars,data)
pyhf.tensorlib.session = tf.Session()
print type(v),pyhf.tensorlib.tolist(v)
import os
tf.summary.FileWriter(os.getcwd(), pyhf.tensorlib.session.graph)
x = tf.Variable([1.])
y = tf.Variable([2.])
z = x**2*y + y**3*x
hessian = tf.hessians(z,[x,y])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(hessian)
x = tf.cast([1.], tf.float64)
y = tf.cast([2.], tf.float64)
z = x**2*y + y**3*x
hessian = tf.hessians(z,[x,y])
sess = tf.Session()
sess.run(hessian)
###Output
_____no_output_____ |
raw/Python4Maths-master/Intro-to-Python/03_python4math.ipynb | ###Markdown
All of these Python notebooks are available at [https://gitlab.erc.monash.edu.au/andrease/Python4Maths.git] Data Structures In simple terms, a data structure is a collection or group of data organized in a particular way. Lists Lists are the most commonly used data structure. Think of a list as a sequence of data enclosed in square brackets, with the items separated by commas. Each item can be accessed by calling its index value. Lists are declared by just equating a variable to '[ ]' or list.
###Code
a = []
type(a)
###Output
_____no_output_____
###Markdown
One can directly assign the sequence of data to a list x as shown.
###Code
x = ['apple', 'orange']
###Output
_____no_output_____
###Markdown
Indexing In Python, indexing starts from 0, as already seen for strings. Thus the list x, which has two elements, will have 'apple' at index 0 and 'orange' at index 1.
###Code
x[0]
###Output
_____no_output_____
###Markdown
Indexing can also be done in reverse order, that is, the last element can be accessed first. Here, indexing starts from -1. Thus index -1 will be 'orange' and index -2 will be 'apple'.
###Code
x[-1]
###Output
_____no_output_____
###Markdown
As you might have already guessed, x[0] = x[-2] and x[1] = x[-1]. This concept can be extended to lists with many more elements. A quick check confirms the equivalence:
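###Code
# negative and positive indices refer to the same elements
print(x[0] == x[-2], x[1] == x[-1])
###Output
True True
###Markdown
Now let us declare a second list.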
###Code
y = ['carrot','potato']
###Output
_____no_output_____
###Markdown
Here we have declared two lists, x and y, each containing its own data. Now these two lists can again be put into another list, say z, which will have two lists as its data. This list inside a list is called a nested list, and it is how an array would be declared, which we will see later.
###Code
z = [x,y]
print( z )
###Output
[['apple', 'orange'], ['carrot', 'potato']]
###Markdown
Indexing in nested lists can be quite confusing if you do not understand how indexing works in Python, so let us break it down. Let us access the element 'orange' in the above nested list. At index 0 there is the list ['apple','orange'], and at index 1 there is the list ['carrot','potato']. Hence z[0] gives us the first list, which contains 'apple' and 'orange'; from this list we take the second element (index 1) to get 'orange'.
###Code
print(z[0][1])
###Output
orange
###Markdown
Lists do not have to be homogeneous. Each element can be of a different type:
###Code
["this is a valid list",2,3.6,(1+2j),["a","sublist"]]
###Output
_____no_output_____
###Markdown
Slicing Indexing is limited to accessing a single element; slicing, on the other hand, accesses a sequence of data inside the list. In other words, it "slices" the list. Slicing is done by giving the index of the first element wanted and the index just past the last element wanted (the end index is exclusive). It is written as parentlist[ a : b ], where a and b are index values in the parent list. If a is omitted the slice starts from the first element, and if b is omitted it extends to the end of the list.
###Code
num = [0,1,2,3,4,5,6,7,8,9]
print(num[0:4])
print(num[4:])
###Output
[0, 1, 2, 3]
[4, 5, 6, 7, 8, 9]
###Markdown
You can also slice a parent list with a step length, taking every n-th element.
###Code
num[:9:3]
###Output
_____no_output_____
###Markdown
Built in List Functions To find the length of the list or the number of elements in a list, **len( )** is used.
###Code
len(num)
###Output
_____no_output_____
###Markdown
If the list consists of all integer elements then **min( )** and **max( )** give the minimum and maximum values in the list. Similarly, **sum( )** gives the sum of all elements.
###Code
print("min =",min(num)," max =",max(num)," total =",sum(num))
max(num)
###Output
_____no_output_____
###Markdown
Lists can be concatenated with the '+' operator. The resulting list contains all the elements of the lists that were added; it is not a nested list.
###Code
[1,2,3] + [5,4,7]
###Output
_____no_output_____
###Markdown
You may sometimes need to check whether a particular element is present in a list. Consider the list below.
###Code
names = ['Earth','Air','Fire','Water']
###Output
_____no_output_____
###Markdown
To check whether 'Fire' or 'Space' is present in the list names, a conventional approach would be to iterate over the list with a for loop and an if condition. But in Python you can use the 'a in b' construct, which returns True if a is present in b and False if not.
###Code
'Fire' in names
'Space' in names
###Output
_____no_output_____
###Markdown
In a list with string elements, **max( )** and **min( )** are still applicable and return the maximum/minimum element in lexicographical order.
###Code
mlist = ['bzaa','ds','nc','az','z','klm']
print("max =",max(mlist))
print("min =",min(mlist))
###Output
max = z
min = az
###Markdown
Strings are compared character by character starting from the first, so 'z' (largest first character) is the maximum and 'az' (smallest first character) is the minimum. But what if numbers are declared as strings?
###Code
nlist = ['1','94','93','1000']
print("max =",max(nlist))
print('min =',min(nlist))
###Output
max = 94
min = 1
###Markdown
Even when numbers are written as strings, comparison is by their first characters, so the maximum and minimum are determined lexicographically rather than numerically. But if you want to find the **max( )** string element based on its length, the parameter `key` can be used to specify the function that generates the value to compare on. Hence finding the longest and shortest strings in `mlist` can be done using the `len` function:
###Code
print('longest =',max(mlist, key=len))
print('shortest =',min(mlist, key=len))
###Output
longest = bzaa
shortest = z
###Markdown
Any other built-in or user-defined function can be used. A string can be converted into a list by using the **list()** function, or more usefully using the **split()** method, which breaks strings up based on spaces.
###Code
print(list('hello world !'),'Hello World !!'.split())
###Output
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd', ' ', '!'] ['Hello', 'World', '!!']
###Markdown
**append( )** is used to add a single element at the end of the list.
###Code
lst = [1,1,4,8,7]
lst.append(1)
print(lst)
###Output
[1, 1, 4, 8, 7, 1]
###Markdown
Appending a list to a list would create a sublist. If a nested list is not what is desired then the **extend( )** function can be used.
###Code
lst.extend([10,11,12])
print(lst)
###Output
[1, 1, 4, 8, 7, 1, 10, 11, 12]
###Markdown
**count( )** counts the number of occurrences of a particular element in the list.
###Code
lst.count(1)
###Output
_____no_output_____
###Markdown
**index( )** is used to find the index value of a particular element. Note that if there are multiple elements of the same value then the first index value of that element is returned.
###Code
lst.index(1)
###Output
_____no_output_____
###Markdown
**insert(x,y)** is used to insert an element y at a specified index value x (recall that **append( )** can only add to the end).
###Code
lst.insert(5, 'name')
print(lst)
###Output
[1, 1, 4, 8, 7, 'name', 1, 10, 11, 12]
###Markdown
**insert(x,y)** inserts without replacing an element. To replace an element with another, simply assign the new value to that index.
###Code
lst[5] = 'Python'
print(lst)
###Output
[1, 1, 4, 8, 7, 'Python', 1, 10, 11, 12]
###Markdown
The **pop( )** function removes and returns the last element of the list. This mirrors the operation of a stack, so lists can be used as a stack (see the sketch below).
###Code
lst.pop()
###Output
_____no_output_____
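###Markdown
For instance, here is a minimal stack sketch using **append( )** to push and **pop( )** to pop; this is an illustrative addition operating on a fresh list.
###Code
stack = []
stack.append('a')   # push
stack.append('b')   # push
top = stack.pop()   # pop returns the most recently pushed element
print(top, stack)
###Output
b ['a']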
###Markdown
An index value can be specified to pop the element at that position.
###Code
lst.pop(0)
###Output
_____no_output_____
###Markdown
**pop( )** removes an element by its index value and returns it, so it can be assigned to a variable. One can also remove an element by specifying the element itself, using the **remove( )** function.
###Code
lst.remove('Python')
print(lst)
###Output
[1, 4, 8, 7, 1, 10, 11]
###Markdown
An alternative to the **remove( )** function that works by index value is **del**.
###Code
del lst[1]
print(lst)
###Output
[1, 8, 7, 1, 10, 11]
###Markdown
The order of the elements in the list can be reversed using the **reverse()** function.
###Code
lst.reverse()
print(lst)
###Output
[11, 10, 1, 7, 8, 1]
###Markdown
Note that if the list contained a nested list, the nested list would be treated as a single element of the parent list, so the elements inside it would not be reversed. Python offers the built-in method **sort( )** to arrange the elements in ascending order. Alternatively, **sorted( )** can be used to construct a sorted copy of the list.
###Code
lst.sort()
print(lst)
print(sorted([3,2,1])) # another way to sort
###Output
[1, 1, 7, 8, 10, 11]
[1, 2, 3]
###Markdown
For descending order: by default the keyword argument reverse is False, and setting it to True arranges the elements in descending order.
###Code
lst.sort(reverse=True)
print(lst)
###Output
[11, 10, 8, 7, 1, 1]
###Markdown
Similarly, for lists containing string elements, **sort( )** sorts the elements based on their ASCII values, in ascending order, or in descending order when reverse=True is specified.
###Code
names.sort()
print(names)
names.sort(reverse=True)
print(names)
###Output
['Air', 'Earth', 'Fire', 'Water']
['Water', 'Fire', 'Earth', 'Air']
###Markdown
To sort based on length key=len should be specified as shown.
###Code
names.sort(key=len)
print(names)
print(sorted(names,key=len,reverse=True))
###Output
['Air', 'Fire', 'Water', 'Earth']
['Water', 'Earth', 'Fire', 'Air']
###Markdown
Copying a list Assignment of a list does not imply copying; it simply creates a second reference to the same list. Many new Python programmers get caught out by this initially. Consider the following:
###Code
lista= [2,1,4,3]
listb = lista
print(listb)
###Output
[2, 1, 4, 3]
###Markdown
Here we have declared a list, lista = [2,1,4,3], and assigned it to listb, which appears to copy it. Now we perform some operations on lista.
###Code
lista.sort()
lista.pop()
lista.append(9)
print("A =",lista)
print("B =",listb)
###Output
A = [1, 2, 3, 9]
B = [1, 2, 3, 9]
###Markdown
listb has also changed, though no operation was performed on it. This is because both names refer to the same list in memory. So how do we fix this? Recall from slicing that parentlist[a:b] returns a new list, and that omitting a and b defaults to the first and last elements. We use the same idea here: slicing the whole list creates a new list containing the same data, which we assign to listb.
###Code
lista = [2,1,4,3]
listb = lista[:] # make a copy by taking a slice from beginning to end
print("Starting with:")
print("A =",lista)
print("B =",listb)
lista.sort()
lista.pop()
lista.append(9)
print("Finnished with:")
print("A =",lista)
print("B =",listb)
###Output
Starting with:
A = [2, 1, 4, 3]
B = [2, 1, 4, 3]
Finished with:
A = [1, 2, 3, 9]
B = [2, 1, 4, 3]
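###Markdown
Note that slicing makes a shallow copy: the outer list is new, but any nested lists inside it are still shared. For fully independent copies of nested lists, the standard library's **copy.deepcopy( )** can be used. The following is an illustrative addition:
###Code
import copy
nested = [[1, 2], [3, 4]]
shallow = nested[:]            # new outer list, shared inner lists
deep = copy.deepcopy(nested)   # fully independent copy
nested[0].append(99)           # mutate an inner list
print("shallow =", shallow)
print("deep    =", deep)
###Output
shallow = [[1, 2, 99], [3, 4]]
deep    = [[1, 2], [3, 4]]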
###Markdown
List comprehension A very powerful concept in Python (one that also applies to tuples, sets and dictionaries, as we will see below) is the ability to define lists using a list-comprehension (looping) expression. For example:
###Code
[i**2 for i in [1,2,3]]
###Output
_____no_output_____
###Markdown
As can be seen this constructs a new list by taking each element of the original `[1,2,3]` and squaring it. We can have multiple such implied loops to get for example:
###Code
[10*i+j for i in [1,2,3] for j in [5,7]]
###Output
_____no_output_____
###Markdown
Finally the looping can be filtered using an **if** expression with the **for** - **in** construct.
###Code
[10*i+j for i in [1,2,3] if i%2==1 for j in [4,5,7] if j >= i+4] # keep odd i and j larger than i+3 only
###Output
_____no_output_____
###Markdown
Tuples Tuples are similar to lists; the one big difference is that the elements of a list can be changed, while the elements of a tuple cannot (tuples are immutable). Think of a tuple as a fixed record of values that belong together and must not change. For a better understanding, recall the **divmod()** function.
###Code
xyz = divmod(10,3)
print(xyz)
print(type(xyz))
###Output
(3, 1)
<class 'tuple'>
###Markdown
Here the quotient has to be 3 and the remainder 1; these values cannot change when 10 is divided by 3, hence divmod returns them in a tuple. To define a tuple, a variable is assigned to parentheses ( ) or tuple( ).
###Code
tup = ()
tup2 = tuple()
###Output
_____no_output_____
###Markdown
A one-element tuple can be declared directly by placing a comma after the value.
###Code
27,
###Output
_____no_output_____
###Markdown
27 multiplied by 2 yields 54, but multiplying the tuple (27,) by 2 repeats the data twice.
###Code
2*(27,)
###Output
_____no_output_____
###Markdown
Values can be assigned while declaring a tuple: **tuple( )** takes a list as input and converts it into a tuple, or takes a string and converts it into a tuple.
###Code
tup3 = tuple([1,2,3])
print(tup3)
tup4 = tuple('Hello')
print(tup4)
###Output
(1, 2, 3)
('H', 'e', 'l', 'l', 'o')
###Markdown
It follows the same indexing and slicing as Lists.
###Code
print(tup3[1])
tup5 = tup4[:3]
print(tup5)
###Output
2
('H', 'e', 'l')
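###Markdown
Indexing and slicing read from a tuple, but item assignment is not allowed: since tuples are immutable, it raises a TypeError. A small illustrative addition:
###Code
t = (1, 2, 3)
try:
    t[0] = 99   # item assignment is not supported on tuples
except TypeError as e:
    print("TypeError:", e)
###Output
TypeError: 'tuple' object does not support item assignment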
###Markdown
Mapping one tuple to another Tuples can be used as the left-hand side of assignments and are matched to the corresponding right-hand side elements, assuming they have the right length.
###Code
(a,b,c)= ('alpha','beta','gamma') # parentheses are optional
a,b,c= 'alpha','beta','gamma' # The same as the above
print(a,b,c)
a,b,c = ['Alpha','Beta','Gamma'] # can assign lists
print(a,b,c)
[a,b,c]=('this','is','ok') # even this is OK
print(a,b,c)
###Output
alpha beta gamma
Alpha Beta Gamma
this is ok
###Markdown
More complex nested unpackings of values are also possible.
###Code
(w,(x,y),z)=(1,(2,3),4)
print(w,x,y,z)
(w,xy,z)=(1,(2,3),4)
print(w,xy,z) # notice that xy is now a tuple
###Output
1 2 3 4
1 (2, 3) 4
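###Markdown
A classic use of tuple unpacking is swapping two variables without a temporary variable (an illustrative addition):
###Code
a, b = 1, 2
a, b = b, a   # the right-hand side tuple is built first, then unpacked
print(a, b)
###Output
2 1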
###Markdown
Built In Tuple functions The **count()** function counts the number of occurrences of the specified element in the tuple.
###Code
d=tuple('a string with many "a"s')
d.count('a')
###Output
_____no_output_____
###Markdown
The **index()** function returns the index of the specified element. If the element occurs more than once, the index of its first occurrence is returned.
###Code
d.index('a')
###Output
_____no_output_____
###Markdown
Sets Sets are mainly used to eliminate repeated values in a sequence/list, and to perform standard set operations. Sets are declared with set(), which initializes an empty set; `set([sequence])` can also be executed to declare a set with elements.
###Code
set1 = set()
print(type(set1))
set0 = set([1,2,2,3,3,4])
set0 = {1,2,2,3,3,4} # equivalent to the above
print(set0)
###Output
{1, 2, 3, 4}
###Markdown
The elements 2 and 3, which were repeated, appear only once; in a set each element is distinct. However, be warned that **{}** is **NOT** an empty set but an empty dictionary (see the next chapter of this tutorial).
###Code
type({})
###Output
_____no_output_____
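###Markdown
As with lists, the comprehension syntax also works for sets, and duplicates collapse automatically (an illustrative addition):
###Code
print({x**2 for x in [1, -1, 2, -2]})   # the squares of 1/-1 and of 2/-2 coincide
###Output
{1, 4}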
###Markdown
Built-in Functions
###Code
set1 = set([1,2,3])
set2 = set([2,3,4,5])
###Output
_____no_output_____
###Markdown
The **union( )** function returns a set which contains all the elements of both sets, without repetition.
###Code
set1.union(set2)
###Output
_____no_output_____
###Markdown
**add( )** adds an element to the set. Note that sets are unordered, so the new element may appear anywhere when the set is displayed, not necessarily at the end.
###Code
set1.add(0)
set1
###Output
_____no_output_____
###Markdown
**intersection( )** function outputs a set which contains all the elements that are in both sets.
###Code
set1.intersection(set2)
###Output
_____no_output_____
###Markdown
The **difference( )** function outputs a set which contains the elements that are in set1 and not in set2.
###Code
set1.difference(set2)
###Output
_____no_output_____
###Markdown
The **symmetric_difference( )** function outputs a set which contains the elements that are in exactly one of the two sets (but not in both).
###Code
set2.symmetric_difference(set1)
###Output
_____no_output_____
###Markdown
**issubset( ), isdisjoint( ), issuperset( )** are used to check whether set1/set2 is a subset of, disjoint from, or a superset of set2/set1 respectively.
###Code
set1.issubset(set2)
set2.isdisjoint(set1)
set2.issuperset(set1)
###Output
_____no_output_____
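###Markdown
Each of these methods also has an operator form, which is often more readable: `|` for union, `&` for intersection, `-` for difference and `^` for symmetric difference. An illustrative addition:
###Code
a = {1, 2, 3}
b = {2, 3, 4, 5}
print(a | b)   # union
print(a & b)   # intersection
print(a - b)   # difference
print(a ^ b)   # symmetric difference
###Output
{1, 2, 3, 4, 5}
{2, 3}
{1}
{1, 4, 5}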
###Markdown
**pop( )** removes and returns an arbitrary element of the set.
###Code
set1.pop()
print(set1)
###Output
{1, 2, 3}
###Markdown
**remove( )** function deletes the specified element from the set.
###Code
set1.remove(2)
set1
###Output
_____no_output_____
###Markdown
**clear( )** is used to clear all the elements and make that set an empty set.
###Code
set1.clear()
set1
###Output
_____no_output_____
notebooks/tutorials/audio_signal_clustering/K_means_audio.ipynb | ###Markdown
Audio Features
- Zero Crossing Rate
- Energy
- Entropy of Energy
- Spectral Centroid
- Spectral Spread
- Spectral Entropy
- Spectral Flux
- Spectral Rolloff
- MFCC
- Chroma Vector
- Chroma Deviation

Utility function to extract features from wav files
* Pre-process the data and transform double-channel (stereo) audio to a single channel
* Extract the chromagram from the audio file
* Find note frequencies
###Code
# audioFeatureExtraction.stFeatureExtraction?
def preProcess(fileName):
# Extracting wav file data
[Fs, x] = audioBasicIO.readAudioFile(fileName); #Fs - Framerate/ sampling frequency in Hz
# If double/stereo channel data then take mean
if( len( x.shape ) > 1 and x.shape[1] == 2 ):
x = np.mean( x, axis = 1, keepdims = True )
else:
x = x.reshape( x.shape[0], 1 )
# Extract the raw chromagram data; the expected dimension is [ m, ], not [ m, 1 ]
F, f_names = audioFeatureExtraction.stFeatureExtraction(
x[ :, 0 ], #signal
Fs, #the sampling freq (in Hz)
0.050*Fs, #the short-term window size (in samples)
0.025*Fs #the short-term window step (in samples)
)
return (f_names, F)
feature_name, features = preProcess( "./sample_data_audio/c1_2.wav" )
print("Avalable features: ", feature_name)
print("-------------------------------------------------")
print("Total features : ", len(feature_name))
print("Shapes of features: ", features.shape)
###Output
Avalable features: ['zcr', 'energy', 'energy_entropy', 'spectral_centroid', 'spectral_spread', 'spectral_entropy', 'spectral_flux', 'spectral_rolloff', 'mfcc_1', 'mfcc_2', 'mfcc_3', 'mfcc_4', 'mfcc_5', 'mfcc_6', 'mfcc_7', 'mfcc_8', 'mfcc_9', 'mfcc_10', 'mfcc_11', 'mfcc_12', 'mfcc_13', 'chroma_1', 'chroma_2', 'chroma_3', 'chroma_4', 'chroma_5', 'chroma_6', 'chroma_7', 'chroma_8', 'chroma_9', 'chroma_10', 'chroma_11', 'chroma_12', 'chroma_std']
-------------------------------------------------
Total features : 34
Shapes of features: (34, 229)
###Markdown
Chroma Vector It is a representation of how humans relate colors to notes: we perceive the same note from two different octaves as having the same "color". Thus there are 12 possible pitch classes at each window: A, A#, B, C, C#, D, D#, E, F, F#, G and G#. These are not mutually exclusive, so a given time frame can contain more than one note; to keep things simple we select only the most prominent note for each window. Let us visualize the chroma vector using a chromagram.
###Code
print("Feature 21: ", feature_name[21])
print("Feature 21 shape: ", features[21].shape)
def getChromagram(audioData):
# start with chroma_1 (feature index 21)
temp_data = audioData[21].reshape(1, audioData[21].shape[0])
chronograph = temp_data
# looping through the next 11 stacking them vertically
for i in range(22, 33): # chroma_2 to chroma_12 (feature indices 22..32)
temp_data = audioData[i].reshape(1, audioData[i].shape[0])
chronograph = np.vstack([chronograph, temp_data])
return chronograph
chromagram = getChromagram(features)
chromagram.shape
def getNoteFrequency(chromagram):
# Total number of time frames in the current sample
numberOfWindows = chromagram.shape[1] # number of time frames (e.g. 229 here), not 12
# Taking the note with the highest amplitude
freqVal = chromagram.argmax( axis = 0 )
# Converting the freqVal vs time to freq count
histogram, bin_edges = np.histogram( freqVal, bins = 12 )
# Normalizing the distribution by the number of time frames
normalized_hist = histogram.reshape( 1, 12 ).astype( float ) / numberOfWindows
return normalized_hist
noteFrequency = getNoteFrequency(chromagram)
noteFrequency.shape
###Output
_____no_output_____
###Markdown
Utility function to plot a heat map and frequency of each note
###Code
def plotHeatmap(chromagraph, smallSample = True):
notesLabels = ["G#", "G", "F#", "F", "E", "D#", "D", "C#", "C", "B", "A#", "A"]
fig, axis = plt.subplots()
if smallSample:
im = axis.imshow(chromagraph[:, 0 : 25], cmap = "YlGn") # use the function argument, not the global
else:
im = axis.imshow(chromagraph)
cbar = axis.figure.colorbar(im, ax = axis) # the colorbar inherits its colormap from im
cbar.ax.set_ylabel("Amplitude", rotation=-90, va="bottom")
axis.set_yticks(np.arange(len(notesLabels)))
axis.set_yticklabels(notesLabels)
axis.set_title("chromagram")
fig.tight_layout()
_ = plt.show()
plotHeatmap(chromagram)
###Output
_____no_output_____
###Markdown
As we can see, there is indeed more than one note being hit in the same time window.
###Code
def noteFrequencyPlot(noteFrequency, smallSample=True):
fig, axis = plt.subplots(1, 1, sharey=True)
axis.plot(np.arange(1, 13), noteFrequency[0, :])
_ = plt.show()
noteFrequencyPlot(noteFrequency)
###Output
_____no_output_____
###Markdown
For example, in this image the 12th note was hit the most compared to the other notes. So finally we are left with a (1x12) vector representing a data point; this forms the basic unit of our data set. Dataset generator This function iterates over all the available files and converts each one into a note-frequency array, which is our feature set for each audio file.
###Code
fileList = []
def getDataset(filePath):
X = pd.DataFrame()
columns=["G#", "G", "F#", "F", "E", "D#", "D", "C#", "C", "B", "A#", "A"]
for root, dirs, filenames in os.walk(filePath):
for file in sorted(filenames):
print("Workin on file: ", file)
fileList.append(file)
feature_name, features = preProcess(filePath + file)
chromagram = getChromagram(features)
noteFrequency = getNoteFrequency(chromagram) # 1x12
x_new = pd.Series(noteFrequency[0, :])
X = pd.concat([X, x_new], axis=1) # 12 x 10 ; 10-files
data = X.T.copy() # 10 x 12
data.columns = columns
data.index = [i for i in range(0, data.shape[0])]
return data
data = getDataset("./sample_data_audio/")
###Output
Working on file: c1_1.wav
Working on file: c1_2.wav
Working on file: c2_1.wav
Working on file: c2_2.wav
Working on file: c2_3.wav
Working on file: c2_4.wav
Working on file: c3_1.wav
Working on file: c3_2.wav
Working on file: c4_1.wav
Working on file: c4_2.wav
###Markdown
A peek into the dataset. Each row represents a file and each column represents the frequency distribution of notes. In my case, I have 10 files in total.
###Code
data
###Output
_____no_output_____
###Markdown
K-means explanation The algorithm in itself is pretty simple: 1. Initialize all k centroids. 2. Loop steps 3 and 4 for a given number of epochs. 3. Label each data point with the closest centroid. 4. Recalculate the centroids by taking the mean of all the data points with the same label. $$p = a_1i + b_1j + ... + l_1t \\ q = a_2i + b_2j + ... + l_2t \\ distance(p,q) = \sqrt{(a_1-a_2)^2 + (b_1-b_2)^2 + ... + (l_1-l_2)^2}$$ Now, to visualize what K-means is doing in each iteration, let us consider the data set to be 2D and let k = 3. The algorithm first initializes 3 random centroids. Then the training loop starts and, in each iteration, it paints each point in the data set with the color of the centroid closest (least distance) to it. When that is done, new centroids are calculated by taking the mean of the points with the same color. (An animation in the original notebook shows the algorithm at work.) A minimal NumPy sketch of steps 3 and 4 is given after the hyper-parameters below. Hyper-parameters
###Code
# Number of cluster we wish to divide the data into( user tunable )
k = 4
# Max number of allowed iterations for the algorithm( user tunable )
epochs = 2000
###Output
_____no_output_____
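###Markdown
Before building the TensorFlow version, here is a minimal NumPy sketch of one iteration (steps 3 and 4 above), assuming Euclidean distance, with `X` an (n, d) NumPy array and `centroids` a (k, d) array. This sketch is an illustrative addition, not part of the original pipeline.
###Code
import numpy as np

def kmeans_step(X, centroids):
    # step 3: squared Euclidean distance from every point to every centroid -> shape (n, k)
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)   # nearest centroid per point
    # step 4: each new centroid is the mean of the points assigned to it
    # (an empty cluster yields NaN here, which is why the initialization below matters)
    new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(len(centroids))])
    return labels, new_centroids
###Output
_____no_output_____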
###Markdown
K-means utility functions
###Code
def initilizeCentroids(data, k):
'''
Initialize cluster centroids.
Given data and k, return the first k data points; these act as the initial centroids.
'''
centroids = data[:k] # these are supposed to be random points; for simplicity we take them from the data. If you get NaN, go for random!
# centroids = np.random.rand(k, data.shape[1]) # random initialization can still give NaN (empty clusters), so the files were sorted to provide good initial points
return centroids
centroids = initilizeCentroids(data, k)
centroids
X = tf.placeholder(dtype=tf.float32, shape=data.shape) # 10x12
C = tf.placeholder(dtype=tf.float32, shape=(k,data.shape[1])) # kx12
C_labels = tf.placeholder(dtype=tf.int32)
# utility to assign centroids to examples
# https://www.coursera.org/lecture/neural-networks-deep-learning/broadcasting-in-python-uBuTv
expanded_vectors = tf.expand_dims(X, 0) # 1X10x12
expanded_centroids = tf.expand_dims(C, 1) # kx1x12
print(expanded_vectors)
print(expanded_centroids)
###Output
Tensor("ExpandDims:0", shape=(1, 10, 12), dtype=float32)
Tensor("ExpandDims_1:0", shape=(4, 1, 12), dtype=float32)
###Markdown
$$p = a_1i + b_1j + ... + l_1t \\ q = a_2i + b_2j + ... + l_2t \\ distance(p,q) = \sqrt{(a_1-a_2)^2 + (b_1-b_2)^2 + ... + (l_1-l_2)^2}$$ We wish to subtract each data point from each centroid to find the distance between them, then select for each data point the centroid giving the least distance. Thus we perform (Kx10) distance calculations, processing 12 features per vector in each. Notice that the 3rd dimension is the same for both (12). Broadcasting copies each data point K times along the 1st dimension and each centroid 10 times along the 2nd dimension, so subtracting the two gives (Kx10) elements, each with 12 features in the 3rd dimension, which the reduce_sum on axis=2 then collapses into squared distances.
###Code
distance = tf.reduce_sum(tf.square(tf.subtract(expanded_vectors, expanded_centroids)), axis = 2) # kx10
getCentroidsOp = tf.argmin(distance, 0)
print(distance)
print(getCentroidsOp)
###Output
Tensor("Sum:0", shape=(4, 10), dtype=float32)
Tensor("ArgMin:0", shape=(10,), dtype=int64)
###Markdown
https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_sum
###Code
# utility to recalculate centroids
# This adds up the feature vectors of the data points with same label id. Will end up giving K feature vectors.
sums = tf.unsorted_segment_sum(X, C_labels, k) # K
# This will do the same thing as the above step but instead of adding the feature vectors of data points, we add ones.
#This will be used to take the mean of the K feature vectors generated in the above step finally giving us the new centroids.
counts = tf.unsorted_segment_sum(tf.ones_like(X), C_labels, k )
# Dividing the two tensors to generate the new centroids.
reCalculateCentroidsOp = tf.divide(sums, counts)
print(sums)
print(counts)
print(reCalculateCentroidsOp)
###Output
Tensor("UnsortedSegmentSum:0", dtype=float32)
Tensor("UnsortedSegmentSum_1:0", dtype=float32)
Tensor("truediv:0", dtype=float32)
###Markdown
Driver function
###Code
centroids = []
data_labels = []
with tf.Session() as sess:
# Initialize all TensorFlow variables
sess.run(tf.global_variables_initializer())
# Get initial list of k centroids
centroids = initilizeCentroids(data, k)
for epoch in tqdm(range(epochs)):
data_labels = sess.run(getCentroidsOp, feed_dict={X:data, C:centroids})
centroids = sess.run( reCalculateCentroidsOp, feed_dict={X:data, C_labels:data_labels})
print(data_labels)
print("------------------------")
print(centroids)
final_labels = pd.DataFrame( { "Labels": data_labels, "File Names": fileList } )
final_labels
###Output
_____no_output_____