Columns: `path` (string, length 7 to 265 characters) and `concatenated_notebook` (string, length 46 characters to 17M).
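###Markdown A minimal sketch of how this two-column table can be summarized once loaded; the only assumption here is that it sits in a pandas DataFrame named `df` with the `path` and `concatenated_notebook` columns described above (the loading call itself depends on how the table is stored and is not shown).
###Code
# Group the flattened notebooks by their top-level directory and report how large
# each `concatenated_notebook` string is. Assumes `df` is a pandas DataFrame with
# the two columns described above; the loading step is deliberately omitted.
import pandas as pd

def summarize_notebooks(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["n_chars"] = out["concatenated_notebook"].str.len()   # 46 characters up to ~17M per the column stats
    out["top_dir"] = out["path"].str.split("/").str[0]        # e.g. "notebooks", "robrisk", "matplotlib"
    return out.groupby("top_dir")["n_chars"].agg(["count", "median", "max"])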
notebooks/basic-model/002-schelling.ipynb
###Markdown CSCS530 Winter 2015 Complex Systems 530 - Computer Modeling of Complex Systems (Winter 2015) * Course ID: CMPLXSYS 530 * Course Title: Computer Modeling of Complex Systems * Term: Winter 2015 * Schedule: Wednesdays and Friday, 1:00-2:30PM ET * Location: 120 West Hall (http://www.lsa.umich.edu/cscs/research/computerlab) * Teachers: [Mike Bommarito](https://www.linkedin.com/in/bommarito) and [Sarah Cherng](https://www.linkedin.com/pub/sarah-cherng/35/1b7/316) [View this repository on NBViewer](http://nbviewer.ipython.org/github/mjbommar/cscs-530-w2015/tree/master/) Schelling Model (basic) __TODO__: Describe Schelling model and reference Think Complexity chapter. Imports In this model, we'll be importing a few extra libraries that we haven't seen before: * [copy](https://docs.python.org/2/library/copy.html) * [itertools](https://docs.python.org/2/library/itertools.html) ###Code %matplotlib inline # Imports import copy import itertools import numpy import matplotlib.pyplot as plt import pandas import seaborn; seaborn.set() # Import widget methods from IPython.html.widgets import * ###Output _____no_output_____ ###Markdown Building a grid In the sample below, we'll create a simple square grid and fill the grid with households. The parameters below will guide our model as follows: * __``grid_size``__: the number of cells per row or column; the total number of cells is $grid\_size^2$. * __``group_proportion``__: the percentage of households that will be of type 1 * __``density``__: the percentage of grid cells that will be populated with a household The logic for our grid initialization can be described as follows: * For each cell in every row and column * Draw a random value on $[0, 1)$ and compare to $density$ to determine if we will fill this cell * If the cell will be filled, draw a random value on $[0, 1)$ and compare to $group\_proportion$ to determine whether the household will be 1 or 2 ###Code # Set parameters grid_size = 20 group_proportion = 0.25 density = 0.5 # Create the space and activate random cells space = numpy.zeros((grid_size, grid_size), dtype=numpy.int8) # Now sample the agents. for row_id in range(grid_size): for col_id in range(grid_size): # Determine if this cell will be populated if numpy.random.random() <= density: # Determine this cell's initial group if numpy.random.random() <= group_proportion: cell_type = 1 else: cell_type = 2 # Set the space space[row_id, col_id] = cell_type # Now show the space f = plt.figure() p = plt.pcolor(space, snap=True) c = plt.colorbar() ###Output _____no_output_____ ###Markdown Initialization method Below, we wrap the test method above in a method named ``initialize_space``. We need to setup the following parameters: * __``grid_size``__: number of cells in each row or column * __``group_proportion``__: percentage of initial population that will be of group 1 * __``density``__: percentage of cells that will be occupied in the space ###Code def initialize_space(grid_size, group_proportion, density): """ Initialize a space. """ # Create the space and activate random cells space = numpy.zeros((grid_size, grid_size), dtype=numpy.int8) # Now sample the agents. 
for row_id in range(grid_size): for col_id in range(grid_size): # Determine if this cell will be populated if numpy.random.random() <= density: # Determine this cell's initial group if numpy.random.random() <= group_proportion: cell_type = 1 else: cell_type = 2 # Set the cell space[row_id, col_id] = cell_type return space ###Output _____no_output_____ ###Markdown Testing out space initialization Let's test out our ``initialize_space`` method by visualizing for given parameters below. ###Code # Set parameters grid_size = 10 group_proportion = 0.25 happy_proportion = 0.5 density = 0.5 window = 1 def display_space(grid_size=10, group_proportion=0.5, density=0.5): # Check assert(grid_size > 1) assert(group_proportion >= 0.0) assert(group_proportion <= 1.0) assert(density >= 0.0) assert(density <= 1.0) # Initialize space space = initialize_space(grid_size, group_proportion, density) # Plot f = plt.figure() p = plt.pcolor(space) c = plt.colorbar() # Setup widget interact(display_space, grid_size=IntSliderWidget(min=2, max=100, value=10), group_proportion=FloatSliderWidget(min=0.0, max=1.0, value=0.5), density=FloatSliderWidget(min=0.0, max=1.0, value=0.5)) # Pick a random household household_list = numpy.column_stack(numpy.where(space > 0)) household_id = numpy.random.choice(range(len(household_list))) # Check if the household is happy row, col = household_list[household_id] household_type = space[row, col] # Get the set of positions with grid wrapping for neighbors neighbor_pos = [(x % grid_size, y % grid_size) for x, y in itertools.product(range(row-window, row+window+1), range(col-window, col+window+1))] neighborhood = numpy.reshape([space[x, y] for x, y in neighbor_pos], (2*window+1, 2*window+1)) # Count the number of neighbors of same type neighbor_count = len(numpy.where(neighborhood == household_type)[0]) - 1 neighbor_fraction = float(neighbor_count) / ((2 * window + 1) **2 - 1) # Output counts print("Household type: {0}".format(household_type)) print("Neighborhood:") print(neighborhood) print("Number of similar neighbors:") print(neighbor_count) print("Fraction of similar neighbors:") print(neighbor_fraction) def run_model_step(space, happy_proportion, window): """ Run a step of the model. """ space = copy.copy(space) grid_size = space.shape[0] # Get list of empty and occupied household_list = numpy.column_stack(numpy.where(space > 0)) # Pick a random house household_id = numpy.random.choice(range(len(household_list))) # Check if the household is happy row, col = household_list[household_id] household_type = space[row, col] # Get the set of positions with grid wrapping for neighbors neighbor_pos = [(x % grid_size, y % grid_size) for x, y in itertools.product(range(row-window, row+window+1), range(col-window, col+window+1))] neighborhood = numpy.reshape([space[x, y] for x, y in neighbor_pos], (2*window+1, 2*window+1)) # Count the number of neighbors of same type neighbor_count = len(numpy.where(neighborhood == household_type)[0]) - 1 neighbor_fraction = float(neighbor_count) / ((2 * window + 1) **2 - 1) # If the house is unhappy, move. 
if neighbor_fraction < happy_proportion: # Get empty cells empty_list = numpy.column_stack(numpy.where(space == 0)) # Get empty target cell target_cell_id = numpy.random.choice(range(len(empty_list))) target_row, target_col = empty_list[target_cell_id] # Move the agent space[row, col] = 0 space[target_row, target_col] = household_type return space # Set parameters grid_size = 50 group_proportion = 0.33 happy_proportion = 0.33 density = 0.5 window = 3 max_steps = 100000 # Initialize space space = initialize_space(grid_size, group_proportion, density) # Setup space space_history = [space] # Iterate for i in range(max_steps): # Append step history space_history.append(run_model_step(space_history[-1], happy_proportion, window)) def display_space_step(step=1): f = plt.figure() plt.pcolor(space_history[step]) ax = f.gca() ax.set_aspect(1./ax.get_data_ratio()) interact(display_space_step, step=IntSliderWidget(min=1, max=len(space_history)-1, step=1)) ###Output _____no_output_____ ###Markdown Automate simulation ###Code %%time def run_model_simulation(grid_size = 50, group_proportion = 0.33, happy_proportion = 0.33, density = 0.5, window = 1, max_steps = 100000): """ Run a full model simulation. """ # Initialize space space = initialize_space(grid_size, group_proportion, density) # Setup space space_history = [space] # Iterate for i in range(max_steps): # Append step history space_history.append(run_model_step(space_history[-1], happy_proportion, window)) return space_history # Run the simulation and output space_history = run_model_simulation(grid_size=25, happy_proportion=0.25, window=1, max_steps=10000) interact(display_space_step, step=IntSliderWidget(min=1, max=len(space_history)-1, step=1)) def get_neighbor_distribution(space, window=1): """ Get distribution of neighbor fractions. """ fractions = numpy.full(space.shape, numpy.nan) grid_size = space.shape[0] # Get a measure of clustering for row in range(grid_size): for col in range(grid_size): # Check if cell is occupied if space[row, col] == 0: continue else: household_type = space[row, col] neighbor_pos = [1 for x, y in itertools.product(range(row-window, row+window+1), range(col-window, col+window+1)) if space[x % grid_size, y % grid_size] == household_type] fractions[row, col] = float(sum(neighbor_pos)-1) / ((2 * window + 1) **2 - 1) return fractions # Get the full happiness history happy_history = [] happy_mean_ts = [] for step in range(len(space_history)): happy_history.append(get_neighbor_distribution(space_history[step])) happy_mean_ts.append(numpy.nanmean(happy_history[-1])) # Method to plot happiness surface def display_happy_step(step=1): # Create figures f = plt.figure() plt.pcolor(happy_history[step]) plt.colorbar() # Create widget interact(display_happy_step, step=IntSliderWidget(min=1, max=len(happy_history)-1, step=1)) # Plot the average time series f = plt.figure() plt.plot(range(len(happy_mean_ts)), happy_mean_ts) ###Output _____no_output_____
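###Markdown A small self-contained check of the neighborhood rule used in `run_model_step` above: for a single household, the same-type count is taken over a (2*window+1)^2 box with wrap-around and the household itself is excluded. The 4x4 grid and the inspected cell below are illustrative values only.
###Code
# Toy walkthrough of the wrapped-neighborhood fraction computed in run_model_step.
import itertools
import numpy

space = numpy.array([[1, 0, 2, 2],
                     [0, 1, 1, 0],
                     [2, 1, 0, 2],
                     [0, 0, 2, 1]], dtype=numpy.int8)
grid_size = space.shape[0]
window = 1
row, col = 1, 1                          # inspect the type-1 household at (1, 1)
household_type = space[row, col]

# Wrapped positions of the (2*window+1)^2 box centred on (row, col)
neighbor_pos = [(x % grid_size, y % grid_size)
                for x, y in itertools.product(range(row - window, row + window + 1),
                                              range(col - window, col + window + 1))]
neighborhood = numpy.reshape([space[x, y] for x, y in neighbor_pos],
                             (2 * window + 1, 2 * window + 1))

# Same-type cells in the box, minus one for the household itself
neighbor_count = len(numpy.where(neighborhood == household_type)[0]) - 1
neighbor_fraction = neighbor_count / ((2 * window + 1) ** 2 - 1)
print(neighbor_count, neighbor_fraction)  # 3 same-type neighbours -> 3/8 = 0.375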
notebook/2019-03-14_evolution.ipynb
###Markdown Parse New Gene Table **from:** Maria D. VibranovskiHere attached is a list from Yong Zhang group based on our paper from 2010. But this is a still not published updated version that he shared with me but you can use.If you need details about the columns, please look at https://genome.cshlp.org/content/suppl/2010/08/27/gr.107334.110.DC1/SupplementalMaterial.pdf table 2a.But mainly, what you need to select is the child genes with:gene_type = D or R or DL or RLm_type= Mnote that contains "chrX-"D and R stands for DNA-based Duplication and RNA-based duplicationL means that the assignment of the parental genes is less reliable.M indicates that is between chromosome movement.Hope it helps. If you need I can parse for you. please, do not hesitate to ask. But I thought you would prefer a complete list where you can look at subsets.cheersMaria ###Code import os import sys from pathlib import Path import re from IPython.display import display, HTML, Markdown import numpy as np import pandas as pd from scipy.stats import fisher_exact, chi2_contingency from scipy.stats.contingency import margins import statsmodels.formula.api as smf import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns # Project level imports sys.path.insert(0, '../lib') from larval_gonad.notebook import Nb from larval_gonad.plotting import make_figs from larval_gonad.config import memory # Setup notebook nbconfig = Nb.setup_notebook(seurat_dir='../output/scrnaseq-wf/scrnaseq_combine_force') def adjusted_residuals(observed, expected): resid = (observed - expected) / np.sqrt(expected) n = observed.sum().sum() rsum, csum = margins(observed) v = csum * rsum * (n - rsum) * (n - csum) / n**3 return (observed - expected) / np.sqrt(v) ###Output _____no_output_____ ###Markdown Import data from Maria FBgn sanitizer I don't know where these FBgns are from, so I need to sanitize them to my current annotation. 
###Code assembly = nbconfig.assembly tag = nbconfig.tag pth = Path(os.environ['REFERENCES_DIR'], f'{assembly}/{tag}/fb_annotation/{assembly}_{tag}.fb_annotation') # Create an FBgn mapper = {} for record in pd.read_csv(pth, sep='\t').to_records(): mapper[record.primary_FBgn] = record.primary_FBgn try: for g in record.secondary_FBgn.split(','): mapper[g] = record.primary_FBgn except AttributeError: pass autosomes = ['chr2L', 'chr2R', 'chr3L', 'chr3R'] movement = ( pd.read_excel('../data/external/maria/dm6_ver78_genetype.new.xlsx') .query('gene_type == ["D", "R", "Dl", "Rl"] and m_type == "M"') .assign(child_chrom = lambda df: df.note.str.extract('(chr.*?)-')) .assign(parent_chrom = lambda df: df.note.str.extract('-(chr.*?)[:;]')) .assign(FBgn = lambda df: df.child_id.map(mapper)) .assign(parent_FBgn = lambda df: df.parent_id.map(mapper)) .drop(['child_id', 'parent_id', 'note', 'm_type'], axis=1) .dropna() .set_index('FBgn') .assign(moved_x_to_a = lambda df: (df.parent_chrom == 'chrX') & df.child_chrom.isin(autosomes)) .assign(moved_a_to_a = lambda df: df.parent_chrom.isin(autosomes) & df.child_chrom.isin(autosomes)) .assign(moved_a_to_x = lambda df: df.parent_chrom.isin(autosomes) & (df.child_chrom == 'chrX')) .query('moved_x_to_a | moved_a_to_a | moved_a_to_x') ) movement.head() biomarkers = ( nbconfig.seurat.get_biomarkers('res.0.6') .cluster.map(nbconfig.short_cluster_annot) .pipe(lambda x: x[x != 'UNK']) .to_frame() .reset_index() .groupby('FBgn') .apply(lambda x: '|'.join(x.cluster)) .rename('biomakrer_cluster') ) germ_comp = ( pd.read_csv('../output/scrnaseq-wf/germcell_deg/gonia_vs_cytes.tsv', sep='\t') .assign(FBgn = lambda df: df.primary_FBgn) .assign(gonia = lambda df: df.avg_logFC > 0) .assign(cyte = lambda df: df.avg_logFC < 0) .set_index('FBgn') .loc[:, ['gonia', 'cyte']] .idxmax(axis=1) .rename('bias_gonia_vs_cyte') ) biomarkers.head() df = ( movement.join(biomarkers, how='left') .join(germ_comp.rename('bias_gonia_vs_cyte_child'), how='left') .join(germ_comp.rename('bias_gonia_vs_cyte_parent'), on='parent_FBgn', how='left') ) df out_order = [ 'child_chrom', 'parent_chrom', 'parent_FBgn', 'gene_type', 'moved_x_to_a', 'moved_a_to_a', 'moved_a_to_x', 'biomakrer_cluster', 'bias_gonia_vs_cyte_child', 'bias_gonia_vs_cyte_parent' ] df.reindex(columns=out_order).reset_index().rename({'FBgn': 'child_FBgn'}, axis=1).fillna('nan').to_csv('../output/notebook/2019-03-14_movement_data.csv', index=None) print('\n'.join(out_order)) ###Output child_chrom parent_chrom parent_FBgn gene_type moved_x_to_a moved_a_to_a moved_a_to_x biomakrer_cluster bias_gonia_vs_cyte_child bias_gonia_vs_cyte_parent
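###Markdown The `adjusted_residuals` helper defined at the top of this notebook is not called anywhere in this excerpt; a minimal sketch of how it would typically be paired with `scipy.stats.chi2_contingency` follows. The 2x2 table of counts is invented purely for illustration.
###Code
# Illustrative use of the adjusted_residuals helper defined above: run a chi-square
# test of independence, then inspect which cells drive the deviation.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30.0, 70.0],
                     [45.0, 55.0]])            # made-up counts
chi2, pval, dof, expected = chi2_contingency(observed)

# Cells with |adjusted residual| larger than about 2 depart notably from independence.
resid = adjusted_residuals(observed, expected)
print(chi2, pval)
print(resid)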
robrisk/demo_static.ipynb
###Markdown Demo: competing methods for CVaR estimationThis demo covers the numerical tests of different methods for estimating the CVaR of loss functions, as done in the following paper.- Learning with risk-averse feedback under potentially heavy tails (cf. Sections 2.2 and 3). Matthew J. Holland and El Mehdi Haress. AISTATS 2021.The contents of this demo notebook are as follows:- Testing performance over sample size $n$- Testing performance over confidence parameter $\alpha$Assuming the user has read the README associated with this repository and done the required setup, all tests here can be easily run within this notebook. We make use of our `mml` repository just for M-estimation sub-routines. Testing performance over sample size $n$ ###Code ## External modules. import matplotlib.pyplot as plt import numpy as np import os ## Internal modules. from mml.utils.mest import est_loc_fixedpt, inf_gudermann, est_scale_chi_fixedpt, chi_geman_quad from setup_results import my_fontsize, my_ext, export_legend mth_dict = {"empirical": {"simple": "Empirical", "colour": "black"}, "catoni": {"simple": "Cat-12", "colour": "tab:green"}, "mom": {"simple": "MoM", "colour": "tab:orange"}, "rtrunc": {"simple": "R-Trunc", "colour": "tab:red"}} ## First, set task and data distribution. task_name = "STATIC-N" data_dist = "pareto" # set this freely. ss = np.random.SeedSequence() rg = np.random.default_rng(seed=ss) def x_gen(n, data_dist): if data_dist == "lognormal": return rg.lognormal(mean=0.0, sigma=1.95, size=n) elif data_dist == "fnorm": return np.abs(rg.normal(loc=0.0, scale=3.9, size=n)) elif data_dist == "pareto": return rg.pareto(a=2.15, size=n) * 3.8 ## Next, set the method names. mth_names_raw = ["empirical", "catoni", "mom", "rtrunc"] ## Next, set other experiment-related parameters. num_trials = 10000 alpha = 0.05 n_verify = 100000000 ## Catoni parameters and estimator prep. _thres_Mest = 1e-03 # threshold value for M-estimator computations. _iters_Mest = 50 # number of iterations for M-estimator computations. _delta = 0.02 # confidence level parameter. _s_min = 0.001 # to prevent overflow when dividing by small numbers. est_loc = lambda X, s: est_loc_fixedpt(X=X, s=s, inf_fn=inf_gudermann, thres=_thres_Mest, iters=_iters_Mest) est_scale = lambda X: est_scale_chi_fixedpt(X=X, chi_fn=chi_geman_quad) ## MoM parameters. _mom_k = 1+np.ceil(3.5*np.log(1/_delta)) print("MoM k value = {}".format(_mom_k)) ## Get an accurate estimate of the true CVaR value. x_verify = np.sort(x_gen(n=n_verify, data_dist=data_dist)) # first sample. var = x_verify[int(np.floor((1-alpha)*n_verify))] x_verify = x_gen(n=n_verify, data_dist=data_dist) # second sample. cvar = np.mean(x_verify[x_verify >= var]) / alpha del x_verify print(cvar) ###Output 496.25993489960194 ###Markdown Main test over $n$ ###Code n_samples = [750, 1000, 1500, 2500, 4500, 8500, 16500, 33000] n_tocheck = 8500 aves_emp = [] aves_cat12 = [] aves_mom = [] aves_rtrunc = [] sds_emp = [] sds_cat12 = [] sds_mom = [] sds_rtrunc = [] tocheck_dict = {} for idx_n in range(len(n_samples)): n_sample = n_samples[idx_n] print("Working (n = {})".format(n_sample)) diffs_emp = [] diffs_cat12 = [] diffs_mom = [] diffs_rtrunc = [] for t in range(num_trials): x_var = x_gen(n=int(np.floor(n_sample/2)), data_dist=data_dist) x_est = x_gen(n=int(np.floor(n_sample/2)), data_dist=data_dist) idx_break = int(np.floor(n_sample/2)) var_hat = np.sort(x_var)[int(np.floor((1-alpha)*x_var.size))] x_est_cond = x_est[x_est >= var_hat] ## Empirical. 
if x_est_cond.size < 1: cvar_hat_emp = np.max(x_est) / alpha else: cvar_hat_emp = np.mean(x_est_cond) / alpha diffs_emp += [np.abs(cvar_hat_emp-cvar)] ## Catoni. if x_est_cond.size < 2: cvar_hat_cat12 = np.max(x_est) / alpha else: s_est = np.sqrt(x_est.size/np.log(1/_delta)) s_est *= est_scale(x_est_cond-x_est_cond.mean()) s_est = np.maximum(s_est, _s_min) # ensure not too small. cvar_hat_cat12 = est_loc(X=x_est_cond, s=s_est).item() / alpha diffs_cat12 += [np.abs(cvar_hat_cat12-cvar)] ## Median-of-means. if x_est_cond.size < 2*_mom_k: if x_est_cond.size < 1: cvar_hat_mom = np.max(x_est) / alpha else: cvar_hat_mom = np.mean(x_est_cond) / alpha else: cvar_hat_mom = np.median([ val.mean() for val in np.array_split(x_est_cond, _mom_k) ]) / alpha diffs_mom += [np.abs(cvar_hat_mom-cvar)] ## Random truncation. if x_est_cond.size < 1: cvar_hat_rtrunc = np.max(x_est) / alpha else: u_hat = np.mean(x_est**2) b = np.sqrt(u_hat*(np.arange(x_est.size)+1)/np.log(1/_delta)) b_cond = b[x_est >= var_hat] x_est_cond_b = x_est_cond[x_est_cond <= b_cond] if x_est_cond_b.size < 1: cvar_hat_rtrunc = 0.0 else: cvar_hat_rtrunc = np.mean(x_est_cond_b) / alpha diffs_rtrunc += [np.abs(cvar_hat_rtrunc-cvar)] aves_emp += [np.mean(np.array(diffs_emp))] aves_cat12 += [np.mean(np.array(diffs_cat12))] aves_mom += [np.mean(np.array(diffs_mom))] aves_rtrunc += [np.mean(np.array(diffs_rtrunc))] sds_emp += [np.std(np.array(diffs_emp))] sds_cat12 += [np.std(np.array(diffs_cat12))] sds_mom += [np.std(np.array(diffs_mom))] sds_rtrunc += [np.std(np.array(diffs_rtrunc))] if n_sample == n_tocheck: tocheck_dict["empirical"] = np.array(diffs_emp) tocheck_dict["catoni"] = np.array(diffs_cat12) tocheck_dict["mom"] = np.array(diffs_mom) tocheck_dict["rtrunc"] = np.array(diffs_rtrunc) aves_dict = {"empirical": np.array(aves_emp), "catoni": np.array(aves_cat12), "mom": np.array(aves_mom), "rtrunc": np.array(aves_rtrunc)} sds_dict = {"empirical": np.array(sds_emp), "catoni": np.array(sds_cat12), "mom": np.array(sds_mom), "rtrunc": np.array(sds_rtrunc)} # Select methods for visualization. mth_todo_idx = [0, 1, 2, 3] mth_todo = [ mth_names_raw[m] for m in mth_todo_idx ] print("Methods to check (raw):", mth_todo) mth_todo_simple = [ mth_dict[mth]["simple"] for mth in mth_todo ] print("Methods to check (simplified):", mth_todo_simple) mth_todo_colours = [ mth_dict[mth]["colour"] for mth in mth_todo ] # Boxplot of all methods for particular "n" value. myfig, ax = plt.subplots(1, 1, figsize=(10,4)) perf_tostack = [] perf_tostack = [] for m in range(len(mth_todo)): perf_tostack += [tocheck_dict[mth_todo[m]]] ax.boxplot(x=np.vstack(perf_tostack).T, notch=False, labels=mth_todo_simple) ax.tick_params(labelsize=my_fontsize) ax.set_title("CVaR estimation error ({}, n={})".format(data_dist, n_tocheck), size=my_fontsize) fname = os.path.join("img", "{}_box_{}.{}".format(task_name, data_dist, my_ext)) plt.savefig(fname=fname, bbox_inches="tight") plt.show() # Do the visualization. 
myfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,5), sharey=True) for m in range(len(mth_todo_idx)): xvals = np.array(n_samples)[2:] yvals = aves_dict[mth_todo[m]][2:] ax1.plot(xvals, yvals, label=mth_todo_simple[m], color=mth_todo_colours[m]) ax1.tick_params(labelsize=my_fontsize) ax1.legend(loc=0, ncol=1, fontsize=my_fontsize) ax1.set_title("Averages", size=my_fontsize) for m in range(len(mth_todo)): xvals = np.array(n_samples)[2:] yvals = sds_dict[mth_todo[m]][2:] ax2.plot(xvals, yvals, label=mth_todo_simple[m], color=mth_todo_colours[m]) ax2.tick_params(labelsize=my_fontsize) ax2.set_title("Std Deviations", size=my_fontsize) fname = os.path.join("img", "{}_plot_{}.{}".format(task_name, data_dist, my_ext)) plt.savefig(fname=fname, bbox_inches="tight") plt.show() ###Output _____no_output_____ ###Markdown ___ Testing performance over confidence parameter $\alpha$ ###Code ## External modules. import matplotlib.pyplot as plt import numpy as np import os ## Internal modules. from mml.utils.mest import est_loc_fixedpt, inf_gudermann, est_scale_chi_fixedpt, chi_geman_quad from setup_results import my_fontsize, my_ext, export_legend mth_dict = {"empirical": {"simple": "Empirical", "colour": "black"}, "catoni": {"simple": "Cat-12", "colour": "tab:green"}, "mom": {"simple": "MoM", "colour": "tab:orange"}, "rtrunc": {"simple": "R-Trunc", "colour": "tab:red"}} ## First, set task and data distribution. task_name = "STATIC-A" data_dist = "pareto" # set this freely. ss = np.random.SeedSequence() rg = np.random.default_rng(seed=ss) def x_gen(n, data_dist): if data_dist == "lognormal": return rg.lognormal(mean=0.0, sigma=1.95, size=n) elif data_dist == "fnorm": return np.abs(rg.normal(loc=0.0, scale=3.9, size=n)) elif data_dist == "pareto": return rg.pareto(a=2.15, size=n) * 3.8 ## Next, set the method names. mth_names_raw = ["empirical", "catoni", "mom", "rtrunc"] ## Next, set other experiment-related parameters. num_trials = 10000 n_sample = 10000 n_verify = 100000000 ## Catoni parameters and estimator prep. _thres_Mest = 1e-03 # threshold value for M-estimator computations. _iters_Mest = 50 # number of iterations for M-estimator computations. _delta = 0.02 # confidence level parameter. _s_min = 0.001 # to prevent overflow when dividing by small numbers. est_loc = lambda X, s: est_loc_fixedpt(X=X, s=s, inf_fn=inf_gudermann, thres=_thres_Mest, iters=_iters_Mest) est_scale = lambda X: est_scale_chi_fixedpt(X=X, chi_fn=chi_geman_quad) ## MoM parameters. _mom_k = 1+np.ceil(3.5*np.log(1/_delta)) print("MoM k value = {}".format(_mom_k)) ###Output MoM k value = 15.0 ###Markdown Main test over $\alpha$ ###Code a_values = [0.15, 0.125, 0.10, 0.075, 0.05, 0.025] alpha_tocheck = 0.075 aves_emp = [] aves_cat12 = [] aves_mom = [] aves_rtrunc = [] sds_emp = [] sds_cat12 = [] sds_mom = [] sds_rtrunc = [] tocheck_dict = {} for idx_a in range(len(a_values)): alpha = a_values[idx_a] ## Get an accurate estimate of the true CVaR value. x_verify = np.sort(x_gen(n=n_verify, data_dist=data_dist)) # first sample. var = x_verify[int(np.floor((1-alpha)*n_verify))] x_verify = x_gen(n=n_verify, data_dist=data_dist) # second sample. 
cvar = np.mean(x_verify[x_verify >= var]) / alpha del x_verify print("Alpha = {}, CVaR = {}.".format(alpha,cvar)) diffs_emp = [] diffs_cat12 = [] diffs_mom = [] diffs_rtrunc = [] for t in range(num_trials): x_var = x_gen(n=int(np.floor(n_sample/2)), data_dist=data_dist) x_est = x_gen(n=int(np.floor(n_sample/2)), data_dist=data_dist) idx_break = int(np.floor(n_sample/2)) var_hat = np.sort(x_var)[int(np.floor((1-alpha)*x_var.size))] x_est_cond = x_est[x_est >= var_hat] ## Empirical. if x_est_cond.size < 1: cvar_hat_emp = np.max(x_est) / alpha else: cvar_hat_emp = np.mean(x_est_cond) / alpha diffs_emp += [np.abs(cvar_hat_emp-cvar)] ## Catoni. if x_est_cond.size < 2: cvar_hat_cat12 = np.max(x_est) / alpha else: s_est = np.sqrt(x_est.size/np.log(1/_delta)) s_est *= est_scale(x_est_cond-x_est_cond.mean()) s_est = np.maximum(s_est, _s_min) # ensure not too small. cvar_hat_cat12 = est_loc(X=x_est_cond, s=s_est).item() / alpha diffs_cat12 += [np.abs(cvar_hat_cat12-cvar)] ## Median-of-means. if x_est_cond.size < 2*_mom_k: if x_est_cond.size < 1: cvar_hat_mom = np.max(x_est) / alpha else: cvar_hat_mom = np.mean(x_est_cond) / alpha else: cvar_hat_mom = np.median([ val.mean() for val in np.array_split(x_est_cond, _mom_k) ]) / alpha diffs_mom += [np.abs(cvar_hat_mom-cvar)] ## Random truncation. if x_est_cond.size < 1: cvar_hat_rtrunc = np.max(x_est) / alpha else: u_hat = np.mean(x_est**2) b = np.sqrt(u_hat*(np.arange(x_est.size)+1)/np.log(1/_delta)) b_cond = b[x_est >= var_hat] x_est_cond_b = x_est_cond[x_est_cond <= b_cond] if x_est_cond_b.size < 1: cvar_hat_rtrunc = 0.0 else: cvar_hat_rtrunc = np.mean(x_est_cond_b) / alpha diffs_rtrunc += [np.abs(cvar_hat_rtrunc-cvar)] aves_emp += [np.mean(np.array(diffs_emp))] aves_cat12 += [np.mean(np.array(diffs_cat12))] aves_mom += [np.mean(np.array(diffs_mom))] aves_rtrunc += [np.mean(np.array(diffs_rtrunc))] sds_emp += [np.std(np.array(diffs_emp))] sds_cat12 += [np.std(np.array(diffs_cat12))] sds_mom += [np.std(np.array(diffs_mom))] sds_rtrunc += [np.std(np.array(diffs_rtrunc))] if alpha == alpha_tocheck: tocheck_dict["empirical"] = np.array(diffs_emp) tocheck_dict["catoni"] = np.array(diffs_cat12) tocheck_dict["mom"] = np.array(diffs_mom) tocheck_dict["rtrunc"] = np.array(diffs_rtrunc) aves_dict = {"empirical": np.array(aves_emp), "catoni": np.array(aves_cat12), "mom": np.array(aves_mom), "rtrunc": np.array(aves_rtrunc)} sds_dict = {"empirical": np.array(sds_emp), "catoni": np.array(sds_cat12), "mom": np.array(sds_mom), "rtrunc": np.array(sds_rtrunc)} # Select methods for visualization. mth_todo_idx = [0, 1, 2, 3] mth_todo = [ mth_names_raw[m] for m in mth_todo_idx ] print("Methods to check (raw):", mth_todo) mth_todo_simple = [ mth_dict[mth]["simple"] for mth in mth_todo ] print("Methods to check (simplified):", mth_todo_simple) mth_todo_colours = [ mth_dict[mth]["colour"] for mth in mth_todo ] # Boxplot of all methods for particular "n" value. myfig, ax = plt.subplots(1, 1, figsize=(10,4)) perf_tostack = [] perf_tostack = [] for m in range(len(mth_todo)): perf_tostack += [tocheck_dict[mth_todo[m]]] ax.boxplot(x=np.vstack(perf_tostack).T, notch=False, labels=mth_todo_simple) ax.tick_params(labelsize=my_fontsize) ax.set_title("CVaR estimation error ({}, alpha={})".format(data_dist, alpha_tocheck), size=my_fontsize) fname = os.path.join("img", "{}_box_{}.{}".format(task_name, data_dist, my_ext)) plt.savefig(fname=fname, bbox_inches="tight") plt.show() # Do the visualization. 
myfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,5), sharey=True) for m in range(len(mth_todo_idx)): xvals = np.array(a_values) yvals = aves_dict[mth_todo[m]] ax1.plot(xvals, yvals, label=mth_todo_simple[m], color=mth_todo_colours[m]) ax1.tick_params(labelsize=my_fontsize) ax1.legend(loc=0, ncol=1, fontsize=my_fontsize) ax1.set_title("Averages", size=my_fontsize) for m in range(len(mth_todo)): xvals = np.array(a_values) yvals = sds_dict[mth_todo[m]] ax2.plot(xvals, yvals, label=mth_todo_simple[m], color=mth_todo_colours[m]) ax2.tick_params(labelsize=my_fontsize) ax2.set_title("Std Deviations", size=my_fontsize) fname = os.path.join("img", "{}_plot_{}.{}".format(task_name, data_dist, my_ext)) plt.savefig(fname=fname, bbox_inches="tight") plt.show() ###Output _____no_output_____
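###Markdown The empirical estimate used repeatedly above can be pulled out into a few stand-alone lines, which may make the two-sample structure easier to see: the first half of the data fixes the VaR threshold, the second half averages the tail, and the result is scaled by 1/alpha exactly as in the cells above. The Pareto parameters mirror the `pareto` branch of `x_gen`; the sample size is illustrative.
###Code
# Stand-alone sketch of the two-sample empirical estimate computed above.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
x_var = rng.pareto(a=2.15, size=5000) * 3.8    # same Pareto form as x_gen above
x_est = rng.pareto(a=2.15, size=5000) * 3.8

var_hat = np.sort(x_var)[int(np.floor((1 - alpha) * x_var.size))]
tail = x_est[x_est >= var_hat]
cvar_hat_emp = (np.max(x_est) if tail.size < 1 else np.mean(tail)) / alpha
print(var_hat, cvar_hat_emp)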
2- Convolutional Neural Networks in TensorFlow/Exercise_2_Cats_vs_Dogs_using_augmentation_Question-FINAL.ipynb
###Markdown NOTE:In the cell below you **MUST** use a batch size of 10 (`batch_size=10`) for the `train_generator` and the `validation_generator`. Using a batch size greater than 10 will exceed memory limits on the Coursera platform. ###Code TRAINING_DIR = "/tmp/cats-v-dogs/training" train_datagen = ImageDataGenerator(rescale=1/255., width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') # NOTE: YOU MUST USE A BATCH SIZE OF 10 (batch_size=10) FOR THE # TRAIN GENERATOR. train_generator = train_datagen.flow_from_directory(TRAINING_DIR, batch_size=10, target_size=(150,150), class_mode='binary') VALIDATION_DIR = '/tmp/cats-v-dogs/testing/' validation_datagen = ImageDataGenerator(rescale=1/255., width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') # NOTE: YOU MUST USE A BACTH SIZE OF 10 (batch_size=10) FOR THE # VALIDATION GENERATOR. validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR, batch_size=10, target_size=(150,150), class_mode='binary') # Expected Output: # Found 2700 images belonging to 2 classes. # Found 300 images belonging to 2 classes. history = model.fit_generator(train_generator, epochs=2, verbose=1, validation_data=validation_generator) # PLOT LOSS AND ACCURACY %matplotlib inline import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- acc=history.history['acc'] val_acc=history.history['val_acc'] loss=history.history['loss'] val_loss=history.history['val_loss'] epochs=range(len(acc)) # Get number of epochs #------------------------------------------------ # Plot training and validation accuracy per epoch #------------------------------------------------ plt.plot(epochs, acc, 'r', "Training Accuracy") plt.plot(epochs, val_acc, 'b', "Validation Accuracy") plt.title('Training and validation accuracy') plt.figure() #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r', "Training Loss") plt.plot(epochs, val_loss, 'b', "Validation Loss") plt.title('Training and validation loss') # Desired output. Charts with training and validation metrics. No crash :) ###Output _____no_output_____
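###Markdown The `model` passed to `fit_generator` above is built in an earlier cell that is not part of this excerpt. A plausible minimal definition, assumed here only so the training call has something to run against, is a small convnet that matches the generators: 150x150 RGB inputs, a single sigmoid output for the binary labels, and an `'acc'` metric so the plotting cell's `history.history['acc']` lookup works.
###Code
# Hypothetical model definition compatible with the generators above; the notebook's
# real architecture is not shown in this excerpt.
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['acc'])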
MSEIR_Comparison.ipynb
###Markdown Gekko model ###Code tLat = city_args['tLat'] tInf = city_args['tInf'] tHosp = city_args['tHosp'] pMild = city_args['pMild'] pFatal= city_args['pFatal'] R0 = city_args['R_0'] N = city_args['N'] beta = R0/tInf # fraction of infected and recovered individuals e_initial = city_args['E0'] i_initial = city_args['I0'] h_initial = city_args['H0'] r_initial = city_args['R0'] d_initial = city_args['D0'] s_initial = city_args['S0'] m = GEKKO() u = m.MV(0,lb=0.0,ub=0.8) s,e,i,h,r,d = m.Array(m.Var,6) s.value = s_initial e.value = e_initial i.value = i_initial h.value = h_initial r.value = r_initial d.value = d_initial m.Equations([s.dt()== -(1 - u)*beta*s*i/N,\ e.dt()== (1 - u)*beta*s*i/N - e/tLat,\ i.dt()== e/tLat - i/tInf,\ h.dt()==(1-pMild)*i/tInf - h/tHosp,\ r.dt()== pMild*i/tInf + (1-pFatal)*h/tHosp,\ d.dt()== pFatal*h/tHosp]) t = np.linspace(0, city_args['n_periods'], int(city_args['n_periods']/2+1)) #t = np.insert(t,1,[0.001,0.002,0.004,0.008,0.02,0.04,0.08, 0.2,0.4,0.8]) m.time = t # initialize with simulation m.options.IMODE=7 m.options.NODES=3 m.solve(disp=False) max(h.value) # optimize m.options.IMODE=6 m.options.MAX_ITER = 2000 h.UPPER = city_args['Q'] u.STATUS = 1 m.options.SOLVER = 3 m.options.TIME_SHIFT = 1 s.value = s.value.value e.value = e.value.value i.value = i.value.value h.value = h.value.value r.value = r.value.value d.value = d.value.value m.Minimize(u) m.solve(disp=True) # plot the optimized response plt.figure(figsize=(16,10)) plt.subplot(3,1,1) plt.plot(m.time, s.value, color='blue', lw=3, ls='--', label='Optimal Susceptible') plt.plot(m.time, r.value, color='red', lw=3, ls='--', label='Optimal Recovered') plt.plot(m.time, d.value, color='black', lw=3, ls='--', label='Optimal Deceased') plt.ylabel('Fraction') plt.legend() plt.subplot(3,1,2) plt.plot(m.time, e.value, color='purple', ls='--', lw=3, label='Optimal Exposed') plt.plot(m.time, i.value, color='orange', ls='--', lw=3, label='Optimal Infected') plt.plot(m.time, h.value, color='dodgerblue', ls='--', lw=3, label='Optimal Hosp') plt.ylabel('Fraction') plt.legend() plt.subplot(3,1,3) plt.plot(m.time, u.value, 'k:', lw=3, label='Optimal (0=None, 1=No Interaction)') plt.ylabel('Social Distancing') plt.legend() plt.xlabel('Time (days)') plt.show() ###Output _____no_output_____ ###Markdown MSEIR model ###Code start = timer() model = MSEIR(**city_args) mseirRES = model.solve(U=0, optimize=True, solver='SLSQP', freq=1, hor=2.5, bounds=(0,0.8))[::20] fig0 = model.plot(mseirRES, comps='HD', size=(700,900), title='Model comparison: MSEIR vs Gekko+IPOPT') print(timer() - start) ###Output _____no_output_____ ###Markdown Comparison ###Code res_np = np.asarray([s,e,i,h,r,d,t,u,np.zeros(len(t)),np.zeros(len(t)),city_args['Q'] * np.ones(len(t))]).T df_names = ['S', 'E', 'I', 'H', 'R', 'D', 't', 'Uf', 'mInf', 'rInf', 'Q'] gekkoRES = pd.DataFrame(res_np, columns=df_names) fig1 = model.plot(gekkoRES, comps='HD') simps(mseirRES['Uf']) - simps(gekkoRES['Uf']) fig1['data'][1]['line']['color']='dodgerblue' fig1['data'][3]['line']['color']='dodgerblue' fig1['data'][2]['line']['color']='dodgerblue' figF = make_subplots(rows=2, cols=1, shared_xaxes=True, horizontal_spacing=0.01, vertical_spacing=0.01, row_heights=[0.2, 0.8], ) figF.add_trace(fig0['data'][1], row=1, col=1) figF.add_trace(fig0['data'][3], row=2, col=1) figF.add_trace(fig0['data'][2], row=2, col=1) figF.add_trace(fig1['data'][1], row=1, col=1) figF.add_trace(fig1['data'][3], row=2, col=1) figF.add_trace(fig1['data'][2], row=2, col=1) figF 
figF.update_yaxes(title_text="Control", row=1, col=1, nticks=4, showgrid=False) figF.update_yaxes(title_text="Compartments", row=2, col=1, nticks=4,showgrid=False) figF.update_layout(height=800, width=900, title='Model comparison: MSEIR vs Gekko+IPOPT', legend_orientation="h", legend={'bgcolor': 'rgba(0,0,0,0)', 'itemsizing': 'constant'}, title_x=0.45, title_y=0.93) ###Output _____no_output_____ ###Markdown Summary ###Code xT = pd.date_range(start = '2020-01-01', end = datetime.strptime('2020-01-01', "%Y-%m-%d")+timedelta(days=200), periods=730) maxI1 = (mseirRES['D'].max() + mseirRES['R'].max()) maxI2 = (gekkoRES['D'].max() + gekkoRES['R'].max()) maxD1 = mseirRES['D'].max().round(0) maxD2 = gekkoRES['D'].max().round(0) perD1 = mseirRES['D'].max() / (mseirRES['D'].max() + mseirRES['R'].max()) perD2 = gekkoRES['D'].max() / (gekkoRES['D'].max() + gekkoRES['R'].max()) perI1 = maxI1/mseirRES['S'].max() perI2 = maxI2/gekkoRES['S'].max() idx = mseirRES['Uf'].to_numpy().nonzero()[0] i = mseirRES['H'] - mseirRES['Q'] min_U1 = xT[idx[-1] if idx.size > 0 else 0] min_Ul1 = idx.size area_U1 = round(simps(mseirRES['Uf']),2) cost_U1 = round(simps((i+abs(i))/2) + simps(mseirRES['Uf']),2) idx = gekkoRES['Uf'].to_numpy().nonzero()[0] i = gekkoRES['H'] - gekkoRES['Q'] min_U2 = xT[idx[-1] if idx.size > 0 else 0].date() min_Ul2 = idx.size area_U2 = round(simps(gekkoRES['Uf']),2) cost_U2 = round(simps((i+abs(i))/2) + simps(gekkoRES['Uf']),2) total__ = [[area_U1, min_U1, min_Ul1, maxI1, perI1, maxD1, perD1, cost_U1], [area_U2, min_U2, min_Ul2, maxI2, perI2, maxD2, perD2, cost_U2]] cols = ['Control strength', 'Control release date', 'Control duration', 'Total Infected', 'Total Infected (% population)', 'Total Deceased', 'Total Deceased (% infected)', 'Final value of cost function'] table_summary = pd.DataFrame(total__, columns=cols) table_summary.index = ['Scenario 1', 'Scenario 2'] table_summary.T ###Output _____no_output_____
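###Markdown A scipy-only sketch of the same S, E, I, H, R, D dynamics that the Gekko block above encodes, with the control u held constant, can serve as a sanity check on either solver. The equations are copied from the Gekko cell; all numeric values below are placeholders, since `city_args` is not spelled out in this excerpt.
###Code
# Fixed-control simulation of the SEIHRD system written out in the Gekko cell,
# integrated with scipy.integrate.solve_ivp. Parameter values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N, R0 = 1e6, 2.5
tLat, tInf, tHosp = 5.0, 10.0, 8.0
pMild, pFatal = 0.9, 0.2
beta = R0 / tInf
u = 0.4                                   # constant social-distancing level

def rhs(t, y):
    s, e, i, h, r, d = y
    new_inf = (1 - u) * beta * s * i / N
    return [-new_inf,
            new_inf - e / tLat,
            e / tLat - i / tInf,
            (1 - pMild) * i / tInf - h / tHosp,
            pMild * i / tInf + (1 - pFatal) * h / tHosp,
            pFatal * h / tHosp]

y0 = [N - 10.0, 10.0, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 200.0), y0, t_eval=np.linspace(0.0, 200.0, 201))
print(sol.y[3].max())                     # peak hospitalised load under this fixed control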
SO_survey_analysis.ipynb
###Markdown In this notebook we'll explore the results of stackoverflow surveys from 2019, 2020, and 2021 Tech careers are becoming more and more desired over the years, promising good salaries and flexible work arrangement. This is an investigation related to tech careers salaries, in which we want to answer:**1.** Are the programming languages students are yearning to learn the same as those who are already developers and have a good salary over time?**2.** Do programmers who are compensated in USD earn higher salaries than those who earn in BRL?&emsp;**2.1** Are Brazilians' salaries from those who earn in USD higher than those who are compensated in BRL?**3.** Are salaries growing faster in Brazil or in the US? ###Code # Installing libraries !pip install -r requirements.txt from statistics import mean, median import plotly.graph_objects as go import pandas as pd pd.options.mode.chained_assignment = None pd.set_option('display.float_format', lambda x: '%.2f' % x) pd.set_option('display.max_rows', None) pd.set_option('display.max_columns', None) # Reading data df_answers_2019 = pd.read_csv('survey_results_public_2019.csv') df_answers_2019.name = '2019' df_answers_2020 = pd.read_csv('survey_results_public_2020.csv') df_answers_2020.name = '2020' df_answers = pd.read_csv('survey_results_public_2021.csv') df_answers.name = '2021' df_answers.head() ###Output _____no_output_____ ###Markdown 1. Are the programming languages students are yearning to learn the same as those who are already developers and have a good salary? ###Code def get_cols_names(df): ''' INPUT df - a dataframe with answers to the StackOverflow survey OUTPUT want_to_learn_col, worked_with_col, salary_col, currency_col - the names of the columns of interest, which may vary depending on the year of the survey ''' if df.name in ['2019','2020']: want_to_learn_col = 'LanguageDesireNextYear' worked_with_col = 'LanguageWorkedWith' salary_col = 'ConvertedComp' currency_col = 'CurrencySymbol' elif df.name == '2021': want_to_learn_col = 'LanguageWantToWorkWith' worked_with_col = 'LanguageHaveWorkedWith' salary_col = 'ConvertedCompYearly' currency_col = 'Currency' return want_to_learn_col, worked_with_col, salary_col, currency_col def drop_outliers(df,column,trim_pct=.15): ''' INPUT df - a dataframe with answers to the StackOverflow survey OUTPUT trim_pct - percentage to trim from the top of the input dataframe column - the column to sort the value in the dataframe before trimming it df - the input dataframe trimmed with top `trim_pct` of rows according to `column` ''' return df.sort_values(column,ascending=False).drop([x for x in range(int(len(df)*trim_pct))]) def create_df_by_main_branch(df,main_branch): return df[df['MainBranch'] == main_branch] def create_wanted_languages_df(df,df_type,want_to_learn_col,worked_with_col,salary_col): ''' INPUT df - a dataframe with answers to the StackOverflow survey df_type - if creating a dataframe from students answers to the survey we must not require they have already worked with any programming language, which is not the case with professional devs want_to_learn_col - the column with the programming languages the respondent want to learn/work with in the future worked_with_col - the column with the programming languages the respondent already learned/worked with salary_col - the column with the salaries from the respondents OUTPUT df_wanted_languages - a dataframe with how many respondents want to learn some programming languages and their salaries ''' wanted_languages = {} if df_type == 
'students': df[worked_with_col] = '' for i, row in df.iterrows(): p_languages = row[want_to_learn_col].split(';') for pl in p_languages: if pl not in wanted_languages and pl not in row[worked_with_col]: wanted_languages[pl] = {'respondents':1, 'salaries':[row[salary_col]]} continue if pl in wanted_languages and pl not in row[worked_with_col]: wanted_languages[pl]['respondents'] += 1 wanted_languages[pl]['salaries'].append(row[salary_col]) df_wanted_languages = pd.DataFrame.from_dict(wanted_languages, orient='index').reset_index()\ .rename(columns={'index': want_to_learn_col, 0: 'count'}) return df_wanted_languages def calculate_salaries_statistics(df,trim_pct=.1): ''' INPUT df - a dataframe with answers to the StackOverflow survey trim_pct - percentage to trim from bottom and top of the input dataframe to calculate statistics OUTPUT df - a dataframe with salaries statistics ''' df['respondents_for_calc'] = df['salaries'].apply(lambda x: len(x[int(len(x)*trim_pct):-int(len(x)*trim_pct)])) df = df.loc[df['respondents_for_calc'] >= 1, :] df['mean_salary'] = df['salaries'].apply(lambda x: mean(sorted(x)[int(len(x)*trim_pct):-int(len(x)*trim_pct)])) df['median_salary'] = df['salaries'].apply(lambda x: median(sorted(x)[int(len(x)*trim_pct):-int(len(x)*trim_pct)])) df['median_pct_of_mean'] = round(df['median_salary'] / df['mean_salary'],2) df = df.drop('salaries',axis=1) return df def filter_df_languages_devs(df,min_median_mean_ratio=.7,min_resp=100,n_head=10): ''' INPUT df - a dataframe with answers to the StackOverflow survey min_median_mean_ratio - the minimum median to mean of salaries ratio, the closer to 1 the value, the more closely distributed the values are min_resp - the minimum number of respondents that want to learn a new programming language n_head - the top n programming languages based on the number of respondents which want to learn them OUTPUT df - the input dataframe filtered based on `min_median_mean_ratio`, `min_resp` and `n_head` ''' df = df[(df['median_pct_of_mean'] >= min_median_mean_ratio) & (df['respondents_for_calc'] >= min_resp)]\ .sort_values('respondents_for_calc',ascending=False).head(n_head) return df dfs = [ df_answers_2019, df_answers_2020, df_answers ] df_devs_all = pd.DataFrame() df_students_all = pd.DataFrame() for df in dfs: # Getting proper column names for each dataframe want_to_learn_col, worked_with_col, salary_col, currency_col = get_cols_names(df) # Droping rows with top 15% salaries (maybe they were typed incorrectly or are much higher than typical salaries) df_answers_clean = drop_outliers(df,salary_col) # Since we want to know what devs with top earnings want to learn next, let's analyze the answers from devs # which salaries are amongst the top 20% df_answers_devs = create_df_by_main_branch(df_answers_clean,'I am a developer by profession')\ .dropna(subset=[worked_with_col,want_to_learn_col]) df_answers_devs = df_answers_devs.sort_values(salary_col,ascending=False).head(int(len(df_answers_devs)*.2)) # Creating a dataframe with the languages well compensated devs want to learn df_languages_devs = create_wanted_languages_df(df_answers_devs,'devs',want_to_learn_col,worked_with_col,salary_col) df_languages_devs = calculate_salaries_statistics(df_languages_devs,trim_pct =.1) # # Since we want to know what is a typical salary from each programming language, we'll filter our data removing # # those languages which had the median/mean ratio much less than 1, because this shows the salaries are in these cases # # vary a lot between the respondents 
df_languages_devs = filter_df_languages_devs(df_languages_devs,min_median_mean_ratio=.6,min_resp=100,n_head=5) df_languages_devs['year'] = df.name df_languages_devs.rename(columns={want_to_learn_col:'desired_language'},inplace=True) df_devs_all = pd.concat([df_devs_all,df_languages_devs]) # Creating a dataframe with the languages students most want to learn df_answers_students = create_df_by_main_branch(df_answers_clean, 'I am a student who is learning to code')\ .dropna(subset=[want_to_learn_col]) df_languages_students = create_wanted_languages_df(df_answers_students, 'students', want_to_learn_col, worked_with_col, salary_col) df_languages_students = df_languages_students.sort_values('respondents',ascending=False).head(5) df_languages_students['year'] = df.name df_languages_students.rename(columns={want_to_learn_col:'desired_language'},inplace=True) df_students_all = pd.concat([df_students_all,df_languages_students]) # Plotting top 5 programming languages desired by students over years fig = go.Figure() fig.add_trace(go.Bar(x=df_students_all['year'], y=df_students_all['respondents'], name='Respondents', text=df_students_all['desired_language'], textposition='inside', textfont_color='ghostwhite', marker={ 'color': df_students_all['respondents'], 'colorscale': 'redor'} )) fig.update_layout( title='Most Wanted Programming Languages by Students', xaxis_tickfont_size=16, yaxis=dict( title='Respondents', titlefont_size=16, tickfont_size=14, ), ) fig.show() # Plotting top 5 programming languages desired by well compensated programmers over years fig = go.Figure() fig.add_trace(go.Bar(x=df_devs_all['year'], y=df_devs_all['respondents'], name='Respondents', text=df_devs_all['desired_language'], textposition='inside', textfont_color='ghostwhite', marker={ 'color': df_devs_all['respondents'], 'colorscale': 'burg'} )) fig.update_layout( title='Most Wanted Programming Languages by Well Compensated Devs', xaxis_tickfont_size=16, yaxis=dict( title='Respondents', titlefont_size=16, tickfont_size=14, ), ) fig.show() ###Output _____no_output_____ ###Markdown As we can see, the programming languages students to learn are in general not the ones well compensated devs do. 2. Do programmers who are compensated in USD earn higher salaries than those who earn in BRL? 
###Code def create_dev_types_df(df,salary_col): ''' INPUT df - a dataframe with answers to the StackOverflow survey salary_col - the name of the column with respondents salaries in df OUTPUT df_dev_types - a dataframe with how many respondents are from each dev type and their salaries ''' dev_types = {} for i, row in df.iterrows(): types = row['DevType'].split(';') for t in types: if t not in dev_types: dev_types[t] = {'respondents':1, 'salaries':[row[salary_col]]} continue elif t in dev_types: dev_types[t]['respondents'] += 1 dev_types[t]['salaries'].append(row[salary_col]) df_dev_types = pd.DataFrame.from_dict(dev_types, orient='index').reset_index()\ .rename(columns={'index': 'dev_type', 0: 'count'}) return df_dev_types def create_df_by_country_currency(df,country,currency_col,currency): ''' INPUT df - a dataframe with answers to the StackOverflow survey country - the country you want to create a new dataframe only with respondents from it currency_col - the name of the column with the country's currency currency - the name of the currency itself OUTPUT df - a new dataframe filtered based on `country`, `currency_col` and `currency` ''' return df[(df['Country'] == country) & (df[currency_col] == currency)] # Diffent country/salary_currency combinations we want to explore in the 2021 survey results countries_currencies = [ ['Brazil','BRL\tBrazilian real'], ['Brazil','USD\tUnited States dollar'], ['United States of America','USD\tUnited States dollar'], ] df_devs_salaries = pd.DataFrame() for i, cc in enumerate(countries_currencies): country, currency_col = cc[0], cc[1] # Creating initial dataframe based on country and currency df = create_df_by_country_currency(df_answers,country,'Currency',currency_col) # Dropping NaN's because we want to calculate statistics based only on answers # and rows with top and bottom 15% of salaries to obtain typical salaries df.dropna(subset=['DevType','ConvertedCompYearly'],inplace=True) df = drop_outliers(df.reset_index(drop=True),'ConvertedCompYearly') # Creating a dataframe with salaries statistics df = create_dev_types_df(df,'ConvertedCompYearly') df = calculate_salaries_statistics(df,trim_pct =.1) df.drop(['respondents','respondents_for_calc','mean_salary','median_pct_of_mean'],axis=1,inplace=True) # Merging dataframes from different years if i == 0: df_devs_salaries = df else: df_devs_salaries = pd.merge(df_devs_salaries,df,on='dev_type',how='left') df_devs_salaries.rename(columns={ 'median_salary_x':'median_salary_br_brl', 'median_salary_y':'median_salary_br_usd', 'median_salary':'median_salary_us_usd' },inplace=True) # Removing `Others` from dev types to mantain only explicit professional types df_devs_salaries = df_devs_salaries.loc[df_devs_salaries['dev_type'] != 'Other (please specify):',:] # Plotting salaries in Brazil and in the US for different dev types df_devs_salaries.sort_values('median_salary_br_brl',ascending=False,inplace=True) fig = go.Figure() fig.add_trace(go.Bar(x=df_devs_salaries['dev_type'].head(10), y=df_devs_salaries['median_salary_br_brl'].head(10), name='Brazil', marker_color='rgb(234, 129, 113)' )) fig.add_trace(go.Bar(x=df_devs_salaries['dev_type'].head(10), y=df_devs_salaries['median_salary_us_usd'].head(10), name='US', marker_color='rgb(202, 82, 104)' )) fig.update_layout( height=700, title={ 'text':'Salaries By Dev Type', 'font':{'size':20} }, xaxis_tickfont_size=12, yaxis=dict( title='Salary (USD)', titlefont_size=16, tickfont_size=12, ), legend=dict( x=.9, y=1.015, bgcolor='rgba(255, 255, 255, 0)', bordercolor='rgba(255, 
255, 255, 0)', borderwidth=10 ), barmode='group', bargap=0.15, bargroupgap=0.1, font={'size':13} ) fig.show() ###Output _____no_output_____ ###Markdown 2.1 Are Brazilians' salaries from those who earn in USD higher than those who are compensated in BRL? ###Code # Plotting salaries for Brazilian professionals who are compensated in BRL and in USD df_devs_salaries_brazil = df_devs_salaries[['dev_type','median_salary_br_brl','median_salary_br_usd']].dropna() fig = go.Figure() fig.add_trace(go.Bar(x=df_devs_salaries_brazil['dev_type'], y=df_devs_salaries_brazil['median_salary_br_brl'], name='Brazil-BRL', marker_color='rgb(242, 185, 196)' )) fig.add_trace(go.Bar(x=df_devs_salaries_brazil['dev_type'], y=df_devs_salaries_brazil['median_salary_br_usd'], name='Brazil-USD', marker_color='rgb(229, 151, 185)' )) fig.update_layout( height=700, title={ 'text':'Salaries By Dev Type (Brazil)', 'font':{'size':20} }, xaxis_tickfont_size=12, yaxis=dict( title='Salary (USD)', titlefont_size=16, tickfont_size=12, ), legend=dict( x=.88, y=1.015, bgcolor='rgba(255, 255, 255, 0)', bordercolor='rgba(255, 255, 255, 0)', borderwidth=10 ), barmode='group', bargap=0.15, bargroupgap=0.1, font={'size':13} ) fig.show() ###Output _____no_output_____ ###Markdown As we can see, indeed American tech professionals who earn in USD are much better compensated than Brazilian tech professionals who earn in BRL in the same type of job 3. Are salaries growing faster in Brazil or in the US? ###Code def alter_fields_cols(df): ''' INPUT df - a dataframe with answers to the StackOverflow survey OUTPUT df - a new dataframe with necessary column's and field's names changed to proper concat ''' df.rename(columns={salary_col:'compensation_usd',currency_col:'currency'},inplace=True) df['Country'].replace('United States of America','United States',inplace=True) df['currency'].replace('USD\tUnited States dollar','USD',inplace=True) df['currency'].replace('BRL\tBrazilian real','BRL',inplace=True) return df df_all_years = pd.DataFrame() for df in dfs: df['year'] = df.name salary_col, currency_col = get_cols_names(df)[2], get_cols_names(df)[3] # Removing some unimportant columns df = df[['year','Country',currency_col,salary_col]] # Droping rows with top 15% salaries (maybe they were typed incorrectly or are much higher than typical salaries) df_answers_clean = drop_outliers(df.reset_index(drop=True),salary_col).dropna(subset=[salary_col]) # Different values in fields and columns in different datasets represent the same thing, so we must choose # standard values for they in order to properly concat and filter df_answers_clean = alter_fields_cols(df_answers_clean) df_all_years = pd.concat([df_all_years,df_answers_clean]) # Creating dataframes based on country and currency to plot median salaries over years # (filling NaN with 0 because the first year is the baseline, which should be 0 instead of NaN) df_all_years_br = create_df_by_country_currency(df_all_years,'Brazil','currency','BRL')\ .groupby('year').median('compensation_usd').reset_index() df_all_years_br['pct_change'] = df_all_years_br['compensation_usd'].pct_change().fillna(0)*100 df_all_years_us = create_df_by_country_currency(df_all_years,'United States','currency','USD')\ .groupby('year').median('compensation_usd').reset_index() df_all_years_us['pct_change'] = df_all_years_us['compensation_usd'].pct_change().fillna(0)*100 # Plotting % differences in median salaries in Brazil and in the US over years fig = go.Figure() fig.add_trace(go.Scatter(x=df_all_years_br['year'], 
y=df_all_years_br['pct_change'], name='Brazil-BRL', marker_color='rgb(245, 183, 142)', connectgaps=False )) fig.add_trace(go.Scatter(x=df_all_years_us['year'], y=df_all_years_us['pct_change'], name='United States-USD', marker_color='rgb(221, 104, 108)', connectgaps=False )) fig.update_layout( height=700, title={ 'text':'Salaries Growth Over Years', 'font':{'size':20} }, xaxis_tickfont_size=12, xaxis = dict( tickmode = 'array', tickvals = ['2019', '2020','2021'], ticktext = ['2019', '2020','2021'] ), yaxis=dict( title='% Change', titlefont_size=16, tickfont_size=12, ), legend=dict( x=0, y=1.015, bgcolor='rgba(255, 255, 255, 0)', bordercolor='rgba(255, 255, 255, 0)', borderwidth=10 ), barmode='group', bargap=0.15, bargroupgap=0.1, font={'size':13} ) fig.show() ###Output _____no_output_____
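###Markdown The growth statistic behind question 3 above reduces to a per-year median followed by `pct_change`. A toy version on an invented miniature table makes the arithmetic easy to verify by hand.
###Code
# Miniature version of the question-3 computation: median compensation per survey year,
# then percentage change between years. All numbers are invented for illustration.
import pandas as pd

toy = pd.DataFrame({
    'year':             ['2019'] * 3 + ['2020'] * 3 + ['2021'] * 3,
    'compensation_usd': [20000, 24000, 30000,
                         22000, 26000, 34000,
                         25000, 30000, 40000],
})
medians = toy.groupby('year')['compensation_usd'].median()
growth = medians.pct_change().fillna(0) * 100
print(medians)   # 24000, 26000, 30000
print(growth)    # 0.0 %, ~8.33 %, ~15.38 %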
computational-linguistics/ass-1/statistical_dependence.ipynb
###Markdown In this question, the goal is to compute the Pointwise Mutual Information (PMI) score for each pair of successive tokens. It is a measure of statistical dependence of the events $X_t = w_1$ and $X_{t+1} = w_2$. It is given by $$\mathrm{pmi}(w_1, w_2) = \log\frac{C(w_1 w_2) \cdot N}{C(w_1) \cdot C(w_2)}$$ To begin with, we first tokenize the corpora as usual and strip the punctuation so as not to obtain the biased score that punctuation tokens would otherwise introduce. As requested in the question, we also remove tokens with a net count of less than 10. Next, we also prepare the bigrams from the tokens obtained. Now the process is simple: we iterate through each of the bigrams and get their frequency as well as the frequency of both of the tokens that form the bigram. Once we have the counts we can compute the PMI value. ###Code #!/usr/bin/python3 # -*- coding: utf-8 -*- # author : Sangeet Sagar # e-mail : [email protected] # Organization: Universität des Saarlandes """ Calculate the pmi for all successive pairs (w1 , w2 ) of words in a corpus pmi(w1 , w2) = log[(C(w1 w2)*N) / (C(w1)*C(w2))] """ import nltk import math import string import operator import itertools import collections from nltk.util import ngrams import matplotlib.pyplot as plt def data_prep(filename): """Perform pre-processing steps on the input file and tokenize it. Args: filename (str): path to file Returns: list: tokens - list containing tokenized words of the input text file """ file_content = open(filename, 'r', encoding='utf-8-sig').read() file_content = file_content.lower() # Strip punctuations. Reference: https://stackoverflow.com/questions/265960/best-way-to-strip-punctuation-from-a-string file_content = file_content.translate( str.maketrans('', '', string.punctuation)) tokens_list = nltk.word_tokenize(file_content) # Remove tokens with frequency less than 10 tokens_list = [item for item in tokens_list if collections.Counter(tokens_list)[ item] >= 10] return tokens_list def compute_pmi(word_pair, N, global_dict): """Given a word pair and the corpus size, compute the PMI score Args: word_pair (tuple): tuple of word pair - (w1, w2) N (int): length of corpus global_dict (dict): dict containing the frequency of each token and token pair Returns: float: PMI score of the given word-pair """ counts = lookup(word_pair, global_dict) return math.log(((counts[2] * N))/(counts[0] * counts[1])) def lookup(word_pair, global_dict): """Compute counts of each word in the word pair and the word pair itself in the list of all token pairs Args: word_pair (tuple): tuple of word pair - (w1, w2) global_dict (dict): dict containing the frequency of each token and token pair Returns: list: list containing counts of w1, w2 (in the tokens list) and counts of word_pair (in the token_pairs list) """ Cw1 = global_dict.get(word_pair[0]) Cw2 = global_dict.get(word_pair[1]) Cw1_w2 = global_dict.get(word_pair) return [Cw1, Cw2, Cw1_w2] def print_scores(sort_tok_dict, l, rev=False): """Print PMI scores in a tabulated format Args: sort_tok_dict (dict): dictionary containing word-pairs as keys and PMI scores as values l (int): maximum word-pairs for which PMI scores have to be printed rev (bool): Choice to reverse the dict. Defaults to False.
""" # References: https://www.geeksforgeeks.org/python-get-first-n-keyvalue-pairs-in-given-dictionary/ if rev: out = dict(itertools.islice(sort_tok_dict.items(), len(sort_tok_dict)-l, len(sort_tok_dict))) out = dict(sorted(out.items(), key=operator.itemgetter(1), reverse=False)) else: out = dict(itertools.islice(sort_tok_dict.items(), l)) dash = '-' * 32 print(dash) print('{:<10s}{:>10s}{:>12s}'.format("w1", "w2", "pmi")) print(dash) for key, value in out.items(): print('{:<10s}{:>10s}{:>12s}'.format( key[0], key[1], str(format(value, ".3f")))) if __name__ == "__main__": """main function""" filename = "data/junglebook.txt" tokens = data_prep(filename) N = len(tokens) token_pairs = list(ngrams(tokens, 2)) # Get a dict with combined counts of unigrams and bigrams global_dict = nltk.FreqDist(tokens + token_pairs) # create a dict with keys= word pairs, and value= None tok_dict = dict.fromkeys(token_pairs) for word_pair, pmi_score in tok_dict.items(): pmi_score = compute_pmi(word_pair, N, global_dict) tok_dict[word_pair] = pmi_score sort_tok_dict = dict( sorted(tok_dict.items(), key=operator.itemgetter(1), reverse=True)) ###Output _____no_output_____ ###Markdown PMI scores are useful in a way that they help us interpret what words in the corpora carry the most context. Words with highest PMI scores have higher chances of occuring in pairs and thus these words carry more meaning. A good example can be `united states`, `fore paws`. ###Code print_scores(sort_tok_dict, l=20) ###Output -------------------------------- w1 w2 pmi -------------------------------- machua appa 8.287 literary archive 8.130 united states 7.987 darzees wife 7.699 archive foundation 7.604 cold lairs 7.448 gutenberg literary 7.293 stretched myself 7.188 petersen sahib 7.131 hind legs 6.988 fore paws 6.910 twenty yoke 6.850 whole line 6.718 electronic works 6.706 hind flippers 6.687 master words 6.669 years ago 6.641 bring news 6.623 mans cub 6.606 council rock 6.505 ###Markdown As can be seen the words with the lowest PMI score can combine with any word and therefore do not carry meaning. These are generally pronouns and prepositions like `he`, `of`, `and` etc. They can combine with any words to complete the sentence grammatically. ###Code print_scores(sort_tok_dict, l=20, rev=True) ###Output -------------------------------- w1 w2 pmi -------------------------------- he of -3.490 his the -3.318 the not -3.298 little the -3.001 the a -2.956 the be -2.849 a his -2.841 said of -2.602 he he -2.571 the no -2.538 in in -2.524 and is -2.493 a the -2.486 the if -2.477 they of -2.449 of they -2.449 very the -2.448 do the -2.404 to they -2.383 the could -2.365
matplotlib/gallery_jupyter/images_contours_and_fields/image_masked.ipynb
###Markdown Image Maskedimshow with masked array input and out-of-range colors.The second subplot illustrates the use of BoundaryNorm toget a filled contour effect. ###Code from copy import copy import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors # compute some interesting data x0, x1 = -5, 5 y0, y1 = -3, 3 x = np.linspace(x0, x1, 500) y = np.linspace(y0, y1, 500) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2) Z = (Z1 - Z2) * 2 # Set up a colormap: # use copy so that we do not mutate the global colormap instance palette = copy(plt.cm.gray) palette.set_over('r', 1.0) palette.set_under('g', 1.0) palette.set_bad('b', 1.0) # Alternatively, we could use # palette.set_bad(alpha = 0.0) # to make the bad region transparent. This is the default. # If you comment out all the palette.set* lines, you will see # all the defaults; under and over will be colored with the # first and last colors in the palette, respectively. Zm = np.ma.masked_where(Z > 1.2, Z) # By setting vmin and vmax in the norm, we establish the # range to which the regular palette color scale is applied. # Anything above that range is colored based on palette.set_over, etc. # set up the Axes objects fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(6, 5.4)) # plot using 'continuous' color map im = ax1.imshow(Zm, interpolation='bilinear', cmap=palette, norm=colors.Normalize(vmin=-1.0, vmax=1.0), aspect='auto', origin='lower', extent=[x0, x1, y0, y1]) ax1.set_title('Green=low, Red=high, Blue=masked') cbar = fig.colorbar(im, extend='both', shrink=0.9, ax=ax1) cbar.set_label('uniform') for ticklabel in ax1.xaxis.get_ticklabels(): ticklabel.set_visible(False) # Plot using a small number of colors, with unevenly spaced boundaries. im = ax2.imshow(Zm, interpolation='nearest', cmap=palette, norm=colors.BoundaryNorm([-1, -0.5, -0.2, 0, 0.2, 0.5, 1], ncolors=palette.N), aspect='auto', origin='lower', extent=[x0, x1, y0, y1]) ax2.set_title('With BoundaryNorm') cbar = fig.colorbar(im, extend='both', spacing='proportional', shrink=0.9, ax=ax2) cbar.set_label('proportional') fig.suptitle('imshow, with out-of-range and masked data') plt.show() ###Output _____no_output_____ ###Markdown ------------References""""""""""The use of the following functions and methods is shownin this example: ###Code import matplotlib matplotlib.axes.Axes.imshow matplotlib.pyplot.imshow matplotlib.figure.Figure.colorbar matplotlib.pyplot.colorbar matplotlib.colors.BoundaryNorm matplotlib.colorbar.ColorbarBase.set_label ###Output _____no_output_____
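###Markdown A small standalone aside (not part of the original gallery example): `BoundaryNorm` maps each data value onto the discrete bin it falls into, which is what produces the filled-contour effect in the second subplot. With `ncolors` set to the number of bins the output is simply the bin index; in the subplot above those indices are spread over the 256 colours of the palette. ###Code
# Check how BoundaryNorm bins values, using the same uneven boundaries
# as the second subplot above.
import numpy as np
import matplotlib.colors as colors

norm = colors.BoundaryNorm([-1, -0.5, -0.2, 0, 0.2, 0.5, 1], ncolors=6)
samples = np.array([-0.9, -0.3, -0.1, 0.1, 0.3, 0.9])
print(norm(samples))  # bin index (0-5) for each in-range sample
###Output _____no_output_____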
session-6/MultiVarMultiStep.ipynb
###Markdown ###Code %tensorflow_version 2.x import tensorflow as tf import os !wget --no-check-certificate \ https://raw.githubusercontent.com/mohmiim/MLIntroduction/master/session-6/data/combined_csv.csv \ -O /tmp/weather.csv import pandas as pd import matplotlib.pyplot as plt import csv import numpy as np import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import LSTM,Conv1D,Dense,MaxPooling1D,Flatten, Conv1D, Dropout,TimeDistributed, ConvLSTM2D,RepeatVector from tensorflow.keras.optimizers import SGD from tensorflow.keras.losses import Huber from math import sqrt df = pd.read_csv("/tmp/weather.csv") columnsToConsider = ['p (mbar)', 'T (degC)', 'rho (g/m**3)'] features = df[columnsToConsider] features.index = df['Date Time'] features.plot(subplots=True) plt.show() TRAIN_DATA_SIZE = 260000 dataset = features.values data_mean = dataset[:TRAIN_DATA_SIZE].mean(axis=0) data_std = dataset[:TRAIN_DATA_SIZE].std(axis=0) dataset = (dataset-data_mean)/data_std print(len(dataset)) training_data = dataset[:TRAIN_DATA_SIZE] validation_data = dataset[TRAIN_DATA_SIZE:] print(len(training_data)) print(len(validation_data)) print(data_mean) print(data_std) def sequenceData(dataset, target, start_index, end_index, history_size,target_size, step, single_step=False): X = [] y = [] start_index = start_index + history_size if end_index is None: end_index = len(dataset) - target_size for i in range(start_index, end_index): indices = range(i-history_size, i, step) X.append(dataset[indices]) if single_step: y.append(target[i+target_size]) else: y.append(target[i:i+target_size]) return np.array(X), np.array(y) LOOK_AHEAD = 72 STEP = 6 WINDOW_SIZE = 1440 BATCH_SIZE = 256 BUFFER_SIZE = 1000 SEQ = 10 N_LENGTH = int((WINDOW_SIZE/STEP)/SEQ) CONVLSTM = True X_train, y_train = sequenceData(dataset, dataset[:, 1], 0, TRAIN_DATA_SIZE, WINDOW_SIZE, LOOK_AHEAD, STEP) X_val, y_val = sequenceData(dataset, dataset[:, 1], TRAIN_DATA_SIZE, None, WINDOW_SIZE, LOOK_AHEAD, STEP) print(X_train.shape) print(y_train.shape) print(X_val.shape) print(y_val.shape) if CONVLSTM: print(X_train.shape) X_train = X_train.reshape(X_train.shape[0],SEQ,1,N_LENGTH,X_train.shape[2]) X_val = X_val.reshape(X_val.shape[0],SEQ,1,N_LENGTH,X_val.shape[2]) print(X_train.shape) train_data = tf.data.Dataset.from_tensor_slices((X_train, y_train)) train_data = train_data.shuffle(BUFFER_SIZE).batch(BATCH_SIZE) val_data = tf.data.Dataset.from_tensor_slices((X_val, y_val)) val_data = val_data.batch(BATCH_SIZE) def create_time_steps(length): return list(range(-length, 0)) def multi_step_plot(history, true_future, prediction): plt.figure(figsize=(12, 6)) num_in = create_time_steps(len(history)) num_out = len(true_future) plt.plot(num_in, np.array(history[:, 1]), label='History') plt.plot(np.arange(num_out)/STEP, np.array(true_future), 'bo', label='True Future') if prediction.any(): plt.plot(np.arange(num_out)/STEP, np.array(prediction), 'ro', label='Predicted Future') plt.legend(loc='upper left') plt.show() for x, y in train_data.take(1): if CONVLSTM : print(x[0].shape) x = x[0].numpy().reshape(SEQ*N_LENGTH,x.shape[4]) print(x.shape) multi_step_plot(x, y[0], np.array([0])) else: multi_step_plot(x[0], y[0], np.array([0])) def createConv_LSTMModel() : model = Sequential() model.add(Conv1D(512, 5, activation='relu', input_shape=X_train.shape[-2:])) model.add(Conv1D(512, 5, activation='relu')) model.add(MaxPooling1D()) model.add(LSTM(144,return_sequences=True,activation='relu')) model.add(LSTM(72, activation='relu')) 
model.add(Dense(72)) model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mae') return model def createConvLSTM() : model = Sequential() model.add(ConvLSTM2D(64, (1,3), activation='relu', input_shape=(SEQ, 1, N_LENGTH, X_train.shape[4]))) model.add(Flatten()) model.add(RepeatVector(72)) model.add(LSTM(200, activation='relu', return_sequences=True)) model.add(TimeDistributed(Dense(100, activation='relu'))) model.add(TimeDistributed(Dense(1))) model.compile(loss='mse', optimizer='adam') model.summary() return model def createLSTMModel() : model = Sequential() model.add(LSTM(32,return_sequences=True,activation='relu',input_shape=X_train.shape[-2:])) model.add(LSTM(16, activation='relu')) model.add(Dense(72)) model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mse') return model if CONVLSTM: model = createConvLSTM() else: model = createLSTMModel() EVALUATION_INTERVAL = 200 EPOCHS = 20 history = model.fit(train_data, epochs=EPOCHS) for X, y in val_data.take(10): if CONVLSTM: multi_step_plot(X[0].numpy().reshape(SEQ*N_LENGTH,X.shape[4]), y[0], model.predict(X)[0]) else : multi_step_plot(X[0], y[0], model.predict(X)[0]) ###Output _____no_output_____
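###Markdown The notebook stops at visual inspection of a few validation windows. A rough way to quantify performance is to score the trained model on all validation windows and convert the error back into degrees Celsius; the sketch below is only an illustration and assumes `model`, `X_val`, `y_val`, `BATCH_SIZE` and `data_std` from the cells above (the temperature target was standardised with `data_std[1]`). ###Code
import numpy as np

# Predict every validation window; the reshape handles both the (samples, 72)
# LSTM output and the (samples, 72, 1) ConvLSTM output.
preds = model.predict(X_val, batch_size=BATCH_SIZE).reshape(len(X_val), -1)
truth = y_val.reshape(len(y_val), -1)

mae_norm = np.mean(np.abs(preds - truth))
rmse_norm = np.sqrt(np.mean((preds - truth) ** 2))

# Undo the standardisation of the temperature column (index 1).
print("Validation MAE  (deg C):", mae_norm * data_std[1])
print("Validation RMSE (deg C):", rmse_norm * data_std[1])
###Output _____no_output_____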
demos/TitanDemo.ipynb
###Markdown In this notebook, we demonstrate how to create and modify a Titan graph in python, and then visualize the result using Graphistry's visual graph explorer. We assume the gremlin server for our Titan graph is hosted locally on port 8182 - This notebook utilizes the python modules aiogremlin and asyncio. - The GremlinClient class of aiogremlin communicates asynchronously with the gremlin server using websockets via asyncio coroutines. - This implementation allows you to submit additional requests to the server before any responses are recieved, which is much faster than synchronous request / response cycles. - For more information about these modules, please visit: - aiogremlin: http://aiogremlin.readthedocs.org/en/latest/index.html - asyncio: https://pypi.python.org/pypi/asyncio ###Code import asyncio import aiogremlin # Create event loop and initialize gremlin client loop = asyncio.get_event_loop() client = aiogremlin.GremlinClient(url='ws://localhost:8182/', loop=loop) # Default url ###Output _____no_output_____ ###Markdown Functions for graph modification ###Code @asyncio.coroutine def add_vertex_routine(name, label): yield from client.execute("graph.addVertex(label, l, 'name', n)", bindings={"l":label, "n":name}) def add_vertex(name, label): loop.run_until_complete(add_vertex_routine(name, label)) @asyncio.coroutine def add_relationship_routine(who, relationship, whom): yield from client.execute("g.V().has('name', p1).next().addEdge(r, g.V().has('name', p2).next())", bindings={"p1":who, "p2":whom, "r":relationship}) def add_relationship(who, relationship, whom): loop.run_until_complete(add_relationship_routine(who, relationship, whom)) @asyncio.coroutine def remove_all_vertices_routine(): resp = yield from client.submit("g.V()") results = [] while True: msg = yield from resp.stream.read(); if msg is None: break if msg.data is None: break for vertex in msg.data: yield from client.submit("g.V(" + str(vertex['id']) + ").next().remove()") def remove_all_vertices(): results = loop.run_until_complete(remove_all_vertices_routine()) @asyncio.coroutine def remove_vertex_routine(name): return client.execute("g.V().has('name', n).next().remove()", bindings={"n":name}) def remove_vertex(name): return loop.run_until_complete(remove_vertex_routine(name)); ###Output _____no_output_____ ###Markdown Functions for translating a graph to node and edge lists: - Currently, our API can only upload data from a pandas DataFrame, but we plan to implement more flexible uploads in the future. - For now, we can rely on the following functions to create the necessary DataFrames from our graph. 
###Code @asyncio.coroutine def get_node_list_routine(): resp = yield from client.submit("g.V().as('node')\ .label().as('type')\ .select('node').values('name').as('name')\ .select('name', 'type')") results = []; while True: msg = yield from resp.stream.read(); if msg is None: break; if msg.data is None: break; else: results.extend(msg.data) return results def get_node_list(): results = loop.run_until_complete(get_node_list_routine()) return results @asyncio.coroutine def get_edge_list_routine(): resp = yield from client.submit("g.E().as('edge')\ .label().as('relationship')\ .select('edge').outV().values('name').as('source')\ .select('edge').inV().values('name').as('dest')\ .select('source', 'relationship', 'dest')") results = []; while True: msg = yield from resp.stream.read(); if msg is None: break; if msg.data is None: break; else: results.extend(msg.data) return results def get_edge_list(): results = loop.run_until_complete(get_edge_list_routine()) return results ###Output _____no_output_____ ###Markdown Let's start with an empty graph: ###Code remove_all_vertices() ###Output _____no_output_____ ###Markdown And then populate it with the Graphistry team members and some of thier relationships: ###Code add_vertex("Paden", "Person") add_vertex("Thibaud", "Person") add_vertex("Leo", "Person") add_vertex("Matt", "Person") add_vertex("Brian", "Person") add_vertex("Quinn", "Person") add_vertex("Paul", "Person") add_vertex("Lee", "Person") add_vertex("San Francisco", "Place") add_vertex("Oakland", "Place") add_vertex("Berkeley", "Place") add_vertex("Turkey", "Thing") add_vertex("Rocks", "Thing") add_vertex("Motorcycles", "Thing") add_relationship("Paden", "lives in", "Oakland") add_relationship("Quinn", "lives in", "Oakland") add_relationship("Thibaud", "lives in", "Berkeley") add_relationship("Matt", "lives in", "Berkeley") add_relationship("Leo", "lives in", "San Francisco") add_relationship("Paul", "lives in", "San Francisco") add_relationship("Brian", "lives in", "Oakland") add_relationship("Paden", "eats", "Turkey") add_relationship("Quinn", "cooks", "Turkey") add_relationship("Thibaud", "climbs", "Rocks") add_relationship("Matt", "climbs", "Rocks") add_relationship("Brian", "rides", "Motorcycles") add_vertex("Graphistry", "Work") add_relationship("Paden", "works at", "Graphistry") add_relationship("Thibaud", "works at", "Graphistry") add_relationship("Matt", "co-founded", "Graphistry") add_relationship("Leo", "co-founded", "Graphistry") add_relationship("Paul", "works at", "Graphistry") add_relationship("Quinn", "works at", "Graphistry") add_relationship("Brian", "works at", "Graphistry") ###Output _____no_output_____ ###Markdown Now, let's convert our graph database to a pandas DataFrame, so it can be uploaded into our tool: ###Code import pandas nodes = pandas.DataFrame(get_node_list()) edges = pandas.DataFrame(get_edge_list()) ###Output _____no_output_____ ###Markdown And color the nodes based on their "type" property: ###Code # Assign different color to each type in a round robin fashion. # For more information and coloring options please visit: https://graphistry.github.io/docs/legacy/api/0.9.2/api.html unique_types = list(nodes['type'].unique()) nodes['color'] = nodes['type'].apply(lambda x: unique_types.index(x) % 11) nodes edges ###Output _____no_output_____ ###Markdown Finally, let's vizualize the results! 
###Code import graphistry graphistry.register(key='YOUR API KEY') plotter = graphistry.bind(source="source", destination="dest", node='name', point_color='color', edge_title='relationship') plotter.plot(edges, nodes) ###Output _____no_output_____
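###Markdown As a quick usage example of the helpers defined above, the graph can keep growing and be re-plotted in the same way. The vertices added below ("Alice", "Climbing Gym") are made-up placeholders and not part of the original demo. ###Code
# Extend the graph with hypothetical entries using the helper functions above,
# then rebuild the DataFrames and push the updated graph to Graphistry.
add_vertex("Alice", "Person")
add_vertex("Climbing Gym", "Place")
add_relationship("Alice", "works at", "Graphistry")
add_relationship("Alice", "lives in", "Berkeley")
add_relationship("Thibaud", "climbs at", "Climbing Gym")

nodes = pandas.DataFrame(get_node_list())
edges = pandas.DataFrame(get_edge_list())
unique_types = list(nodes['type'].unique())
nodes['color'] = nodes['type'].apply(lambda x: unique_types.index(x) % 11)
plotter.plot(edges, nodes)
###Output _____no_output_____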
docs/examples/epochs_create.ipynb
###Markdown How to create epochs So, in your experiment, participants undergo a number of trials (events) and these events are possibly of different conditions. And you are wondering how can you locate these events on your signals and perhaps make them into epochs for future analysis?This example shows how to use Neurokit to extract epochs from data based on events localisation. In case you have multiple data files for each subject, this example also shows you how to create a loop through the subject folders and put the files together in an epoch format for further analysis. ###Code # Load NeuroKit and other useful packages import neurokit2 as nk import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [15, 5] # Bigger images ###Output _____no_output_____ ###Markdown In this example, we will use a short segment of data which has ECG, EDA and respiration (RSP) signals. One signal with multiple event markings ###Code # Retrieve ECG data from data folder (sampling rate= 1000 Hz) data = nk.data("bio_eventrelated_100hz") ###Output _____no_output_____ ###Markdown Besides the signal channels, this data also has a fourth channel which consists of a string of 0 and 5. This is a binary marking of the Digital Input channel in BIOPAC. Let's visualize the event-marking channel below. ###Code # Visualize the event-marking channel plt.plot(data['Photosensor']) ###Output _____no_output_____ ###Markdown Depends on how you set up your experiment, the onset of the event can either be marked by signal going from 0 to 5 or vice versa. Specific to this data, the onsets of the events are marked where the signal in the event-marking channel goes from 5 to 0 and the offsets of the events are marked where the signal goes from 0 to 5.As shown in the above figure, there are four times the signal going from 5 to 0, corresponding to the 4 events (4 trials) in this data. There were 2 types (the condition) of images that were shown to the participant: “Negative” vs. “Neutral” in terms of emotion. Each condition had 2 trials. The following list is the condition order. ###Code condition_list = ["Negative", "Neutral", "Neutral", "Negative"] ###Output _____no_output_____ ###Markdown Before we can epoch the data, we have to locate the events and extract their related information. This can be done using Neurokit function [events_find()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.events_find>). ###Code # Find events events = nk.events_find(event_channel=data["Photosensor"], threshold_keep='below', event_conditions=condition_list) events ###Output _____no_output_____ ###Markdown The output of [events_find()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.events_find>) gives you a `dictionary` that contains the information of event onsets, event duration, event label and event condition. As stated, as the event onsets of this data are marked by event channel going from 5 to **0**, the `threshold_keep` is set to `below`. Depends on your data, you can customize the `arguments` in [events_find()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.events_find>) to correctly locate the events. You can use the [events_plot()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.events_plot>) function to plot the events that have been found together with your event channel to confirm that it is correct. 
###Code plot = nk.events_plot(events, data['Photosensor']) ###Output _____no_output_____ ###Markdown Or you can visualize the events together with the all other signals. ###Code plot = nk.events_plot(events, data) ###Output _____no_output_____ ###Markdown After you have located the events, you can now create epochs using the NeuroKit [epochs_create()](https://neurokit2.readthedocs.io/en/latest/functions.htmlneurokit2.epochs_create>) function. However, we recommend to process your signal first before cutting them to smaller epochs. You can read more about processing of physiological signals using NeuroKit in [Custom your Processing Pipeline](https://neurokit2.readthedocs.io/en/latest/examples/custom.html>) Example. ###Code # Process the signal df, info = nk.bio_process(ecg=data["ECG"], rsp=data["RSP"], eda=data["EDA"], sampling_rate=100) ###Output _____no_output_____ ###Markdown Now, let's think about how we want our epochs to be like. For this example, we want: 1. Epochs to start *1 second before the event onset* 2. Epochs to end *6 seconds* afterwardsThese are passed into the `epochs_start` and `epochs_end` arguments, respectively. Our epochs will then cover the region from **-1 s** to **+6 s** relative to the onsets of events (i.e., 700 data points since the signal is sampled at 100Hz). ###Code # Build and plot epochs epochs = nk.epochs_create(df, events, sampling_rate=100, epochs_start=-1, epochs_end=6) ###Output _____no_output_____ ###Markdown And as easy as that, you have created a dictionary of four dataframes, each correspond to an epoch of the event. Here, in the above example, all your epochs have the same starting time and ending time, specified by `epochs_start` and `epochs_end`. Nevertheless, you can also pass a list of different timings to these two arguments to customize the duration of the epochs for each of your events. One subject with multiple data files In some experimental designs, instead of having one signal file with multiple events, each subject can have multiples files where each file is the record of one event. In the following example, we will show you how to create a loop through the subject folders and put the files together in an epoch format for further analysis. Firstly, let's say your data is arranged as the following where each subject has a folder and in each folder there are multiple data files corresponding to different events: ```[Experiment folder]|└── Data| || └── Subject_001/| | │ event_1.[csv]| | │ event_2.[csv]| | |__ ......| └── Subject_002/| │ event_1.[csv]| │ event_2.[csv]| |__ ......└── analysis_script.py``` The following will illustrate how your analysis script might look like. Try to re-create such data structure and the analysis script in your computer! Now, in our analysis scripts, let's load the necessary packages: ###Code # Load packages import pandas as pd import os ###Output _____no_output_____ ###Markdown Assuming that your working directory is now at your analysis script, and you want to read all the data files of `Subject_001`. 
Your analysis script should look something like below: ###Code import numpy as np  # np.full is used below # Your working directory should be at Experiment folder participant = 'Subject_001' sampling_rate=100 # List all data files in Subject_001 folder all_files = os.listdir('Data/' + participant) # Create an empty dictionary to store your files (events) epochs = {} # Loop through each file in the subject folder for i, file in enumerate(all_files): # Read the file data = pd.read_csv('Data/' + participant + '/' + file) # Add a Label column (e.g. Label 1 for epoch 1) data['Label'] = np.full(len(data), str(i+1)) # Set index of data to time in seconds index = data.index/sampling_rate data = data.set_index(pd.Series(index)) # Append the file into the dictionary epochs[str(i + 1)] = data epochs ###Output _____no_output_____
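###Markdown However the `epochs` dictionary was built (with `epochs_create()` above or with the file loop here), each entry is a plain DataFrame, so standard pandas plotting works on it. The sketch below is only an illustration: `ECG_Rate` is a placeholder column name (substitute any column your processing step actually produced), and for `epochs_create()` output the index value 0 corresponds to the event onset. ###Code
# Illustrative only: overlay one (placeholder) signal column from every epoch.
import matplotlib.pyplot as plt

for label, epoch in epochs.items():
    # Each epoch is a DataFrame indexed by time in seconds
    plt.plot(epoch.index, epoch["ECG_Rate"], label="Epoch " + str(label))

plt.axvline(0, color="grey", linestyle="--")  # event onset for epochs_create() output
plt.xlabel("Time (s)")
plt.legend()
###Output _____no_output_____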
Nearby Exoplanet Map DataVis.ipynb
###Markdown Turning RA/Dec/Distance into 3D coordinates with astropy: ###Code from astropy.coordinates import SkyCoord import astropy.units as u sc=SkyCoord('17h57m48.49803s +04d41m36.2072s',unit=(u.hourangle, u.deg)) np.sum(np.array(['Barnard' in stname for stname in table.pl_hostname.values])) #Adding barnard's star because it's not in the table yet :P extrarow=pd.Series({'pl_hostname':"Barnards Star",'pl_letter':'b','pl_name':"Barnards Star b",'pl_discmethod':"Radial Velocity", 'st_mass':0.144,'st_rad':0.196,'st_dist':1.8266,'pl_bmassj':3.0,'pl_orbper':240,'pl_orbsmax':0.38358, 'ra_str':'17h57m48.49803s','ra':sc.ra.deg,'dec_str':'+04d41m36.2072s','dec':sc.dec.deg},name='3836') table=table.append(extrarow) ###Output _____no_output_____ ###Markdown Taking only nearby stars: ###Code near=table[table.st_dist<20] allscs=SkyCoord(near.ra,near.dec,distance=near.st_dist,unit=(u.deg,u.deg,u.pc)) near['b']=allscs.galactic.b.deg near['l']=allscs.galactic.l.deg near['d']=allscs.galactic.distance # Placeholder as I work out the orientation of these 3D coordinates... # #3.35170258, 23.67992809, 4.31 -> 3.94036076, 0.230767484, 1.73101227 # ^ towards galactic centre but moderately above. #90.0626363 , -34.72725534, 15.47 -> -0.0138994771, 12.7143699e+01, -8.8128034 # ^ directly east of galactic and below plane # in-out, left-right, up-down near['gal_inwards']=allscs.galactic.cartesian.x near['gal_eastwards']=allscs.galactic.cartesian.y near['gal_upwards']=allscs.galactic.cartesian.z ###Output C:\Users\Home User\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy """Entry point for launching an IPython kernel. C:\Users\Home User\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy C:\Users\Home User\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy This is separate from the ipykernel package so we can avoid doing imports until ###Markdown For some reason some semi-major axes values are missing, so replacing manually: ###Code near.loc[pd.isnull(near.pl_orbsmax)] ###Output _____no_output_____ ###Markdown Taking colour of the points from the stellar mass, so using `digitize` to bin these: ###Code hist=plt.hist(near.st_mass,9) near['colorbin']=np.digitize(near.st_mass,hist[1])-1 ###Output C:\Users\Home User\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy ###Markdown Tidying up the CSV: ###Code near.loc[near.pl_name=='GJ 676 A e','pl_orbsmax']=0.187 near.loc[near.pl_name=='HD 189733 b','pl_orbsmax']=0.03099 near.loc[near.pl_name=='GJ 676 A d','pl_orbsmax']=0.0413 near.loc[near.pl_name=='TRAPPIST-1 h','pl_orbsmax']=0.063 near.loc[near.pl_name=='HD 26965 b','pl_orbsmax']=0.224 [strname for strname in near.pl_hostname if len(strname)>12] near.loc[near.pl_name=='VHS J125601.92-125723.9','pl_hostname']='VHS J1256' near.loc[near.pl_name=='WISEP J121756.91+162640.2 A','pl_hostname']='WISEP J1217' near.loc[near.pl_name=='VHS J125601.92-125723.9','pl_name']='VHS J1256 b' near.loc[near.pl_name=='WISEP J121756.91+162640.2 A','pl_name']='WISEP J1217 b' near.loc[pd.isnull(near.st_rad),'st_rad']=near.loc[pd.isnull(near.st_rad),'st_mass'].values**0.85 near.to_csv('Nearest_planetary_systems_edited.csv') import pandas as pd import numpy as np near = pd.read_csv('Nearest_planetary_systems_edited.csv') #near=pd.DataFrame.from_csv('Nearest_planetary_systems_edited.csv') ###Output _____no_output_____ ###Markdown Colour indexes (in sns "muted") for the detection methods: ###Code disc_met_index={'Radial Velocity':0, 'Transit':3, 'Imaging':2} import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set_style('white') sns.set(rc={'axes.facecolor':'#f4ecdc', 'figure.facecolor':'#ffffff'}) #sns.set(rc={'axes.facecolor':'#ffffff', 'figure.facecolor':'#ffffff'}) sns.set_palette("Spectral", 10) ###Output _____no_output_____ ###Markdown Plotting: ###Code #sns.set(rc={'axes.facecolor':'#f4ecdc', 'figure.facecolor':'#ffffff'}) plotsize=8.5 vert_scale=0.05 def smascale(sma): return 0.85*sma**0.25 def radscale(rad): return 280*rad**0.66 fig,ax1 = plt.subplots(1,figsize=(15,15)) ax1.set_xlim(-1*plotsize,plotsize) ax1.set_ylim(-1*plotsize,plotsize) #bbox = ax1.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) #pix_per_pc_x, pix_per_pc_y = (bbox.width*fig.dpi)/(2*plotsize), (bbox.height*fig.dpi)/(2*plotsize) finalx=[];finaly=[];finals=[];cols=[];circles=[];names=[];initx=[];inity=[];finals_proj=[] #Looping over each star for starname in pd.unique(near.loc[(near.gal_eastwards**2+near.gal_inwards**2)<(plotsize*1.5)**2].pl_hostname): star=near.loc[near.pl_hostname==starname].iloc[0] dash_gs=vert_scale*np.hypot(star.gal_eastwards,star.gal_inwards) zmult=1+vert_scale*star.gal_upwards #print(int(np.round(dash_gs)), int(np.round(2*dash_gs))) ls=':' if star.gal_upwards<0 else '-' dashorder=-1 if star.gal_upwards<0 else 1 #Plotting dashed/solid lines from 2D position to 3D ax1.plot([star.gal_eastwards,star.gal_eastwards*zmult],[star.gal_inwards,star.gal_inwards*zmult], linestyle=ls,zorder=dashorder,alpha=0.5,color='#888888',linewidth=3) initx+=[star.gal_eastwards] inity+=[star.gal_inwards] finalx+=[star.gal_eastwards*zmult] finaly+=[star.gal_inwards*zmult] finals+=[radscale(star.st_rad)] finals_proj+=[finals[-1]/zmult**1.25] cols+=[sns.color_palette('Spectral',10)[star['colorbin']]] names+=[starname] npl=0 #Plotting circles for each planet for name,pl in near.loc[near.pl_hostname==starname].iterrows(): circles+=[plt.Circle((finalx[-1], finaly[-1]), 0.1*(npl+2), color=sns.color_palette('muted')[disc_met_index[pl['pl_discmethod']]], fill=False,alpha=0.75,zorder=dashorder*3,linewidth=1.5)] npl+=1 
#ax1.text(finalx[-1]+0.25*(npl),finaly[-1]+0.15*(npl),starname,fontsize=9, clip_on=True) hal='right' if initx[-1]<0.0 else 'left' val='bottom' if inity[-1]<0.0 or '28794' in starname else 'top' #Plotting the text for the starname ax1.text(initx[-1]+0.04,inity[-1]+0.04,starname,fontsize=9, clip_on=True, horizontalalignment=hal,verticalalignment=val,zorder=4) #print(pl['pl_orbsmax'],pix_per_pc_x,pix_per_pc_y) #print(finalx[-1]+(5+smascale(pl['pl_orbsmax'])/pix_per_pc_x),finaly[-1]+(5+smascale(pl['pl_orbsmax'])/pix_per_pc_y),starname) #ax1.text(finalx[-1]+(5+smascale(np.max(near.loc[near.pl_hostname==starname,'pl_orbsmax'])))/pix_per_pc_x,finaly[-1]+(5+smascale(np.max(star['pl_orbsmax'])))/pix_per_pc_y,starname,fontsize=9) #ax1.scatter(near.loc[near.gal_upwards<0.0,'gal_eastwards'].values,near.loc[near.gal_upwards<0.0,'gal_inwards'].values,zorder=2) #Plotting the stars and the star "shadows" on the 2D plane below=np.array(finals_proj)<np.array(finals) ax1.scatter(np.array(finalx)[below],np.array(finaly)[below],s=np.array(finals)[below],zorder=4,c=np.array(cols)[below]) ax1.scatter(np.array(finalx)[~below],np.array(finaly)[~below],s=np.array(finals)[~below],zorder=-4,c=np.array(cols)[~below]) ax1.scatter(np.array(initx)[below],np.array(inity)[below],s=np.array(finals_proj)[below],c='#BBBBBB',facecolor=None,zorder=-3,alpha=0.6,marker='+') ax1.scatter(np.array(initx)[~below],np.array(inity)[~below],s=np.array(finals_proj)[~below],c='#BBBBBB',facecolor=None,zorder=3,alpha=0.6,marker='+') #Plotting the Sun! for npl,sma in enumerate([0.387,0.723,1.000,1.524,5.20,9.54,19.19,30.1]): circles+=[plt.Circle((0,0), 0.1*(npl+2), color=sns.color_palette('muted')[1], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] ax1.text(0.05,0.05,'Sun',fontsize=9, clip_on=True,horizontalalignment='left',verticalalignment='bottom',zorder=4) ax1.scatter(0.0,0.0,s=radscale(1.0),zorder=2,c=sns.color_palette('Spectral',10)[4]) #Plotting 5, 10 and 15 parsec circles: circles+=[plt.Circle((0, 0), 5, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] circles+=[plt.Circle((0, 0), 10, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] circles+=[plt.Circle((0, 0), 15, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] ax1.set_yticklabels([]) ax1.set_xticklabels([]) ax1.grid(False) for circ in circles: ax1.add_artist(circ) #Adding labels to those 1/15/15pc circles: ax1.text(5/np.sqrt(2),5/np.sqrt(2),'5pc',rotation=-45,color=sns.color_palette('muted')[4],zorder=0) ax1.text(10/np.sqrt(2),10/np.sqrt(2),'10pc',rotation=-45,color=sns.color_palette('muted')[4],zorder=0) ax1.text(-5/np.sqrt(2),-5/np.sqrt(2),'5pc',rotation=-45,color=sns.color_palette('muted')[4],zorder=0) ax1.text(10*np.cos(228/180*np.pi),10*np.sin(228/180*np.pi),'10pc',rotation=-42,color=sns.color_palette('muted')[4],zorder=0) #Adding Signature ax1.text(-11.25/np.sqrt(2),-11.65/np.sqrt(2),'Hugh Osborn\n @exohugh',color=sns.color_palette('muted')[4],zorder=-6,fontsize=18) #Adding arrow to galactic centre: plt.arrow(0,7.95,0.0,0.33,width=0.02,color=sns.color_palette('muted')[4],zorder=-6) ax1.text(-0.05,8.0,'Galactic\nCentre',color=sns.color_palette('muted')[4],zorder=-6,fontsize=9,horizontalalignment='right') #Doing Legend myself: ax1.plot([-7.65,-7.65*1.06],[-5.93,-5.93*1.06], linestyle='-',zorder=1,alpha=0.5,color='#888888',linewidth=3) ax1.scatter(-7.65,-5.93,s=60,marker='+',zorder=0,c='#BBBBBB',alpha=0.6,) 
ax1.scatter(-7.65*1.06,-5.93*1.06,s=60/1.2,c=sns.color_palette('Spectral',10)[2],zorder=2) ax1.text(-7.65,-5.93,"Above\ngalactic\nplane",fontsize=9, clip_on=True,horizontalalignment='left',verticalalignment='top',zorder=4) ax1.plot([-6.5,-6.5*0.94],[-6.45,-6.45*0.94], linestyle=':',zorder=1,alpha=0.5,color='#888888',linewidth=3) ax1.scatter(-6.5,-6.45,s=90,marker='+',zorder=0,c='#BBBBBB',alpha=0.6,) ax1.scatter(-6.5*0.94,-6.45*0.94,s=90*1.2,c=sns.color_palette('Spectral',10)[5],zorder=-2) ax1.text(-6.5,-6.45,"Below\ngalactic\nplane",fontsize=9, clip_on=True,horizontalalignment='right',verticalalignment='bottom',zorder=4) #Doing "planet detection method" legend: legendcircles=[] ax1.scatter(-7.85,-7,s=120,c=sns.color_palette('Spectral',10)[6],zorder=0) for npl in range(2): legendcircles+=[plt.Circle((-7.85,-7), 0.1*(npl+2), color=sns.color_palette('muted')[3], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] ax1.text(-7.85,-7,'Transits',fontsize=9, clip_on=True,horizontalalignment='right',verticalalignment='bottom',zorder=4) ax1.scatter(-7.05,-7,s=70,c=sns.color_palette('Spectral',10)[3],zorder=0) for npl in range(3): legendcircles+=[plt.Circle((-7.05,-7), 0.1*(npl+2), color=sns.color_palette('muted')[0], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] ax1.text(-7.05,-7,'RVs',fontsize=9, clip_on=True,horizontalalignment='left',verticalalignment='bottom',zorder=4) ax1.scatter(-6.4,-7,s=70,c=sns.color_palette('Spectral',10)[0],zorder=0) for npl in range(1): legendcircles+=[plt.Circle((-6.4,-7), 0.1*(npl+2), color=sns.color_palette('muted')[2], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] ax1.text(-6.4,-7,'Imaging',fontsize=9, clip_on=True,horizontalalignment='left',verticalalignment='bottom',zorder=4) for circ in legendcircles: ax1.add_artist(circ) #Removing whitespace fig.tight_layout() #Saving plt.savefig("AllNearbyPlanets_small.png",dpi=200) plt.savefig("AllNearbyPlanets.png",dpi=600) ###Output 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. 
Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. ###Markdown Business card stuff:I'm gonna maybe put this design on a business card with "above" planets on one side and "below" planets on the other... ###Code sns.set(rc={'axes.facecolor':'#f4ecdc', 'figure.facecolor':'#ffffff'}) fig, (ax1,ax2) = plt.subplots(2,figsize=(8.5*1.2,2*5.5*1.2)) ax1.set_yticklabels([]) ax1.set_xticklabels([]) ax1.grid(False) ax2.set_yticklabels([]) ax2.set_xticklabels([]) ax2.grid(False) fig.tight_layout() plt.savefig("BusinessCardColourBG_55x85.png",dpi=500) sns.set(rc={'axes.facecolor':'#ffffff', 'figure.facecolor':'#ffffff'}) plotsize=10.3 vert_scale=0.05 def smascale(sma): return 0.85*sma**0.25 def radscale(rad): return 280*rad**0.66 fig, (ax1,ax2) = plt.subplots(2,figsize=(8.5*1.2,2*5.5*1.2)) ax1.set_xlim(-1*plotsize,plotsize) ax1.set_ylim(-1*plotsize*55/85.0,plotsize*55/85.0) bbox = ax1.get_window_extent().transformed(fig.dpi_scale_trans.inverted()) pix_per_pc_x, pix_per_pc_y = (bbox.width*fig.dpi)/(2*plotsize), (bbox.height*fig.dpi)/(2*plotsize/np.sqrt(2)) finalx=[];finaly=[];finals=[];cols=[];circles=[];names=[] for starname in pd.unique(near.loc[(near.gal_upwards<0.0)*(near.st_dist<plotsize*1.25)].pl_hostname): star=near.loc[near.pl_hostname==starname].iloc[0] dash_gs=vert_scale*np.hypot(star.gal_eastwards,star.gal_inwards) zmult=1+vert_scale*abs(star.gal_upwards) #print(int(np.round(dash_gs)), int(np.round(2*dash_gs))) ax1.plot([star.gal_eastwards,star.gal_eastwards*zmult],[star.gal_inwards,star.gal_inwards*zmult], linestyle=':', dashes=(0.25+int(np.round(dash_gs)), 0.25+int(np.round(2*dash_gs))), zorder=1,alpha=0.75,color='#888888',linewidth=3) finalx+=[star.gal_eastwards*zmult] finaly+=[star.gal_inwards*zmult] finals+=[radscale(star.st_rad)] cols+=[sns.color_palette('Spectral',10)[star['colorbin']]] names+=[starname] npl=0 for name,pl in near.loc[near.pl_hostname==starname].iterrows(): circles+=[plt.Circle((finalx[-1], finaly[-1]), 0.2*(npl+2), color=sns.color_palette('muted')[disc_met_index[pl['pl_discmethod']]], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] npl+=1 #ax1.text(finalx[-1]+0.25*(npl),finaly[-1]+0.15*(npl),starname,fontsize=9) #print(pl['pl_orbsmax'],pix_per_pc_x,pix_per_pc_y) #print(finalx[-1]+(5+smascale(pl['pl_orbsmax'])/pix_per_pc_x),finaly[-1]+(5+smascale(pl['pl_orbsmax'])/pix_per_pc_y),starname) #ax1.text(finalx[-1]+(5+smascale(np.max(near.loc[near.pl_hostname==starname,'pl_orbsmax'])))/pix_per_pc_x,finaly[-1]+(5+smascale(np.max(star['pl_orbsmax'])))/pix_per_pc_y,starname,fontsize=9) #ax1.scatter(near.loc[near.gal_upwards<0.0,'gal_eastwards'].values,near.loc[near.gal_upwards<0.0,'gal_inwards'].values,zorder=2) ax1.scatter(finalx,finaly,s=finals,zorder=2,c=cols) #Plotting the Sun! 
for npl,sma in enumerate([0.387,0.723,1.000,1.524,5.20,9.54,19.19,30.1,39.5]): circles+=[plt.Circle((0,0), 0.2*(npl+2), color=sns.color_palette('muted')[1], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] ax1.scatter(0.0,0.0,s=radscale(1.0),zorder=2,c=sns.color_palette('Spectral',10)[4]) circles+=[plt.Circle((0, 0), 5, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] circles+=[plt.Circle((0, 0), 10, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] circles+=[plt.Circle((0, 0), 15, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] ax1.set_yticklabels([]) ax1.set_xticklabels([]) ax1.grid(False) for circ in circles: ax1.add_artist(circ) finalx2=[];finaly2=[];finals2=[];cols2=[];circles2=[];names2=[] for starname in pd.unique(near.loc[(near.gal_upwards>=0.0)*(near.st_dist<plotsize*1.25)].pl_hostname): star=near.loc[near.pl_hostname==starname].iloc[0] dash_gs=vert_scale*np.hypot(star.gal_eastwards,star.gal_inwards) zmult=1+vert_scale*abs(star.gal_upwards) #print(int(np.round(dash_gs)), int(np.round(2*dash_gs))) ax2.plot([star.gal_eastwards,star.gal_eastwards*zmult],[-1*star.gal_inwards,-1*star.gal_inwards*zmult], linestyle=':', dashes=(0.25+int(np.round(dash_gs)), 0.25+int(np.round(2*dash_gs))), zorder=1,alpha=0.75,color='#888888',linewidth=3) finalx2+=[star.gal_eastwards*zmult] finaly2+=[-1*star.gal_inwards*zmult] finals2+=[radscale(star.st_rad)] cols2+=[sns.color_palette('Spectral',10)[star['colorbin']]] names2+=[starname] npl=0 for name,pl in near.loc[near.pl_hostname==starname].iterrows(): circles2+=[plt.Circle((finalx2[-1], finaly2[-1]), 0.2*(npl+2), color=sns.color_palette('muted')[disc_met_index[pl['pl_discmethod']]], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] npl+=1 #print(pl.pl_name,star.gal_eastwards*zmult,star.gal_inwards*zmult,star.gal_upwards,smascale(pl['pl_orbsmax']),sns.color_palette('Spectral',10)[star['colorbin']],sns.color_palette('muted')[disc_met_index[pl['pl_discmethod']]]) #ax2.text(finalx2[-1]+0.25*(npl),finaly2[-1]+0.15*(npl),starname,fontsize=9) #ax2.text(finalx2[-1]+(5+smascale(np.max(near.loc[near.pl_hostname==starname,'pl_orbsmax'])))/pix_per_pc_x,finaly2[-1]+(5+smascale(np.max(star['pl_orbsmax'])))/pix_per_pc_y,starname,fontsize=9) #ax2.scatter(near.loc[near.gal_upwards<0.0,'gal_eastwards'].values,near.loc[near.gal_upwards<0.0,'gal_inwards'].values,zorder=2) ax2.scatter(finalx2,finaly2,s=finals2,zorder=2,c=cols2) ax2.scatter(finalx,-1*np.array(finaly),s=finals,c='#BBBBBB',facecolors='none',zorder=-1,alpha=0.3) ax1.scatter(finalx2,-1*np.array(finaly2),s=finals2,c='#BBBBBB',facecolors='none',zorder=-1,alpha=0.3) print(finals) #Plotting the Sun! 
for npl,sma in enumerate([0.387,0.723,1.000,1.524,5.20,9.54,19.19,30.1,39.5]): circles2+=[plt.Circle((0,0), 0.2*(npl+2), color=sns.color_palette('muted')[1], fill=False,alpha=0.75,zorder=0,linewidth=1.5)] ax2.scatter(0.0,0.0,s=radscale(1.0),zorder=2,c=sns.color_palette('Spectral',10)[4]) circles2+=[plt.Circle((0, 0), 5, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] circles2+=[plt.Circle((0, 0), 10, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] circles2+=[plt.Circle((0, 0), 15, color=sns.color_palette('muted')[4],fill=False,alpha=0.35,zorder=0,linewidth=1.5)] ax2.set_yticklabels([]) ax2.set_xticklabels([]) ax2.grid(False) for circ in circles2: ax2.add_artist(circ) ax2.set_xlim(-1*plotsize,plotsize) ax2.set_ylim(-1*plotsize*55/85.0,plotsize*55/85.0) fig.tight_layout() plt.savefig("BusinessCardDesigns_55x85.png",dpi=500) ###Output C:\Users\Home User\Anaconda3\lib\site-packages\pandas\core\computation\expressions.py:178: UserWarning: evaluating in Python space because the '*' operator is not supported by numexpr for the bool dtype, use '&' instead f"evaluating in Python space because the {repr(op_str)} " 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. ###Markdown Ignore everything from here on out - it's just the wasteland: ###Code ax1.arrow? near.loc[near.pl_hostname=='YZ Cet'] near.loc[near.pl_hostname=='Proxima Cen'] finalx[2],finaly[2] plt.scatter(finalx2,finaly2,s=finals2,zorder=2,c=cols2) plt.scatter(finalx,finaly,s=finals,zorder=2,c=cols) cols near.loc[near.pl_hostname=="GJ 1214"].iloc[0] near.loc[pd.isnull(near.pl_orbsmax),'pl_name'] near.columns 0.210*1.8266 table.dec_str ###Output _____no_output_____
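A side note on the plotting approach used in the cells above: each star's 2-D galactic position (gal_eastwards, gal_inwards) is scaled by `zmult = 1 + vert_scale*gal_upwards`, and a grey line is drawn from the unscaled to the scaled point, which fakes a 3-D height above or below the galactic plane. A stripped-down sketch of just that idea, with made-up coordinates (not from the dataset):

```python
import matplotlib.pyplot as plt

vert_scale = 0.05
# made-up (east, north, up) positions in parsecs, purely for illustration
stars = [(3.0, 4.0, 2.0), (-5.0, 2.0, -3.0)]

fig, ax = plt.subplots()
for east, north, up in stars:
    zmult = 1 + vert_scale * up                       # >1 above the plane, <1 below
    ax.plot([east, east * zmult], [north, north * zmult], color='0.5')  # the "stalk"
    ax.scatter(east * zmult, north * zmult)           # star at its projected position
plt.show()
```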
content/03/02g_commontasks.ipynb
###Markdown Common tasksThis page is kind of long. (It's got a lot of useful info!) Use the page's table of contents to the right to jump to what you're looking for. Reshaping dataIn the [shape of data](02b_pandasVocab) page, I explained the concept of wide vs. tall data with this example: ###Code import pandas as pd df = (pd.Series({ ('Ford',2000):10, ('Ford',2001):12, ('Ford',2002):14, ('Ford',2003):16, ('GM',2000):11, ('GM',2001):13, ('GM',2002):13, ('GM',2003):15}) .to_frame() .rename(columns={0:'Sales'}) .rename_axis(['Firm','Year']) .reset_index() ) print("Tall:") display(df) ###Output Tall: ###Markdown ```{note}To reshape dataframes, you have to work with index and column names. ```So before we use `stack` and `unstack` here, put the firm and year into the index. ###Code tall = df.set_index(['Firm','Year']) ###Output _____no_output_____ ###Markdown To convert a tall dataframe to wide: `df.unstack()`.If your index has multiple levels, the level parameter is used to pick which to unstack. "0" is the innermost level of the index. ###Code print("\n\nUnstack (make it shorter+wider) on level 0/Firm:\n") display(tall.unstack(level=0)) print("\n\nUnstack (make it shorter+wider) on level 1/Year:\n") display(tall.unstack(level=1)) ###Output Unstack (make it shorter+wider) on level 0/Firm: ###Markdown To convert a wide dataframe to tall/long: `df.stack()`.```{tip}Pay attention after reshaping to the order of your index variables and how they are sorted. ``` ###Code # save the wide df above to this name for subseq examples wide_year = tall.unstack(level=0) print("\n\nStack it back (make it tall): wide_year.stack()\n") display(wide_year.stack()) print("\n\nYear-then-firm doesn't make much sense.\nReorder to firm-year: wide_year.stack().swaplevel()") display(wide_year.stack().swaplevel()) print("\n\nYear-then-firm sorting make much sense.\nSort to firm-year: wide_year.stack().swaplevel().sort_index()") display(wide_year.stack().swaplevel().sort_index()) ###Output Stack it back (make it tall): wide_year.stack() ###Markdown **Beautiful!** Lambda (in `assign` or after `groupby`)You will see this inside pandas chains a lot: `lambda x: someFunc(x)`, e.g.:- `.assign(lev = lambda x: (x['dltt']+x['dlc'])/x['at'] )`- `.groupby('industry').assign(avglev = lambda x: x['lev'].mean() )`What is that "lambda" and why is it there? Well, when you get to the "assign" step, what you would do to reference a variable is type the dataframe name and the variable name. _But often, the dataframe object doesn't exist in memory yet and so it has no name._ In the example above, `[df].groupby('industry').assign(avglev = lambda x: x['lev'].mean() )`, pandas splits the dataframe into groups, within each group applies a function (here: the mean), and then returns a new dataframe with one observation for each group (the average leverage for the industry). Visually, this **split-apply-combine**[^ref] process looks like this:![](https://jakevdp.github.io/PythonDataScienceHandbook/figures/03.08-split-apply-combine.png)[^ref]: (This figure is yet another resource I'm borrowing from the awesome [PythonDataScienceHandbook](https://jakevdp.github.io/PythonDataScienceHandbook). So, the `.assign()` portion is working on these tiny pieces of the dataframe. Those pieces are dataframe objects that don't have names! **So how do you refer to an unnamed dataframe object?**Answer: Lambda functions. When you type `.assign(newVar = lambda x: someFunc(x))`, `x` is the object ("some df object") that assign is working on. 
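To make this concrete, here is a small self-contained sketch that reuses the column names from the bullet points above (`dltt`, `dlc`, `at`, `lev`); the toy numbers are made up, and the grouped version uses `apply`, which is the standard way to run a lambda on each group:

```python
import pandas as pd

# toy data -- numbers invented purely for illustration
df = pd.DataFrame({'industry': ['steel', 'steel', 'tech', 'tech'],
                   'dltt': [20, 30, 5, 10],      # long-term debt
                   'dlc':  [5, 10, 5, 5],        # debt in current liabilities
                   'at':   [100, 120, 80, 90]})  # total assets

# inside .assign(): x is the (unnamed, in-progress) dataframe at that point of the chain
df = df.assign(lev=lambda x: (x['dltt'] + x['dlc']) / x['at'])

# after .groupby(): x is each group's own little dataframe
avglev = df.groupby('industry').apply(lambda x: x['lev'].mean())
print(avglev)
```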
Ta da!```python common syntax within pandas.assign(<newVariableName> = lambda <tempname>: <someFunction involving tempname>) often, tempname is just "x" for short.assign(<newVariableName> = lambda x: <someFunction involving x>) ``````{note}It turns out that lambda functions are very useful in python programming, and not just within pandas. But pandas is where we will use them most in this class.``` `.transform()` after groupbySometimes you get a statistic for a group, but you want that statistic in every single row of your original dataset.But `groupby` creates a new dataframe that is smaller, with only one row per group.```{admonition}:class: tipUse `.transform()` after `groupby` to "cast" those statistics back to the original dataframe ``` ###Code import pandas as pd import numpy as np df = pd.DataFrame({'key':["A",'B','C',"A",'B','C'], 'data':np.arange(1,7)}).set_index('key').sort_index() display(df) # the input # groupby().sum() shrinks the dataset display(df.groupby(level='key')['data'].sum() .to_frame() ) # just added this line bc df prints prettier than series # groupby().transform(sum) does NOT shrink the dataset df.groupby(level='key').transform(sum) ###Output _____no_output_____ ###Markdown One last trick: Let's add that new variable to the original dataset! ###Code # option 1: create the var df['groupsum'] = df.groupby(level='key').transform(sum) # option 2: create the var with assign (can be used inside chains) df = df.assign(groupsum = df.groupby(level='key')['data'].transform(sum)) display(df) ###Output _____no_output_____ ###Markdown `.pipe()`One problem with chains on dataframes is that you can only use methods that work on the object (a dataframe) that is getting chained. So, for example, you've formatted a dataframe to plot. You can't directly add a seaborn function to the chain: _Seaborn functions are methods of the package seaborn, not the dataframe._ (It's `sns.lmplot`, not `df.lmplot`.) `.pipe()` allows you to hand a dataframe to functions that don't work directly on dataframes. ````{admonition} The syntax of .pipe()```pythondf.pipe(<theFunctionToCall>, <'if the first parameter of the outside function isn't the df, the name of the parameter that is expecting the dataframe'>, <any other arguments for the function>) ```Note that the object after the pipe command is run might not be a dataframe anymore! It's whatever object the piped function produces!```` Example 1[From one of the pandas devs:](https://tomaugspurger.github.io/method-chaining)> ```python> jack_jill = pd.DataFrame()> (jack_jill.pipe(went_up, 'hill')> .pipe(fetch, 'water')> .pipe(fell_down, 'jack')> .pipe(broke, 'crown')> .pipe(tumble_after, 'jill')> )> ```> > This really is just right-to-left function execution. The first argument to pipe, a callable, is called with the DataFrame on the left as its first argument, and any additional arguments you specify.> > I hope the analogy to data analysis code is clear. Code is read more often than it is written. When you or your coworkers or research partners have to go back in two months to update your script, having the story of raw data to results be told as clearly as possible will save you time. 
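The functions in that quoted example (`went_up`, `fetch`, ...) are only illustrative, so here is a tiny runnable sketch of the two calling forms; the function and column names are invented for illustration:

```python
import pandas as pd

toy = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# form 1: the piped function takes the dataframe as its FIRST argument
def add_total(data, col_name='total'):
    return data.assign(**{col_name: data.sum(axis=1)})

out = toy.pipe(add_total, col_name='row_sum')      # same as add_total(toy, col_name='row_sum')

# form 2: the dataframe is NOT the first argument -> pass a (function, 'keyword') tuple
def describe_col(col, data):
    return data[col].describe()

summary = toy.pipe((describe_col, 'data'), 'a')    # same as describe_col('a', data=toy)
print(out, summary, sep='\n\n')
```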
Example 2[From Steven Morse:](https://stmorse.github.io/journal/tidyverse-style-pandas.html)> ```python> (sns.load_dataset('diamonds')> .query('cut in ["Ideal", "Good"] & \> clarity in ["IF", "SI2"] & \> carat < 3')> .pipe((sns.FacetGrid, 'data'),> row='cut', col='clarity', hue='color',> hue_order=list('DEFGHIJ'),> height=6,> legend_out=True)> .map(sns.scatterplot, 'carat', 'price', alpha=0.8)> .add_legend())> ``` Printing inside of chains```{tip}One thing about chains, is that sometimes it's hard to know what's going on within them without just commenting out all the code and running it bit-by-bit. This function will let you print messages from inside the chain, by exploiting the `.pipe()` function we just covered!```![](https://media.giphy.com/media/Buy7YdhkyHBCM/source.gif)Copy this into your code: ###Code def csnap(df, fn=lambda x: x.shape, msg=None): """ Custom Help function to print things in method chaining. Will also print a message, which helps if you're printing a bunch of these, so that you know which csnap print happens at which point. Returns back the df to further use in chaining. Usage examples - within a chain of methods: df.pipe(csnap) df.pipe(csnap, lambda x: <do stuff>) df.pipe(csnap, msg="Shape here") df.pipe(csnap, lambda x: x.sample(10), msg="10 random obs") """ if msg: print(msg) display(fn(df)) return df ###Output _____no_output_____ ###Markdown An example of this in use: ###Code (df .pipe(csnap, msg="Shape before describe") .describe()['data'] # get the distribution stats of a variable (I'm just doing something to show csnap off) .pipe(csnap, msg="Shape after describe and pick one var") # see, it prints a message from within the chain! .to_frame() .assign(ones = 1) .pipe(csnap, lambda x: x.sample(2), msg="Random sample of df at point #3") # see, it prints a message from within the chain! .assign(twos=2,threes=3) ) ###Output Shape before describe ###Markdown Common tasksThis page is kind of long. (It's got a lot of useful info!) Use the page's table of contents to the right to jump to what you're looking for. Reshaping dataIn the [shape of data](02b_pandasVocab) page, I explained the concept of wide vs. tall data with this example: ###Code import pandas as pd df = (pd.Series({ ('Ford',2000):10, ('Ford',2001):12, ('Ford',2002):14, ('Ford',2003):16, ('GM',2000):11, ('GM',2001):13, ('GM',2002):13, ('GM',2003):15}) .to_frame() .rename(columns={0:'Sales'}) .rename_axis(['Firm','Year']) .reset_index() ) print("Tall:") display(df) ###Output Tall: ###Markdown ```{note}To reshape dataframes, you have to work with index and column names. ```So before we use `stack` and `unstack` here, put the firm and year into the index. ###Code tall = df.set_index(['Firm','Year']) ###Output _____no_output_____ ###Markdown To convert a tall dataframe to wide: `df.unstack()`.If your index has multiple levels, the level parameter is used to pick which to unstack. "0" is the innermost level of the index. ###Code print("\n\nUnstack (make it shorter+wider) on level 0/Firm:\n") display(tall.unstack(level=0)) print("\n\nUnstack (make it shorter+wider) on level 1/Year:\n") display(tall.unstack(level=1)) ###Output Unstack (make it shorter+wider) on level 0/Firm: ###Markdown To convert a wide dataframe to tall/long: `df.stack()`.```{tip}Pay attention after reshaping to the order of your index variables and how they are sorted. 
``` ###Code # save the wide df above to this name for subseq examples wide_year = tall.unstack(level=0) print("\n\nStack it back (make it tall): wide_year.stack()\n") display(wide_year.stack()) print("\n\nYear-then-firm doesn't make much sense.\nReorder to firm-year: wide_year.stack().swaplevel()") display(wide_year.stack().swaplevel()) print("\n\nYear-then-firm sorting make much sense.\nSort to firm-year: wide_year.stack().swaplevel().sort_index()") display(wide_year.stack().swaplevel().sort_index()) ###Output Stack it back (make it tall): wide_year.stack() ###Markdown **Beautiful!** Lambda (in `assign` or after `groupby`)You will see this inside pandas chains a lot: `lambda x: someFunc(x)`, e.g.:- `.assign(lev = lambda x: (x['dltt']+x['dlc'])/x['at'] )`- `.groupby('industry').assign(avglev = lambda x: x['lev'].mean() )`What is that "lambda" and why is it there? Well, when you get to the "assign" step, what you would do to reference a variable is type the dataframe name and the variable name. _But often, the dataframe object doesn't exist in memory yet and so it has no name._ In the example above, `[df].groupby('industry').assign(avglev = lambda x: x['lev'].mean() )`, pandas splits the dataframe into groups, within each group applies a function (here: the mean), and then returns a new dataframe with one observation for each group (the average leverage for the industry). Visually, this **split-apply-combine**[^ref] process looks like this:![](https://jakevdp.github.io/PythonDataScienceHandbook/figures/03.08-split-apply-combine.png)[^ref]: (This figure is yet another resource I'm borrowing from the awesome [PythonDataScienceHandbook](https://jakevdp.github.io/PythonDataScienceHandbook). So, the `.assign()` portion is working on these tiny pieces of the dataframe. Those pieces are dataframe objects that don't have names! **So how do you refer to an unnamed dataframe object?**Answer: Lambda functions. When you type `.assign(newVar = lambda x: someFunc(x))`, `x` is the object ("some df object") that assign is working on. Ta da!```python common syntax within pandas.assign( = lambda : ) often, tempname is just "x" for short.assign( = lambda x: ) ``````{note}It turns out that lambda functions are very useful in python programming, and not just within pandas. But pandas is where we will use them most in this class.``` `.transform()` after groupbySometimes you get a statistic for a group, but you want that statistic in every single row of your original dataset.But `groupby` creates a new dataframe that is smaller, with only one row per row.```{admonition}:class: tipUse `.transform()` after `groupby` to "cast" those statistics back to the original ``` ###Code import pandas as pd import numpy as np df = pd.DataFrame({'key':["A",'B','C',"A",'B','C'], 'data':np.arange(1,7)}).set_index('key').sort_index() display(df) # the input # groupby().sum() shrinks the dataset display(df.groupby(level='key')['data'].sum() .to_frame() ) # just added this line bc df prints prettier than series # groupby().transform(sum) does NOT shrink the dataset df.groupby(level='key').transform(sum) ###Output _____no_output_____ ###Markdown One last trick: Let's add that new variable to the original dataset! 
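The next code cell shows two ways to do this; as an aside (not from the original page), a third equivalent route is to compute the group statistic separately and join it back on the index:

```python
# sketch only -- assumes the same toy df (indexed by 'key') from the cell above
group_sums = df.groupby(level='key')['data'].sum().rename('groupsum_joined')
df_joined = df.join(group_sums)   # broadcasts each group's sum to every row of that group
```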
###Code # option 1: create the var df['groupsum'] = df.groupby(level='key').transform(sum) # option 2: create the var with assign (can be used inside chains) df = df.assign(groupsum = df.groupby(level='key')['data'].transform(sum)) display(df) ###Output _____no_output_____ ###Markdown `.pipe()`One problem with chains on dataframes is that you can only use methods that work on the object (a dataframe) that is getting chained. So for example, you've formatted dataframe to plot. You can't directly add a seaborn function to the chain: _Seaborn functions are methods of the package seaborn, not the dataframe._ (It's `sns.lmplot`, not `df.lmplot`.) `.pipe()` allows you to hand a dataframe to functions that don't work directly on dataframes. ````{admonition} The syntax of .pipe()```pythondf.pipe(, <'if the first parameter of the outside function isnt the df, ' 'the name of the parameter that is expecting the dataframe'>, ```Note that the object after the pipe command is run might not be a dataframe anymore! It's whatever object the piped function produces!```` Example 1[From one of the pandas devs:](https://tomaugspurger.github.io/method-chaining)> ```python> jack_jill = pd.DataFrame()> (jack_jill.pipe(went_up, 'hill')> .pipe(fetch, 'water')> .pipe(fell_down, 'jack')> .pipe(broke, 'crown')> .pipe(tumble_after, 'jill')> )> ```> > This really is just right-to-left function execution. The first argument to pipe, a callable, is called with the DataFrame on the left as its first argument, and any additional arguments you specify.> > I hope the analogy to data analysis code is clear. Code is read more often than it is written. When you or your coworkers or research partners have to go back in two months to update your script, having the story of raw data to results be told as clearly as possible will save you time. Example 2[From Steven Morse:](https://stmorse.github.io/journal/tidyverse-style-pandas.html)> ```python> (sns.load_dataset('diamonds')> .query('cut in ["Ideal", "Good"] & \> clarity in ["IF", "SI2"] & \> carat < 3')> .pipe((sns.FacetGrid, 'data'),> row='cut', col='clarity', hue='color',> hue_order=list('DEFGHIJ'),> height=6,> legend_out=True)> .map(sns.scatterplot, 'carat', 'price', alpha=0.8)> .add_legend())> ``` Printing inside of chains```{tip}One thing about chains, is that sometimes it's hard to know what's going on within them without just commenting out all the code and running it bit-by-bit. This function will let you print messages from inside the chain, by exploiting the `.pipe()` function we just covered!```![](https://media.giphy.com/media/Buy7YdhkyHBCM/source.gif)Copy this into your code: ###Code def csnap(df, fn=lambda x: x.shape, msg=None): """ Custom Help function to print things in method chaining. Will also print a message, which helps if you're printing a bunch of these, so that you know which csnap print happens at which point. Returns back the df to further use in chaining. Usage examples - within a chain of methods: df.pipe(csnap) df.pipe(csnap, lambda x: <do stuff>) df.pipe(csnap, msg="Shape here") df.pipe(csnap, lambda x: x.sample(10), msg="10 random obs") """ if msg: print(msg) display(fn(df)) return df ###Output _____no_output_____ ###Markdown An example of this in use: ###Code (df .pipe(csnap, msg="Shape before describe") .describe()['data'] # get the distribution stats of a variable (I'm just doing something to show csnap off) .pipe(csnap, msg="Shape after describe and pick one var") # see, it prints a message from within the chain! 
.to_frame() .assign(ones = 1) .pipe(csnap, lambda x: x.sample(2), msg="Random sample of df at point #3") # see, it prints a message from within the chain! .assign(twos=2,threes=3) ) ###Output Shape before describe ###Markdown Common tasks```{important}Yes, this page is kind of long. But that's because it has a lot of useful info!Use the page's table of contents to the right to jump to what you're looking for. ``` Reshaping dataIn the [shape of data](02b_pandasVocab) page, I explained the concept of wide vs. tall data with this example: ###Code import pandas as pd df = (pd.Series({ ('Ford',2000):10, ('Ford',2001):12, ('Ford',2002):14, ('Ford',2003):16, ('GM',2000):11, ('GM',2001):13, ('GM',2002):13, ('GM',2003):15}) .to_frame() .rename(columns={0:'Sales'}) .rename_axis(['Firm','Year']) .reset_index() ) print("Tall:") display(df) ###Output Tall: ###Markdown ```{note}To reshape dataframes, you have to work with index and column names. ```So before we use `stack` and `unstack` here, put the firm and year into the index. ###Code tall = df.set_index(['Firm','Year']) ###Output _____no_output_____ ###Markdown To convert a tall dataframe to wide: `df.unstack()`.If your index has multiple levels, the level parameter is used to pick which to unstack. "0" is the innermost level of the index. ###Code print("\n\nUnstack (make it shorter+wider) on level 0/Firm:\n") display(tall.unstack(level=0)) print("\n\nUnstack (make it shorter+wider) on level 1/Year:\n") display(tall.unstack(level=1)) ###Output Unstack (make it shorter+wider) on level 0/Firm: ###Markdown To convert a wide dataframe to tall/long: `df.stack()`.```{tip}Pay attention after reshaping to the order of your index variables and how they are sorted. ``` ###Code # save the wide df above to this name for subseq examples wide_year = tall.unstack(level=0) print("\n\nStack it back (make it tall): wide_year.stack()\n") display(wide_year.stack()) print("\n\nYear-then-firm doesn't make much sense.\nReorder to firm-year: wide_year.stack().swaplevel()") display(wide_year.stack().swaplevel()) print("\n\nYear-then-firm sorting make much sense.\nSort to firm-year: wide_year.stack().swaplevel().sort_index()") display(wide_year.stack().swaplevel().sort_index()) ###Output Stack it back (make it tall): wide_year.stack() ###Markdown **Beautiful!** Lambda (in `assign` or after `groupby`)You will see this inside pandas chains a lot: `lambda x: someFunc(x)`, e.g.:- `.assign(lev = lambda x: (x['dltt']+x['dlc'])/x['at'] )`- `.groupby('industry').assign(avglev = lambda x: x['lev'].mean() )`Q1: What is that "lambda"?A1: A lambda function is an anonymous function that is usually one line and usually defined without a name. You write it like this:```pylambda : ```Here, you can see how the lambda function takes inputs and creates output the same way a function does: ###Code dumb_prog = lambda a: a + 10 # I added "dumb_prog =" to name the lambda function and use it dumb_prog(5) # we could define a fnc to do the exact same thing def dumb_prog(a): return a + 10 dumb_prog(5) ###Output _____no_output_____ ###Markdown Q2: Why is that lambda there? A2: We use lambdas when we need a function for a short period of time and when the name of the function doesn't matter. <!-- Let's go back to this bit of code: `.groupby('industry').assign(avglev = lambda x: x['lev'].mean() )`When you get to the "assign" step, what you would do to reference a variable is type the dataframe's name and the variable's name. _But `df.groupby('industry')` is a different dataframe than `df`! 
And when the code starts to execute the `.assign` method, the `df.groupby('industry')` doesn't exist in memory yet and so it has no name!_ --> In the example above, `[df].groupby('industry').assign(avglev = lambda x: x['lev'].mean() )`, 1. groupby **splits** the dataframe into groups, 2. then, within each group, it **applies** a function (here: the mean), 3. and then returns a new dataframe with one observation for each group (the average leverage for the industry). Visually, this **split-apply-combine**[^ref] process looks like this:![](https://jakevdp.github.io/PythonDataScienceHandbook/figures/03.08-split-apply-combine.png)[^ref]: (This figure is yet another resource I'm borrowing from the awesome [PythonDataScienceHandbook](https://jakevdp.github.io/PythonDataScienceHandbook). But notice! The `.assign()` portion is working on these tiny split up pieces of the dataframe created by `df.groupby('industry')`. Those pieces are dataframe objects that don't have names! **So lambda functions let us refer to an unnamed dataframe objects!** When you type `.assign(newVar = lambda x: someFunc(x))`, `x` is the object ("some df object") that assign is working on. Ta da!```python common syntax within pandas.assign( = lambda : ) often, tempname is just "x" for short.assign( = lambda x: ) example:.assign(lev = lambda x: (x['dltt']+x['dlc'])/x['at'] )``````{note}It turns out that lambda functions are very useful in python programming, and not just within pandas. For example, some functions take functions as inputs, like [csnap()](printing-inside-of-chains), `map()`, and `filter()`, and lambda functions lets us give them custom functions quickly. But pandas is where we will use lambda functions most in this class.``` `.transform()` after groupbySometimes you get a statistic for a group, but you want that statistic in every single row of your original dataset.But `groupby` creates a new dataframe that is smaller, with only one row per row.```{admonition}:class: tipUse `.transform()` after `groupby` to "cast" those statistics back to the original ``` ###Code import pandas as pd import numpy as np df = pd.DataFrame({'key':["A",'B','C',"A",'B','C'], 'data':np.arange(1,7)}).set_index('key').sort_index() display(df) # the input # groupby().sum() shrinks the dataset display(df.groupby(level='key')['data'].sum() .to_frame() ) # just added this line bc df prints prettier than series # groupby().transform(sum) does NOT shrink the dataset df.groupby(level='key').transform(sum) ###Output _____no_output_____ ###Markdown One last trick: Let's add that new variable to the original dataset! ###Code # option 1: create the var df['groupsum'] = df.groupby(level='key').transform(sum) # option 2: create the var with assign (can be used inside chains) df = df.assign(groupsum = df.groupby(level='key')['data'].transform(sum)) display(df) ###Output _____no_output_____ ###Markdown Using non-pandas functions inside chains One problem with writing chains on dataframes is that you can only use methods that work on the object (a dataframe) that is getting chained. So for example, you've formatted dataframe to plot. You can't directly add a seaborn function to the chain: _Seaborn functions are methods of the package seaborn, not the dataframe._ (It's `sns.lmplot`, not `df.lmplot`.) `.pipe()` allows you to hand a dataframe to functions that don't work directly on dataframes. 
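For instance, here is a minimal sketch of handing a chained dataframe to `sns.lmplot` with `.pipe()`; the data are made up, and the `(function, 'data')` tuple form used here is spelled out in the syntax box below:

```python
import pandas as pd
import seaborn as sns

toy = pd.DataFrame({'x': [1, 2, 3, 4, 5], 'y': [2, 3, 5, 4, 6]})  # invented numbers

(toy
   .assign(y2=lambda d: d['y'] * 2)              # ordinary dataframe methods first...
   .pipe((sns.lmplot, 'data'), x='x', y='y2')    # ...then hand the result to seaborn
)
```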
````{admonition} The syntax of .pipe()```pythondf.pipe(, <'if the first parameter of the outside function isnt the df, ' 'the name of the parameter that is expecting the dataframe'>, ```Note that the object after the pipe command is run might not be a dataframe anymore! It's whatever object the piped function produces!```` Example 1[From one of the pandas devs:](https://tomaugspurger.github.io/method-chaining)> ```python> jack_jill = pd.DataFrame()> (jack_jill.pipe(went_up, 'hill')> .pipe(fetch, 'water')> .pipe(fell_down, 'jack')> .pipe(broke, 'crown')> .pipe(tumble_after, 'jill')> )> ```> > This really is just right-to-left function execution. The first argument to pipe, a callable, is called with the DataFrame on the left as its first argument, and any additional arguments you specify.> > I hope the analogy to data analysis code is clear. Code is read more often than it is written. When you or your coworkers or research partners have to go back in two months to update your script, having the story of raw data to results be told as clearly as possible will save you time. Example 2[From Steven Morse:](https://stmorse.github.io/journal/tidyverse-style-pandas.html)> ```python> (sns.load_dataset('diamonds')> .query('cut in ["Ideal", "Good"] & \> clarity in ["IF", "SI2"] & \> carat < 3')> .pipe((sns.FacetGrid, 'data'),> row='cut', col='clarity', hue='color',> hue_order=list('DEFGHIJ'),> height=6,> legend_out=True)> .map(sns.scatterplot, 'carat', 'price', alpha=0.8)> .add_legend())> ``` Printing inside of chains```{tip}One thing about chains, is that sometimes it's hard to know what's going on within them without just commenting out all the code and running it bit-by-bit. This function, `csnap` (meaning "C"hain "SNAP"shot) will let you print messages from inside the chain, by exploiting the `.pipe()` function we just covered!```![](https://media.giphy.com/media/Buy7YdhkyHBCM/source.gif)Copy this into your code: ###Code def csnap(df, fn=lambda x: x.shape, msg=None): """ Custom Help function to print things in method chaining. Will also print a message, which helps if you're printing a bunch of these, so that you know which csnap print happens at which point. Returns back the df to further use in chaining. Usage examples - within a chain of methods: df.pipe(csnap) df.pipe(csnap, lambda x: <do stuff>) df.pipe(csnap, msg="Shape here") df.pipe(csnap, lambda x: x.sample(10), msg="10 random obs") """ if msg: print(msg) display(fn(df)) return df ###Output _____no_output_____ ###Markdown An example of this in use: ###Code (df .pipe(csnap, msg="Shape before describe") .describe()['data'] # get the distribution stats of a variable (I'm just doing something to show csnap off) .pipe(csnap, msg="Shape after describe and pick one var") # see, it prints a message from within the chain! .to_frame() .assign(ones = 1) .pipe(csnap, lambda x: x.sample(2), msg="Random sample of df at point #3") # see, it prints a message from within the chain! .assign(twos=2,threes=3) ) ###Output Shape before describe
age_speed_h2a/H2A_age_confspeed.ipynb
###Markdown A test of the joint torque redistribution hypothesis**Subjects at the comfortable speed**> [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab/) > Federal University of ABC, Brazil Contents: 1 Introduction; 2 Python setup (2.1 Environment; 2.2 Custom functions); 3 Read data; 4 Data selection (4.1 Include only subjects in two age categories: <=40 years and >=55 years; 4.2 Include only subjects walking at the comfortable speed; 4.3 Possible outliers in data; 4.3.1 Don't remove possible outliers for now); 5 Descriptive statistics; 6 Inferential statistics (6.1 Correlation between variables; 6.2 Test for difference between groups' characteristics; 6.3 Test for difference between groups' spatio-temporal variables; 6.4 Test for difference between groups' joint torque and power variables); 7 Regression models (7.1 Data normalization; 7.1.1 Replace letters by numeric values because it's easier to identify the effect; 7.2 Age, Step Length, Cadence and Speed (Step Length × Cadence)); 8 References. IntroductionThis notebook reports the statistical results of testing two hypotheses: 1. Aging is associated with a redistribution of joint torques and powers during gait (proposed by DeVita and Hortobagyi in 2000). 2. Such an age-related redistribution of joint torques and powers during gait is due to differences in spatio-temporal variables, such as step length, cadence and speed (proposed by Lim, Lin and Pandy in 2013).The experimental data are taken from an open dataset (Fukuchi et al., 2018). This dataset contains raw and processed data from standard 3D gait analysis of healthy volunteers walking both overground and on a treadmill at a range of speeds. The discrete variables were calculated as reported by DeVita and Hortobagyi (2000). 
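For orientation, the H2A_* response variables analyzed below are hip-to-ankle ratios of peak joint moment (H2A_M), angular impulse (H2A_I) and joint work (H2A_W); this reading is inferred from the column renaming in the Read data section (e.g. `Hip2AnkleRatio` → `H2A_M`), not stated explicitly here, so roughly:

$$\mathrm{H2A_M}=\frac{M_{\mathrm{hip}}^{\mathrm{peak}}}{M_{\mathrm{ankle}}^{\mathrm{peak}}},\qquad \mathrm{H2A_I}=\frac{I_{\mathrm{hip}}}{I_{\mathrm{ankle}}},\qquad \mathrm{H2A_W}=\frac{W_{\mathrm{hip}}}{W_{\mathrm{ankle}}}$$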
Python setup ###Code from pathlib import Path import numpy as np import pandas as pd %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import plotly from plotly.subplots import make_subplots import plotly.express as px import statsmodels.formula.api as smf import statsmodels.api as sm import statsmodels.stats.api as sms import pingouin as pg from tqdm.notebook import tqdm %load_ext watermark ###Output _____no_output_____ ###Markdown Environment ###Code sns.set_style('whitegrid') sns.set_context('notebook', font_scale=1.1) #palette = sns.color_palette(palette='Set1') # tab10 #palette[0], palette[1] = palette[1], palette[0] #sns.set_palette(palette=palette) pd.set_option('precision', 3) # number of decimal places for the environment path2 = Path(r'./') # number of bootstraps to be performed n_boots = 100 # significance level alpha = 0.05 # colors #colors = sns.color_palette() colors = plotly.colors.DEFAULT_PLOTLY_COLORS colors2 = [tuple(np.fromstring(c[4:-1], sep=',')/255) for c in colors] %watermark %watermark --iversions ###Output Last updated: 2021-08-18T01:34:54.577604-03:00 Python implementation: CPython Python version : 3.8.10 IPython version : 7.26.0 Compiler : GCC 9.3.0 OS : Linux Release : 5.11.0-31-generic Machine : x86_64 Processor : x86_64 CPU cores : 12 Architecture: 64bit statsmodels: 0.12.2 pandas : 1.3.2 json : 2.0.9 matplotlib : 3.4.3 seaborn : 0.11.2 plotly : 5.2.1 autopep8 : 1.5.6 numpy : 1.21.2 pingouin : 0.4.0 ###Markdown Custom functions ###Code def ttest(df, feature, group, levels=None, alpha=alpha): """t-test statistcs for dataframe columns using the pingouin library. """ stats = pd.DataFrame() if levels is None: levels = df[group].unique() if len(levels) != 2: raise Exception('Incorrect number of levels: {}'.format(len(levels))) for f in feature: x = df[df[group] == levels[0]][f] y = df[df[group] == levels[1]][f] stat = pg.ttest(x, y, confidence=1-alpha) stat.index = [f] diff = np.round(100 * (np.mean(x) - np.mean(y)) / np.mean(y), 0) diff = pd.DataFrame(data=diff, index=[f], columns=['%diff'], dtype=int) stat = pd.concat([stat, diff], axis=1) stats = pd.concat([stats, stat], axis=0) stats.drop(columns=['alternative', 'BF10', 'power'], inplace=True) stats.index.name = '{}-{}'.format(*levels) display(stats.style.format({'p-val': '{:.3f}'}).apply(sig_red, subset='p-val', axis=1)) return stats def normtest(df, feature, group, alpha=alpha): """Normality test for dataframe columns using the pingouin library. """ levels = df[group].unique() for level in levels: test = pg.normality(df[df[group] == level][feature], method='normaltest') test.index.name = level display(test.style.format({'pval': '{:.3f}'}).apply(sig_red, axis=1)) def normality(df): """Get the p-val of the normality test using the pingouin library. """ return pg.normality(df)['pval'] def describe(df, feature, group, stat=['count', 'mean', 'std', 'min', 'max', normality]): """Descriptive statistics for dataframe columns. """ col = [('Young', 'normality'), ('Older', 'normality')] x = df.groupby(group)[feature].agg(stat).stack().transpose().style.apply(sig_red, subset=col) display(x) return x def sig_red(col, alpha=alpha): """Returns string 'color: red' for `col` < `alpha`, black otherwise. """ col = np.array([(float(x[1:]) if isinstance(x, str) else float(x)) if len(str(x)) else np.nan for x in col]) is_sig = col < alpha return ['color: red' if x else 'color: black' for x in is_sig] def regression(fit_ml, fit_re, names): """Get results from linear regression as list. 
results = ['Response', 'Coef', 'CI', 'p', 'Coef', 'CI', 'p', Coef', 'CI', 'p', 'llf', 'AIC', 'R2'] """ # print(fit_re.model.exog_names) if names is None: names = fit_re.model.exog_names[1:] results = [np.nan]*(1 + len(names)*3 + 3) # response results[ 0] = fit_re.model.endog_names # log-likelihood function results[-3] = '{:.1f}'.format(fit_ml.llf) # Akaike information criterion results[-2] = '{:.1f}'.format(fit_ml.aic) # marginal R2, proportion of variance explained by the fixed factor(s) alone results[-1] = '{:.2f}'.format(np.corrcoef(fit_re.model.endog, fit_re.predict())[0, 1]**2) # conditional R2, proportion of variance explained by both the fixed and random factors #results[-1] = np.round(np.corrcoef(fit_re.model.endog, fit_re.fittedvalues)[0, 1]**2, 2) for name in fit_re.model.exog_names[1:]: idx = names.index(name) # fitted fixed-effects coefficients results[3*idx+1] = '{:.2f}'.format(fit_re.params[name]) # confidence interval for the fitted parameters ci = fit_re.conf_int().loc[name].values results[3*idx+2] = '[{:.2f}, {:.2f}]'.format(ci[0], ci[1]) # two-tailed p values for the t-stats of the params if fit_re.pvalues[name] < 0.0001: results[3*idx+3] = '< 0.0001' else: results[3*idx+3] = '{:.4f}'.format(fit_re.pvalues[name]) return results def runmodels(data, predictors, responses, groups, names, mixed=True, show=True): """Run OLS or mixed linear regression models. """ fit_ml = [] fit_re = [] models = [] i = 0 print('Running regression models...') for response in responses: for predictor in predictors: eq = '{} ~ {}'.format(response, predictor) print(response, predictor) if mixed: md = smf.mixedlm(formula=eq, data=data, groups=groups) else: md = smf.ols(formula=eq, data=data) # use ML method to estimate AIC and llf fit_ml.append(md.fit(reml=False)) # use REML method to get unbiased estimations of the coefficients fit_re.append(md.fit(reml=True)) models.append(regression(fit_ml[-1], fit_re[-1], names=names)) if mixed: text = 'converged' if fit_re[-1].converged else 'didn\'t converge' if show: if mixed: print('Model {:2}: {} {}.'.format(i, eq, text)) else: print('Model {:2}: {}.'.format(i, eq)) #display(fit_re[-1].summary()) i += 1 if show: print('...done.') return models, fit_ml, fit_re def display_table(models, names, del_name_idx=None, filename=None): """Display rich table with stats from regression models. """ h0, h1, h2 = ['Feature'], ['Feature'], ['Feature'] h0.extend(['Predictor']*3*len(names)) for name in names: h1.extend(name*3) h2.extend(['Coef', 'CI', 'p-value']*len(names)) h0.extend(['LLF', 'AIC', 'R2']) h1.extend(['LLF', 'AIC', 'R2']) h2.extend(['LLF', 'AIC', 'R2']) table = pd.DataFrame(data=models) table.columns=[h0, h1, h2] table.replace({np.nan: ''}, inplace=True) if del_name_idx is not None: for col in del_name_idx: table = table.drop(columns=names[col], level=1) if filename is not None: table.to_csv(path2 / filename, sep='\t', index=False) table = table.style \ .apply(sig_red, subset=[c for c in table.columns if c[-1] == 'p-value']) \ .set_table_styles([dict(selector='th', props=[('text-align', 'center')])]) return table def plot_residuals(fit=None, residuals=None, kind='marginal', x=None, xlabel=None, ylabel=None, alpha=0.05, hover=None): """Scater plot, histogram and Q-Q plot for testing residuals. This function generates three subplots (1x3): 1. Scatter plot of the residuals versus predictor variables. 2. Histogram of the residuals and the expected normal function. 3. Q-Q plot. 
On the third plot are also shown the statistic and p-value of the Shapiro-Wilk test for normality. These values are also returned as output of the function. Parameters ---------- fit: statsmodels regression results or None, optional (default=None) `fit` is a mod.fit() structure See https://www.statsmodels.org/stable/regression.html The residuals and predictor values are taken from this parameter. residuals: 1-D array_like or None, optional (default=None) The residuals to test the normality. Enter this parameter only if `fit` is not inputed. kind: {'marginal', 'conditional'}, optional (default='marginal') Which kind of residuals to test (only if `fit` is inputed). 'marginal': residuals from fixed effects 'conditional': residuals from fixed and random effects x: 1-D array_like or None, optional (default=None) The predictor values. xlabel: string or None, optional (default=None) The predictor label. ylabel: string or None, optional (default=None) The response label. alpha: float, optional (default=0.05) The significance level hover: tuple (string, 1-D array_like) or None, optional (default=None) Information to show when hovering the data in the first plot. See the examples. Returns ------- statistic: float The statistic of the Shapiro-Wilk test for normality. p-value: float The p-value of the null-hypothesis test for normality. Notes ----- https://www.statsmodels.org/stable/_modules/statsmodels/regression/mixed_linear_model.html fit.model.fit().predict() or fit.predict() only reflect fixed effects mean structure of the model. fit.model.fit().fittedvalues or fit.fittedvalues reflect the mean structure specified by fixed effects and predicted random effects. Examples -------- >>> residuals = np.random.normal(loc=0.0, scale=1.0, size=1000) >>> plot_residuals(residuals=residuals) >>> residuals = np.random.lognormal(mean=1.0, sigma=0.5, size=100) >>> plot_residuals(residuals=residuals, xlabel= 'X', ylabel='Y') >>> residuals = np.random.lognormal(mean=1.0, sigma=0.5, size=10) >>> plot_residuals(residuals=residuals, hover=('Datum', np.arange(10))) """ import numpy as np import scipy as sp import plotly from plotly.subplots import make_subplots import plotly.graph_objects as go import plotly.figure_factory as ff if fit is not None: # fit.model.fit().predict() or fit.predict() only reflect # fixed effects mean structure of the model. # fit.model.fit().fittedvalues or fit.fittedvalues reflect the mean # structure specified by fixed effects and predicted random effects. 
if kind == 'marginal': residuals = fit.model.endog - fit.model.fit().predict() elif kind == 'conditional': residuals = fit.model.endog - fit.model.fit().fittedvalues else: raise ValueError("Valid options for 'kind': 'marginal' or 'conditional'.") x = fit.model.exog[:, 1] else: if residuals is None: raise ValueError('If fit is None, residuals cannot be None.') if x is None: x = np.arange(0, len(residuals)) if xlabel is None: xlabel = 'Predictor' if ylabel is None: ylabel = 'Response' # normality of residuals test W, p = sp.stats.shapiro(residuals) # Shapiro-Wilk test if p < 0.001: p_str = 'p < 0.001' else: p_str = 'p = {:.3f}'.format(p) # plots fig = make_subplots(rows=1, cols=3, horizontal_spacing=0.1, subplot_titles=('Scatter plot', 'Histogram', 'Q-Q plot')) # scatter plot if hover is not None: label, data = hover[0], hover[1] template = ['<b>{}: '.format(label) + '%{customdata}</b> <br>' + '{}: '.format(xlabel) + '%{x} <br>Residual: %{y} '] fig.add_trace(go.Scatter(x=x, y=residuals, mode='markers', marker=dict(color=colors[0]), customdata=data, hovertemplate=template[0], name=''), row=1, col=1) else: fig.add_trace(go.Scatter(x=x, y=residuals, mode='markers', name='', marker=dict(color=colors[0])), row=1, col=1) if min(residuals)<=0 and max(residuals)>=0: fig.add_hline(y=0, line=dict(width=2, color='rgba(0,0,0,.5)'), row=1, col=1) # histogram fig.add_trace(go.Histogram(x=residuals, marker_color=colors[0], name='', histnorm='probability density'), row=1, col=2) norm = ff.create_distplot([residuals], group_labels=[''], curve_type='normal', show_rug=False).data[1] fig.add_trace(go.Scatter(x=norm['x'], y=norm['y'], mode = 'lines', name='', line=dict(width=3, color=colors[1])), row=1, col=2) # Q-Q plot qq = sp.stats.probplot(residuals, dist='norm') qqx = np.array([qq[0][0][0], qq[0][0][-1]]) fig.add_trace(go.Scatter(x=qq[0][0], y=qq[0][1], mode='markers', name='', marker=dict(color=colors[0])), row=1, col=3) fig.add_trace(go.Scatter(x=qqx, y=qq[1][1] + qq[1][0]*qqx, mode='lines', name='', line=dict(width=3, color=colors[1])), row=1, col=3) fig.add_annotation(text='W = {:.2f}<br>{}'.format(W, p_str), xref='x domain', yref='y domain', align='left', valign='top', x=0.02, y=0.98, showarrow=False, row=1, col=3) # x and y axes properties fig.update_xaxes(title_text=xlabel, row=1, col=1) fig.update_xaxes(title_text='Residuals', row=1, col=2) fig.update_xaxes(title_text='Normal theoretical quantiles', row=1, col=3) fig.update_yaxes(title_text='Residuals', row=1, col=1) fig.update_yaxes(title_text='Probability density', row=1, col=2) fig.update_yaxes(title_text='Observed data quantiles', row=1, col=3) text='Normality tests for residuals of {} &times; {}'.format(ylabel, xlabel) fig.update_layout(showlegend=False, height=400, font_color='black', title=dict(text=text, x=.5, xanchor='center', yanchor='top', font=dict(size=20))) fig.show() print(['We {}reject the null hypothesis that the residuals come from a population' + ' with normal distribution\n(Shapiro-Wilk test: W({}) = {:.2f}, {}).' ][0].format('failed to ' if p>alpha else '', len(x), W, p_str)) return W, p def bootstrap(df, df2, response, predictor, groups, n_boots=1000): """Bootstrap observations for parameter estimation of linear mixed effects model. 
""" y_boot = np.zeros((df2.shape[0], n_boots)) eq = '{} ~ {}'.format(response, predictor) for i in tqdm(range(n_boots)): y_boot[:, i] = smf.mixedlm(formula=eq, groups=groups, data=df.sample(n=df.shape[0], replace=True) ).fit().predict(df2) return y_boot def boxplots(df, var=['Speed', 'Cadence', 'StepLength', 'H2A_M', 'H2A_I', 'H2A_W']): """Boxplots of variables var in df """ fig, axs = plt.subplots(1, len(var), figsize=(12, 3), gridspec_kw={'hspace':.1, 'wspace':.5}) for ax, v in zip(axs, var): sns.boxplot(x='AgeGroup', y=v, data=df, fliersize=9, ax=ax) sns.swarmplot(x='AgeGroup', y=v, data=df, ax=ax, color='gray') ax.set_title(v) ax.set_ylabel('') plt.show() ###Output _____no_output_____ ###Markdown Read data ###Code # file with discrete variables calculated by 'Walking speed torque .ipynb' filename = path2 / 'wbdsRedist_clean3.csv' df = pd.read_csv(filename) df.drop(columns=['SpeedRaw', 'Subject.1', 'StepTime'], inplace=True) # unused columns # Height cm to meters and stride to step length df['Height'] = df['Height']/100 df['StepLength'] = df['StepLength']/2 # Append BMI and Froude number for gait speed df = df.assign(BMI = df['Mass']/df['Height']**2) df = df.assign(SpeedFroude = df['Speed'].values/np.sqrt((9.81*df['LegLength'].values))) # rename columns df.rename(columns={'SpeedCategory':'SpeedCat', 'PeakHipMom':'Hip_M', 'PeakKneeMom': 'Knee_M', 'PeakAnkleMom': 'Ankle_M', 'Hip2AnkleRatio': 'H2A_M', 'hipEXTimp': 'Hip_Iext', 'hipFLXimp': 'Hip_Iflx', 'kneeEXTimp': 'Knee_Iext', 'ankleEXTimp': 'Ankle_Iext', 'hip2ankleRatioImp': 'H2A_I', 'hipPOSwork': 'Hip_Wpos', 'hipNEGwork': 'Hip_Wneg', 'kneePOSwork': 'Knee_Wpos', 'kneeNEGwork': 'Knee_Wneg', 'anklePOSwork': 'Ankle_Wpos', 'ankleNEGwork': 'Ankle_Wneg', 'hip2ankleRatioWork': 'H2A_W'}, inplace=True) # comfortable speed as a new variable (column) df = df.assign(SpeedComf = df.Speed) for s in df['Subject'].unique(): df.at[df['Subject']==s, 'SpeedComf'] = df[(df['Subject']==s) & (df['SpeedCat']=='V5')]['Speed'].values[0] # reorder columns and drop data for knee df = df[['Subject', 'AgeGroup', 'Gender', 'Age', 'Height', 'Mass', 'BMI', 'LegLength', 'SpeedCat', 'SpeedComf', 'Speed', 'StepLength', 'Cadence', 'H2A_M', 'H2A_I', 'H2A_W']] df ###Output _____no_output_____ ###Markdown Data selectionLet's replicate similar conditions of the DeVita and Hortobagyi (2000) study: - Two age categories (Young adults and Older adults). - All subjects walking at a self-selected comfortable speed on a treadmill (for a more reliable control of speed). ###Code df0 = df.copy(deep=True) ###Output _____no_output_____ ###Markdown Include only subjects in two age categories: 55 years ###Code df = df.drop(index=df[(df.Age > 40) & (df.Age < 55)].index, inplace=False) ###Output _____no_output_____ ###Markdown Include only subjects walking at the comfortable speed ###Code df = df[df['SpeedCat']=='V5'].drop(columns=['SpeedCat']) df ###Output _____no_output_____ ###Markdown Possible outliers in data ###Code var = ['Age', 'Height', 'Mass', 'BMI', 'LegLength', 'Speed', 'StepLength', 'Cadence', 'H2A_M', 'H2A_I', 'H2A_W'] describe(df, feature=var, group='AgeGroup'); boxplots(df) ###Output _____no_output_____ ###Markdown Don't remove possible outliers for nowFeatures `Cadence` for both groups and `H2A_W` for the Older group don't present normal distribution. Inpecting the boxplots, a few extreme values might be the cause for non-normality. 
Since our main data analysis is based on linear regression, we will not remove these outliers for now; we will remove outliers only if they cause violations of the linear regression assumptions. ###Code # Outlier for variables H2A subject = df[df['H2A_W'] > 3]['Subject'] print(subject) # uncomment the following line to remove these data: df = df.drop(index=subject.index, inplace=False) # Outlier for variable Cadence subject = df[df['Cadence'] > 140]['Subject'] print(subject) # uncomment the following line to remove these data: #df = df.drop(index=subject.index, inplace=False) describe(df, feature=var, group='AgeGroup'); boxplots(df) ###Output _____no_output_____ ###Markdown Descriptive statisticsLet's visualize the data with pair plots and histograms for the main variables. ###Code display(df.drop_duplicates(subset='Subject', inplace=False)[ ['Subject', 'AgeGroup', 'Gender']].groupby(['AgeGroup', 'Gender']).count().T) g = sns.pairplot(df, vars=['Speed', 'Cadence', 'StepLength', 'H2A_M', 'H2A_I', 'H2A_W'], diag_kind='auto', hue='AgeGroup', plot_kws={'s':60}, height=1.2, aspect=1.1) handles = g._legend_data.values() labels = g._legend_data.keys() g._legend.remove() g.fig.legend(handles=handles, labels=labels, loc='upper right', ncol=2, bbox_to_anchor=(1, 1.02), bbox_transform=plt.gcf().transFigure) g.fig.subplots_adjust(left=.1, right=.99, bottom = 0.1, top=.95, hspace=.1, wspace=.1) g.fig.align_ylabels(g.axes[:, 0]) plt.show() ###Output _____no_output_____ ###Markdown Inferential statistics Correlation between variables ###Code var = ['Speed', 'Cadence', 'StepLength', 'H2A_M', 'H2A_I', 'H2A_W'] corr = df[df['AgeGroup']=='Young'][var].rcorr(stars=True) display(corr.style.set_caption('Correlation matrix for group Young')) corr = df[df['AgeGroup']=='Older'][var].rcorr(stars=True) display(corr.style.set_caption('Correlation matrix for group Older')) # A more detailed correlation table: d = df[df['AgeGroup']=='Young'][var].pairwise_corr(method='pearson') display(d.style.format({'p-unc': '{:.3f}'}).apply(sig_red, subset=['p-unc']) .set_caption('Pairwise correlation for group Young')) d = df[df['AgeGroup']=='Older'][var].pairwise_corr(method='pearson') display(d.style.format({'p-unc': '{:.3f}'}).apply(sig_red, subset=['p-unc']) .set_caption('Pairwise correlation for group Older')) ###Output _____no_output_____ ###Markdown Test for difference between groups' characteristics ###Code var = ['Age', 'Height', 'Mass', 'BMI', 'LegLength'] stats = ttest(df, var, 'AgeGroup', levels=['Older', 'Young']) ###Output _____no_output_____ ###Markdown Test for difference between groups' spatio-temporal variables ###Code var = ['Speed', 'Cadence', 'StepLength'] stats = ttest(df, var, 'AgeGroup', levels=['Older', 'Young']) ###Output _____no_output_____ ###Markdown Test for difference between groups' joint torque and power variables ###Code var = ['H2A_M', 'H2A_I', 'H2A_W'] stats = ttest(df, var, 'AgeGroup', levels=['Older', 'Young']) ###Output _____no_output_____ ###Markdown Regression modelsThe predictors are Age (as categorical), cadence, step length. The response variables are H2A_M, H2A_I and H2A_W. Data normalizationNormalize data for satisfying linear regression assumptions, but this step has no effect on the final results. 
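This has no effect because standardizing is only a linear rescaling: for the OLS fits used below, the t-statistics, p-values and $R^2$ are unchanged, and each coefficient is simply rescaled as $\beta_{\mathrm{std}} = \beta_{\mathrm{raw}}\,\sigma_x/\sigma_y$.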
###Code dfraw = df.copy(deep=True) var = ['Height', 'Mass', 'BMI', 'LegLength', 'Speed', 'Cadence', 'StepLength', 'H2A_M', 'H2A_I', 'H2A_W'] # Standardization (mean 0, variance 1) df[var] = df[var].apply(lambda x: (x-x.mean())/x.std(), axis=0) ###Output _____no_output_____ ###Markdown Replace letters by numeric values because it's easier to identify the effect E.g.: {'Y': 0, 'O': 1} implies that if there is an effect of Age and its coefficient (slope) is positive, it means that the response increases for older subjects and decreases for young subjects. Internally the letters were replaced by numbers anyway but we didn't know the order. ###Code df.loc[:, 'AgeGroup'].replace({'Young': 0, 'Older': 1}, inplace=True) df.loc[:, 'Gender'].replace({'F': 0, 'M': 1}, inplace=True) ###Output _____no_output_____ ###Markdown Age, Step Length, Cadence and Speed (Step Length × Cadence)Of note, the variable Speed is equal to the product between the variables Step Length and Cadence. ###Code features = ['H2A_M', 'H2A_I', 'H2A_W'] labels = ['H2A_M', 'H2A_I', 'H2A_W'] predictors = ['C(AgeGroup)', 'StepLength', 'Cadence', 'Speed', 'C(AgeGroup) + StepLength', 'C(AgeGroup) + Cadence', 'C(AgeGroup) + Speed', 'C(AgeGroup) + StepLength + Cadence', 'StepLength + Cadence', 'StepLength + Cadence + Speed', 'C(AgeGroup) + StepLength + Cadence + Speed' ] groups = df['Subject'] names = ['C(AgeGroup)[T.1]', 'StepLength', 'Cadence', 'Speed'] models, fit_ml, fit_re = runmodels(df, predictors, features, groups=None, names=names, mixed=False) names = [['Age'], ['StepLength'], ['Cadence'], ['Speed']] display_table(models, names) ###Output _____no_output_____
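###Markdown A minimal, self-contained sketch of the two ideas used above — a formula-based fit and a row-resampling bootstrap of the predictions. It runs on synthetic data: the column names `Speed`, `AgeGroup` and `H2A_M` mirror the ones in this notebook, but the values are made up, and plain OLS stands in for the `smf.mixedlm` call used in `y_boot`. ###Code
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
demo = pd.DataFrame({
    'Speed': rng.uniform(0.6, 1.6, n),          # synthetic gait speed
    'AgeGroup': rng.integers(0, 2, n),          # 0 = Young, 1 = Older (illustrative coding)
})
demo['H2A_M'] = 0.5 + 0.8*demo['Speed'] + 0.3*demo['AgeGroup'] + rng.normal(0, 0.2, n)

# Formula-based fit, analogous to the mixed-model calls above (OLS here for simplicity)
fit = smf.ols('H2A_M ~ C(AgeGroup) + Speed', data=demo).fit()
print(fit.params)

# Bootstrap the predictions by resampling rows with replacement, as y_boot does
grid = pd.DataFrame({'Speed': np.linspace(0.6, 1.6, 20), 'AgeGroup': 0})
boots = np.column_stack([
    smf.ols('H2A_M ~ C(AgeGroup) + Speed',
            data=demo.sample(n=len(demo), replace=True)).fit().predict(grid)
    for _ in range(200)
])
lo, hi = np.percentile(boots, [2.5, 97.5], axis=1)   # pointwise 95% band
print(lo[:3], hi[:3])
###Output _____no_output_____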
PythonScripts/Paper1Figures/fig_bathy.ipynb
###Markdown Figure of bathymetry and CS definitions ###Code from brokenaxes import brokenaxes import cmocean as cmo import matplotlib.pyplot as plt import matplotlib.gridspec as gspec import matplotlib as mpl %matplotlib inline from netCDF4 import Dataset import numpy as np import seaborn as sns import xarray as xr import canyon_tools.readout_tools as rout import canyon_tools.savitzky_golay as sg import warnings warnings.filterwarnings('ignore') def plotCSPos(ax,CS1,CS2,CS3,CS4,CS5,LID): ax.axvline(x=CS1, ymin=1., ymax=0.75, color='0.95',linestyle='-') ax.axvline(x=CS2, ymin=1., ymax=0.75,color='0.95',linestyle='-') ax.axvline(x=CS3, ymin=1., ymax=0.75,color='0.95',linestyle='-') ax.axvline(x=CS4, ymin=1., ymax=0.75,color='0.95',linestyle='-') ax.axvline(x=CS5, ymin=1., ymax=0.75,color='0.95',linestyle='-') ax.axhline(y=LID, xmin=0., xmax=1,color='0.95',linestyle='-') def plotPoolArea(ax,xx,yy): #ax.plot(xx[1,:],yy[1,:],'--k') ax.plot(xx[:,1],yy[:,1],':r') ax.plot(xx[-1,:],yy[-1,:],':r') ax.plot(xx[:,-1],yy[:,-1],':r') def plotCSLines(ax,xx,yy,CS1x,CS2x,CS3x,CS4x): ax.plot(xx[227,slice(0,CS1x)],yy[227,slice(0,CS1x)],'-',color='0.5',linewidth=3) ax.plot(xx[227,slice(CS1x,CS2x)],yy[227,slice(CS1x,CS2x)],'-k',linewidth=3) ax.plot(xx[227,slice(CS2x,CS3x)],yy[227,slice(CS2x,CS3x)],'-',color='0.5',linewidth=3) ax.plot(xx[227,slice(CS3x,CS4x)],yy[227,slice(CS3x,CS4x)],'-k',linewidth=3) ax.plot(xx[227,slice(CS4x,360)],yy[227,slice(CS4x,360)],'-',color='0.5',linewidth=3) # Cross-shelf def Plot1_crossshelf(gs_ax,depths,zslice,yslice,xind_shelf=100,xind_axis=180,color='black'): ax = plt.subplot(gs_ax) ax.plot(grid.Y[yslice]/1000,depths[yslice,xind_shelf], '--', color=color, linewidth=2, ) ax.plot(grid.Y[yslice]/1000,depths[yslice,xind_axis], '-', color=color, linewidth=2, ) ax.contourf(grid.Y[yslice]/1000,grid.RC[zslice],grid.HFacC[zslice,yslice,xind_axis],[0,0.5,1],colors=['0.7','1','1']) #ax.axvline(x=grid.Y[227]/1000, linestyle=':',color='k') #ax.axhline(y=grid.Z[29], linestyle=':',color='k') return ax # Alongshelf def Plot2_alongshelf(gs_ax,depths,zslice,xslice,yind=227,color='black'): ax = plt.subplot(gs_ax) plotCSPos(ax,grid.XC[1,60]/1000,grid.XC[1,120]/1000,grid.XC[1,240]/1000,grid.XC[1,300]/1000, grid.XC[1,360]/1000,grid.Z[29]) ax.plot(grid.X[xslice]/1000,depths[yind,xslice], '-', color=color, linewidth=2, ) ax.plot(grid.X[xslice]/1000,depths[yind,xslice], '--', color=color, linewidth=2, ) ax.contourf(grid.X[xslice]/1000,grid.RC[zslice],grid.HFacC[zslice,yind,xslice],[0,0.5,1],colors=['0.7','1','1']) #ax.axhline(y=grid.Z[29], linestyle=':',color='k') return ax # Top view def Plot3_topview(gs_ax,depths,xslice,yslice,color='black', clabels=True): ax = plt.subplot(gs_ax) #plotPoolArea(ax,grid.XC[slice(227,315),slice(120,463)]/1000,grid.YC[slice(227,315),slice(120,463)]/1000) ax.contour(grid.X[xslice]/1000,grid.Y[yslice]/1000,depths[yslice,xslice],[147.5], colors=['k']) CS=ax.contour(grid.X[xslice]/1000,grid.Y[yslice]/1000,depths[yslice,xslice],[20,100,200,400,600,800,1000,1200], colors=['0.8','0.8','0.8','0.8','0.8','0.8','0.8','0.8']) if clabels == True: plt.clabel(CS, fontsize=9,inline=1,inline_spacing=1, fmt = '%1.0f', ticks=[400,600,800,1000]) return ax # Top view def Plot4_zoom(gs_ax,depths,xslice,yslice,color='black'): ax = plt.subplot(gs_ax) CS = ax.contour(grid.X[xslice]/1000,grid.Y[yslice]/1000,depths[yslice,xslice], [100,110,120,130,140,147.5], colors=['0.8','0.8','0.8','0.8','0.8','k']) manual_locations = [(72.5,60),(72.5,56),(72.5,52)] 
plt.clabel(CS,[100,120,140],fontsize=9,inline=True,inline_spacing=1, manual=manual_locations, fmt = '%1.0f' ) return ax # Plot Kv profiles def Plot_kv(gs_ax, dep, colors, labels): ax = plt.subplot(gs_ax) ax.axhline(dep[22],linestyle=':',color='r',linewidth=2) profiles = get_profiles() sns.set_palette("Purples_r") for ii, col, lab in zip(range(len(profiles)),colors, labels): ax.plot(profiles[ii,:48], dep[:48], label=lab) return ax def get_profiles(): kv_dir = '/ocean/kramosmu/Building_canyon/BuildCanyon/Stratification/616x360x90/' ini_kv_files = [kv_dir + 'KrDiff_e05_exact_nosmooth_90zlev_616x360_Quad.bin', kv_dir + 'KrDiff_e10_kv1E2_90zlev_616x360_Quad.bin', kv_dir + 'KrDiff_e25_kv1E2_90zlev_616x360_Quad.bin', kv_dir + 'KrDiff_e50_kv1E2_90zlev_616x360_Quad.bin', kv_dir + 'KrDiff_e100_kv1E2_90zlev_616x360_Quad.bin', ] dt = np.dtype('>f8') # float 64 big endian DnC = [261, 180] # y, x indices of DnS station ini_kv_profiles = np.zeros((len(ini_kv_files),nz)) for file, ii in zip(ini_kv_files, range(len(ini_kv_files))): data = np.fromfile(file, dt) ini_kv = np.reshape(data,(nz,ny,nx),order='C') ini_kv_profiles[ii,:] = ini_kv[:, DnC[0], DnC[1]] return ini_kv_profiles # Plot T, S, Tr profiles def Plot_tracers(gs_ax, dep, colors, labels): gs001 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs_ax,width_ratios=[1,1], wspace=0.3) ax = plt.subplot(gs001[0]) axx = ax.twiny() axxx = plt.subplot(gs001[1]) ax.axhline(dep[29],linestyle=':',color='0.5',linewidth=2) axxx.axhline(dep[29],linestyle=':',color='0.5',linewidth=2) state_file = '/data/kramosmu/results/TracerExperiments/CNTDIFF/run38/stateGlob.nc' state = xr.open_dataset(state_file) tr_file = '/data/kramosmu/results/TracerExperiments/CNTDIFF/run38/ptracersGlob.nc' tr = xr.open_dataset(tr_file) T = state.Temp[0,:,50,180] S = state.S[0,:,50,180] Tr = tr.Tr1[0,:,50,180] ax.plot(T[:], dep[:], color='darkblue') axx.plot(S[:], dep[:], color='orangered') axxx.plot(Tr[:], dep[:], color='darkorange') axxx.set_yticks([40]) axxx.set_yticklabels(['']) #ax.spines['right'].set_visible(False) #axx.spines['right'].set_visible(False) ax.set_xticks([0,6,12]) axx.set_xticks([32,33,34]) axxx.set_xticks([0,25,50]) return ax, axx, axxx # Grid, state and tracers datasets of base case grid_file = '/data/kramosmu/results/TracerExperiments/CNTDIFF/run38/gridGlob.nc' grid = xr.open_dataset(grid_file) # General input nx = 616 ny = 360 nz = 90 nt = 19 # t dimension size xslice=slice(100,360) yslice=slice(120,310) tslice = slice(8,16) xind = 240 yind = 227 # y index for alongshore cross-section zind = 27 hFacmasked = np.ma.masked_values(grid.HFacC.data, 0) MaskC = np.ma.getmask(hFacmasked) print(grid.X[615]/1000) zslice = slice(0,90) yslice = slice(0,360) xslice = slice(0,616) xslice2 = slice(0,616) zslice2 = slice(0,57) xslice4 = slice(95,265) yslice4 = slice(225,270) sns.set_style('white') sns.set_context('paper') plt.rcParams['font.size'] = 10.0 f = plt.figure(figsize = (7.4,4.5)) gs = gspec.GridSpec(2, 1, height_ratios=[0.7,1], hspace=0.3) gs0 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[0],width_ratios=[0.3,0.7], wspace=0.15) gs1 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[1],width_ratios=[1.2,1], wspace=0.17) gs11 = gspec.GridSpecFromSubplotSpec(1,2, subplot_spec=gs1[1],width_ratios=[0.5,0.5], wspace=0.3) gs10 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs1[0],width_ratios=[1, 0.166], wspace=0.05) ax2 = Plot1_crossshelf(gs0[0],-grid.Depth,zslice,yslice,xind_shelf=100,xind_axis=180,color='black') ax2.set_ylim(-500,0) ax2.set_xlim(30,80) colors 
= ['k','0.2','0.4','0.6','0.8'] labels = ['$\epsilon=5$ m','$\epsilon=10$ m','$\epsilon=25$ m','$\epsilon=50$ m','$\epsilon=100$ m'] ax4 = Plot_kv(gs11[0],grid.Z,colors, labels) ax3 = Plot3_topview(gs10[0],grid.Depth,xslice,yslice, clabels=False) ax5 = Plot3_topview(gs10[1],grid.Depth,xslice,yslice, clabels=False) ax6,ax7,ax8 = Plot_tracers(gs11[1], grid.Z,colors, labels) ax1 = Plot4_zoom(gs0[1],grid.Depth,xslice4,yslice4) ax1.set_ylim(50,60) ax1.set_xlim(45,75) ax1.set_xticks([45,50,55,60,65,70,75]) ax1.set_xticklabels(['','50','55','60','65','70','']) ax1.tick_params(axis='x', pad=1) ax2.tick_params(axis='x', pad=1) ax3.tick_params(axis='x', pad=1) ax5.tick_params(axis='x', pad=1) ax4.tick_params(axis='x', pad=1.5) ax6.tick_params(axis='x', pad=1) ax6.tick_params('x', colors='darkblue') ax7.tick_params('x', colors='orangered') ax7.tick_params(axis='x', pad=1) ax8.tick_params(axis='x', pad=1) ax1.tick_params(axis='y', pad=1) ax2.tick_params(axis='y', pad=1) ax3.tick_params(axis='y', pad=1) ax4.tick_params(axis='y', pad=1) ax6.tick_params(axis='y', pad=1) ax4.set_xlabel(r'$K_v$ / m$^2$s$^{-1}$',labelpad=1) ax6.set_xlabel(r'T / $^{\circ}$C',labelpad=1, color='darkblue') ax8.set_xlabel(r'Tr / $\mu$M',labelpad=1) ax7.set_xlabel(r'S / g kg$^{-1}$',color='orangered') ax2.set_xlabel('C-S distance (km)',labelpad=1) ax2.set_ylabel('Depth (m)',labelpad=1) ax4.set_ylabel('Depth (m)',labelpad=1) ax1.set_xlabel('Alongshelf distance (km)',labelpad=1,) ax3.set_xlabel('Alongshelf distance (km)',labelpad=1) ax1.set_ylabel('C-S (km)',labelpad=1) ax3.set_ylabel('C-S distance (km)',labelpad=1) xlim_ax3 = 120000 xlim_ax5 = 281000 min_xlim_ax5 = 261000 ax3.axhline(y=50,xmin=0, xmax=(grid.X[60]/xlim_ax3), linewidth=3, color='0.7') ax3.axhline(y=50,xmin=(grid.X[60]/xlim_ax3), xmax=(grid.X[120]/xlim_ax3), linewidth=3, color='0.3') ax3.axhline(y=50,xmin=(grid.X[120]/xlim_ax3), xmax=(grid.X[240]/xlim_ax3), linewidth=3, color='0.7') ax3.axhline(y=50,xmin=(grid.X[240]/xlim_ax3), xmax=(grid.X[300]/xlim_ax3), linewidth=3, color='0.3') ax3.axhline(y=50,xmin=(grid.X[300]/xlim_ax3), xmax=(grid.X[360]/xlim_ax3), linewidth=3, color='0.7') ax5.axhline(y=50,xmin=0, xmax=1, linewidth=3, color='0.3') ax1.text(0.93,0.05,'(b)',fontsize=9,transform=ax1.transAxes) ax2.text(0.88,0.05,'(a)',transform=ax2.transAxes,fontsize=9,) ax5.text(0.5,0.05,'(c)',transform=ax5.transAxes,fontsize=9,) ax4.text(0.8,0.9,'(d)',transform=ax4.transAxes,fontsize=9,) ax6.text(0.4,0.05,'(e)',transform=ax6.transAxes,fontsize=9,) ax8.text(0.1,0.05,'(f)',transform=ax8.transAxes,fontsize=9,) ax3.set_xlim(0,120) ax5.set_xlim(261,281) ax3.set_aspect(1) ax5.set_aspect(1) ax3.spines['right'].set_visible(False) ax5.spines['left'].set_visible(False) ax3.yaxis.tick_left() ax3.tick_params(labelright='off') ax5.set_yticks([40]) ax5.set_yticklabels(['']) ax5.set_xticks([280]) ax1.set_aspect('equal') ax4.legend(loc=0) plt.savefig('fig_bathy_v2.eps',format='eps',bbox_inches='tight') sns.set_style('white') sns.set_context('paper') plt.rcParams['font.size'] = 10.0 f = plt.figure(figsize = (7.4,4.5)) gs = gspec.GridSpec(2, 1, height_ratios=[0.7,1]) gs0 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[0],width_ratios=[0.3,0.7], wspace=0.15) gs1 = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[1],width_ratios=[0.8,0.20], wspace=0.2) ax2 = Plot1_crossshelf(gs0[0],-grid.Depth,zslice,yslice,xind_shelf=100,xind_axis=180,color='black') ax2.set_ylim(-500,0) ax2.set_xlim(30,80) colors = ['k','0.2','0.4','0.6','0.8'] labels = ['$\epsilon=5$ m','$\epsilon=10$ 
m','$\epsilon=25$ m','$\epsilon=50$ m','$\epsilon=100$ m'] ax4 = Plot_kv(gs1[1],grid.Z,colors, labels) ax3 = Plot3_topview(gs1[0],grid.Depth,xslice,yslice) ax1 = Plot4_zoom(gs0[1],grid.Depth,xslice4,yslice4) ax1.set_ylim(50,60) ax1.set_xlim(45,75) ax1.set_xticks([45,50,55,60,65,70,75]) ax1.set_xticklabels(['','50','55','60','65','70','']) ax1.tick_params(axis='x', pad=1) ax2.tick_params(axis='x', pad=1) ax3.tick_params(axis='x', pad=1) ax4.tick_params(axis='x', pad=1.5) ax1.tick_params(axis='y', pad=1) ax2.tick_params(axis='y', pad=1) ax3.tick_params(axis='y', pad=1) ax4.tick_params(axis='y', pad=1) ax4.set_xlabel(r'$K_v$ / m$^2$s$^{-1}$',labelpad=1) ax2.set_xlabel('C-S distance (km)',labelpad=1) ax2.set_ylabel('Depth (m)',labelpad=1) ax4.set_ylabel('Depth (m)',labelpad=1) ax1.set_xlabel('Alongshelf distance (km)',labelpad=1) ax3.set_xlabel('Alongshelf distance (km)',labelpad=1) ax1.set_ylabel('C-S (km)',labelpad=1) ax3.set_ylabel('C-S distance (km)',labelpad=1) ax3.axhline(y=50,xmin=0, xmax=(grid.X[60]/grid.X[615]), linewidth=3, color='0.7') ax3.axhline(y=50,xmin=(grid.X[60]/grid.X[615]), xmax=(grid.X[120]/grid.X[615]), linewidth=3, color='0.3') ax3.axhline(y=50,xmin=(grid.X[120]/grid.X[615]), xmax=(grid.X[240]/grid.X[615]), linewidth=3, color='0.7') ax3.axhline(y=50,xmin=(grid.X[240]/grid.X[615]), xmax=(grid.X[300]/grid.X[615]), linewidth=3, color='0.3') ax3.axhline(y=50,xmin=(grid.X[300]/grid.X[615]), xmax=(grid.X[360]/grid.X[615]), linewidth=3, color='0.7') ax3.axhline(y=50,xmin=(grid.X[360]/grid.X[615]), xmax=1, linewidth=3, color='0.3') ax1.text(0.93,0.05,'(b)',fontsize=9,transform=ax1.transAxes) ax2.text(0.88,0.05,'(a)',transform=ax2.transAxes,fontsize=9,) ax3.text(0.95,0.05,'(c)',transform=ax3.transAxes,fontsize=9,) ax4.text(0.8,0.9,'(d)',transform=ax4.transAxes,fontsize=9,) ax5.text(0.8,0.9,'(e)',transform=ax5.transAxes,fontsize=9,) ax3.set_xlim(0,280) ax3.set_aspect(1) ax1.set_aspect('equal') ax4.legend(loc=0) #plt.savefig('fig_bathy_rev1.eps',format='eps',bbox_inches='tight') sns.set_style('white') plt.rcParams['font.size'] = 8.0 f = plt.figure(figsize = (7.5,4.65)) gs = gspec.GridSpec(1, 1) ax1 = Plot1_crossshelf(gs[0],-grid.Depth,zslice,yslice,xind_shelf=100,xind_axis=180,color='black') ax1.set_xticks([]) ax1.set_ylim(-400,0) ax1.set_xlim(42,65) ax1.tick_params(axis='x') ax1.tick_params(axis='y') ax1.set_xlabel('Cross-shore distance ',labelpad=1) ax1.set_ylabel('Depth',labelpad=1) sns.set_style('white') plt.rcParams['font.size'] = 8.0 f = plt.figure(figsize = (7.5,4.65)) gs = gspec.GridSpec(1, 1) ax1 = Plot1_crossshelf(gs[0],-grid.Depth,zslice,yslice,xind_shelf=100,xind_axis=180,color='black') ax1.set_yticks([0,-50,-100,-150,-200,-250,-300,-350,-400]) ax1.set_ylim(-400,0) ax1.set_xlim(42,65) ax1.axhline(grid.RC[29]) ax1.axhline(grid.RC[19]) print('Shelf-break depth is %1.1f m, and head depth is %1.1f m' %(grid.RC[29],grid.RC[19])) ax1.set_xlabel('Cross-shore distance ',labelpad=1) ax1.set_ylabel('Depth',labelpad=1) sns.set_style('white') plt.rcParams['font.size'] = 8.0 fig,ax = plt.subplots(1,1,figsize=(6,5)) ax4 = Plot4_zoom(ax,grid.Depth,xslice4,yslice4) ax4.set_ylim(50,60) ax4.set_xlim(40,80) ax4.set_xticks([]) ax4.set_yticks([]) plt.savefig('fig_topview_bathyzoom.eps',format='eps',bbox_inches='tight') # Plot for OSM 2018 sns.set_context('talk') sns.set_style('white') fig, ax = plt.subplots(1,1) ax.contour(grid.X[xslice]/1000,grid.Y[yslice]/1000,grid.Depth[yslice,xslice],[147.5], colors=['k']) 
CS=ax.contour(grid.X[xslice]/1000,grid.Y[yslice]/1000,grid.Depth[yslice,xslice],[20,100,200,400,600,800,1000,1200], colors=['0.6','0.6','0.6','0.6','0.6','0.6','0.6','0.6']) #plt.clabel(CS, fontsize=9,inline=1,inline_spacing=1,manual=[[250,70],[20,60],[250,45]], # fmt = '%1.0f', ticks=[20,100,200]) plt.clabel(CS, fontsize=13,inline=1,inline_spacing=1, fmt = '%1.0f', ticks=[400,600,800,1000]) ax.arrow(10, 66, 30, 0 , width = 3, head_width=10, head_length=10, fc=sns.xkcd_rgb['ocean blue'], ec='k') ax.arrow(10, 20, 30, 0 , width = 3, head_width=10, head_length=10, fc=sns.xkcd_rgb['ocean blue'], ec='k') ax.text(10,75,'Alongshelf current',color=sns.xkcd_rgb['ocean blue'] ) ax.set_aspect(1) ax.set_ylabel('Cross-shelf distance (km)') ax.set_xlabel('Alongshelf distance (km)') plt.savefig('bathy_OSM2018.eps',format='eps',bbox_inches='tight') sns.set_style('white') sns.set_context('talk') f = plt.figure(figsize = (3,5)) gs = gspec.GridSpec(1, 1) colors = ['k','0.2','0.4','0.6','0.8'] labels = ['$\epsilon=5$ m','$\epsilon=10$ m','$\epsilon=25$ m','$\epsilon=50$ m','$\epsilon=100$ m'] ax4 = Plot_kv(gs[0],grid.Z,colors, labels) ax4.tick_params(axis='x', pad=1.5) ax4.tick_params(axis='y', pad=1) ax4.set_xlabel(r'$K_v$ / m$^2$s$^{-1}$',labelpad=1) ax4.set_ylabel('Depth (m)',labelpad=1) ax4.legend(loc=0) plt.savefig('epsilon_profiles.eps',format='eps',bbox_inches='tight') ###Output _____no_output_____
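###Markdown The figure above leans on two matplotlib mechanisms: nested `GridSpecFromSubplotSpec` layouts and labelled contour lines. The sketch below shows just those two pieces in isolation; the panel ratios, the toy "bathymetry" array and the contour levels are illustrative, not the ones from the paper figure. ###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gspec

fig = plt.figure(figsize=(6, 4))
# Outer grid: two rows, then each row subdivided with its own width ratios
outer = gspec.GridSpec(2, 1, height_ratios=[0.7, 1], hspace=0.3)
top = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=outer[0], width_ratios=[0.3, 0.7], wspace=0.15)
bottom = gspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=outer[1], wspace=0.2)

ax_a = fig.add_subplot(top[0])
ax_b = fig.add_subplot(top[1])
ax_c = fig.add_subplot(bottom[0])
ax_d = fig.add_subplot(bottom[1])

# Labelled contours, as used for the isobaths in the panels above
x = np.linspace(0, 120, 200)
y = np.linspace(0, 90, 150)
X, Y = np.meshgrid(x, y)
depth = 20 + 10 * Y                                   # toy depth field
cs = ax_c.contour(X, Y, depth, [100, 200, 400, 600], colors='0.6')
ax_c.clabel(cs, fmt='%1.0f', fontsize=8, inline=True)

# Panel letters in the same corner style as the figure above
for ax, lab in zip([ax_a, ax_b, ax_c, ax_d], 'abcd'):
    ax.text(0.9, 0.05, f'({lab})', transform=ax.transAxes)
plt.show()
###Output _____no_output_____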
notebooks/henrik_ueb02/Mustererkennung_in_Funkmessdaten.ipynb
###Markdown Mustererkennung in Funkmessdaten Aufgabe 1: Laden der Datenbank in Jupyter Notebook ###Code %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pprint as pp hdfs = pd.HDFStore("../../data/raw/henrik/TestMessungen_NEU.hdf") hdfs.keys ###Output _____no_output_____ ###Markdown Aufgabe 2: Inspektion eines einzelnen Dataframes ###Code df1 = hdfs.get('/x1/t1/trx_1_2') df1.head(5) # Little function to retrieve sender-receiver tuple from df columns import re def extract_snd_rcv(df): regex = r"trx_[1-4]_[1-4]_ifft_[0-9]*" snd_rcv = {x[4:7] for x in df.columns if re.search(regex, x)} return [(x[0],x[-1]) for x in snd_rcv] def get_column_counts(snd_rcv, df): col_counts = {} for snd,rcv in snd_rcv: col_counts['trx_{}_{}_ifft'.format(snd, rcv)] = len([i for i, word in enumerate(list(df.columns)) if word.startswith('trx_{}_{}_ifft'.format(snd, rcv))]) return col_counts df1_snd_rcv = extract_snd_rcv(df1) cc = get_column_counts(df1_snd_rcv, df1) pp.pprint(cc) print("Sum of measure columns: %i" % sum(cc.values())) print("# of other columns: %i" % (len(df1.columns) - sum(cc.values()))) [col for col in df1.columns if 'ifft' not in col] print(df1['target'].unique()) print("# Unique values in target: %i" % len(df1['target'].unique())) df2 = hdfs.get('/x1/t1/trx_1_4') df2.head() import re df2_snd_rcv = extract_snd_rcv(df2) cc = get_column_counts(df2_snd_rcv, df2) pp.pprint(cc) print("Sum of measure columns: %i" % sum(cc.values())) print("# of other columns: %i" % (len(df2.columns) - sum(cc.values()))) [col for col in df2.columns if 'ifft' not in col] print(df2['target'].unique()) print("# Unique values in target: %i" % len(df2['target'].unique())) ###Output ['Empty_0.0,0.0_0.0,0.0' 'Standing_1.0,1.0_1.0,1.0' 'Step_1.0,1.0_1.0,2.0' 'Standing_1.0,2.0_1.0,2.0' 'Step_1.0,2.0_2.0,2.0' 'Standing_2.0,2.0_2.0,2.0' 'Step_2.0,2.0_2.0,1.0' 'Standing_2.0,1.0_2.0,1.0' 'Step_2.0,1.0_1.0,1.0' 'Walking_0.0,0.0_0.0,0.0'] # Unique values in target: 10 ###Markdown Aufgabe 3: Visualisierung & Groundtruth-Label Visualisierung ###Code plt.figure(figsize=(20, 15)) ax = sns.heatmap(df1.loc[:,'trx_1_2_ifft_0':'trx_1_2_ifft_1999'].values, cmap='nipy_spectral_r') plt.figure(figsize=(20, 15)) ax = sns.heatmap(df2.loc[:,'trx_2_4_ifft_0':'trx_2_4_ifft_1999'].values, cmap='YlGnBu') ###Output _____no_output_____ ###Markdown Groundtruth-Label anpassen ###Code # Iterating over hdfs data and creating interim data presentation stored in data/interim/henrik/testmessungen_interim.hdf # Interim data representation contains aditional binary class (binary_target - encoding 0=empty and 1=not empty) # and multi class target (multi_target - encoding 0-9 for each possible class) from sklearn.preprocessing import LabelEncoder le = LabelEncoder() interim_path = '../../data/interim/henrik/01_testmessungen.hdf' def binary_mapper(df): def map_binary(target): if target.startswith('Empty'): return 0 else: return 1 df['binary_target'] = pd.Series(map(map_binary, df['target'])) def multiclass_mapper(df): le.fit(df['target']) df['multi_target'] = le.transform(df['target']) for key in hdfs.keys(): df = hdfs.get(key) binary_mapper(df) multiclass_mapper(df) df.to_hdf(interim_path, key) hdfs.close() ###Output _____no_output_____ ###Markdown Aufgabe 4: Einfacher Erkenner mit Hold-Out-Validierung ###Code from evaluation import * from filters import * from utility import * from features import * hdfs = pd.HDFStore('../../data/interim/henrik/01_testmessungen.hdf') # generate datasets tst = 
['1','2','3'] tst_ds = [] for t in tst: df_tst = hdfs.get('/x1/t'+t+'/trx_3_1') lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')] #df_tst_cl,_ = distortion_filter(df_tst_cl) groups = get_trx_groups(df_tst) df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target') df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single) df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature df_all = pd.concat( [df_std, df_mean, df_p2p], axis=1 ) # added p2p feature df_all = cf_std_window(df_all, window=4, label='target') df_tst_sum = generate_class_label_presence(df_all, state_variable='target') # remove index column df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()] print('Columns in Dataset:',t) print(df_tst_sum.columns) tst_ds.append(df_tst_sum.copy()) # holdout validation print(hold_out_val(tst_ds, target='target', include_self=False, cl='rf', verbose=False, random_state=1)) hdfs.close() ###Output _____no_output_____ ###Markdown Aufgabe 5: Eigener ErkennerIm Rahmen des eigenen Erkenners werden die entsprechenden Preprocessing und Mapping Schritte, anhand des originalen Datasets, erneut durchgeführt im Hinblick auf die Anpassung an den eigenen Erkenner. ###Code # Load hdfs data hdfs = pd.HDFStore("../../data/raw/henrik/TestMessungen_NEU.hdf") # Check available keys in hdf5 store hdfs.keys # Step-0 # Mapping groundtruth to 0-empty and 1-not empty and prepare for further preprocessing by # removing additional timestamp columns and index column # Storing cleaned dataframes (no index, removed _ts columns, mapped multi classes to 0-empty, 1-not empty) # to new hdfstore to `data/interim/henrik/02_testmessungen.hdf` hdf_path = "../../data/interim/henrik/02_tesmessungen.hdf" dfs = [] for key in hdfs.keys(): df = hdfs.get(key) #df['target'] = df['target'].map(lambda x: 0 if x.startswith("Empty") else 1) # drop all time stamp columns who endswith _ts cols = [c for c in df.columns if not c.lower().endswith("ts")] df = df[cols] df = df.drop('index', axis=1) df.to_hdf(hdf_path, key) hdfs.close() hdfs = pd.HDFStore(hdf_path) df = hdfs.get("/x1/t1/trx_1_2") df.head() # Step-1 repeating the previous taks 4 to get a comparable base result with the now dropped _ts and index column to improve from # generate datasets from evaluation import * from filters import * from utility import * from features import * tst = ['1','2','3'] tst_ds = [] for t in tst: df_tst = hdfs.get('/x1/t'+t+'/trx_3_1') lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')] #df_tst_cl,_ = distortion_filter(df_tst_cl) df_tst,_ = distortion_filter(df_tst) groups = get_trx_groups(df_tst) df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target') df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single) df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature df_kurt = rf_grouped(df_tst, groups=groups, fn=rf_kurtosis_single) df_all = pd.concat( [df_std, df_mean, df_p2p, df_kurt], axis=1 ) # added p2p feature df_all = cf_std_window(df_all, window=4, label='target') df_all = cf_diff(df_all, label='target') df_tst_sum = generate_class_label_presence(df_all, state_variable='target') # remove index column # df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()] print('Columns in Dataset:',t) print(df_tst_sum.columns) tst_ds.append(df_tst_sum.copy()) print(hold_out_val(tst_ds, target='target', include_self=False, cl='dt', verbose=False, 
random_state=1))

# Evaluating the different supervised learning methods provided in eval.py
# A NN evaluator was added, but its usage and hidden-layer configuration still have open issues
# For the moment only the kurtosis and cf_diff features plus the distortion filter are added to the dataset
# Feature selection is still needed!
for elem in ['rf', 'dt', 'nb', 'nn', 'knn']:
    print(hold_out_val(tst_ds, target='target', include_self=False, cl=elem, verbose=False, random_state=1))
###Output
(0.86225141715993037, 0.049405788187083757)
(0.87454461167643449, 0.055507534644738628)
(0.69467597326646258, 0.037431406160581791)
(0.45908595917510514, 0.083621392018383839)
(0.87454461167643449, 0.055507534644738628)
###Markdown Aufgabe 6: Umwandlung in einen Onlineerkenner (conversion into an online detector)

The following command starts a flask_restful server on localhost port 5444 that answers JSON POST requests. The server is implemented in the file online.py within the ipynb folder and uses the final chosen model.

Requests are sent as POST requests to http://localhost:5444/predict with a JSON body of the form {"row": "features"}; make sure the request body is valid JSON. The answer contains the row and the predicted class: {"row": "features", "p_class": "predicted class"}. For now the online predictor classifies only a single row per request.
###Code
%run -i './online.py'
###Output _____no_output_____
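###Markdown The helpers `hold_out_val`, `rf_grouped` and friends come from the course's evaluation.py and features.py, which are not shown here. As a stand-alone illustration of the same comparison, the sketch below runs a simple hold-out split over the equivalent scikit-learn classifiers on synthetic data; the `'nn'` option is omitted because its configuration in eval.py is unclear. ###Code
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the engineered feature table
X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

classifiers = {
    'rf': RandomForestClassifier(random_state=1),
    'dt': DecisionTreeClassifier(random_state=1),
    'nb': GaussianNB(),
    'knn': KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    score = clf.fit(X_train, y_train).score(X_test, y_test)   # hold-out accuracy
    print(f'{name}: {score:.3f}')
###Output _____no_output_____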
002_Python_Functions_Built_in/022_Python_float().ipynb
###Markdown All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions/tree/main/002_Python_Functions_Built_in)**

Python `float()`

The **`float()`** function returns a floating point number from a number or a string.

**Syntax**:

```python
float([x])
```

`float()` Parameters

The **`float()`** function takes a single parameter:

* **`x` (Optional)** - a number or a string to be converted to a floating point number. If it is a string, it should contain a decimal number.

**Different parameters with `float()`**:

| Parameter Type | Usage |
|:----| :--- |
| **Float number** | **Use as a floating number** |
| **Integer** | **Use as an integer** |
| **String** | **Must contain decimal numbers. Leading and trailing whitespaces are removed. Optional use of "`+`", "`-`" signs. Could contain `NaN`, `Infinity`, `inf` (lowercase or uppercase).** |

Return Value from `float()`

**`float()`** returns:

* the equivalent floating point number if an argument is passed
* 0.0 if no argument is passed
* an **`OverflowError`** exception if the argument is outside the range of a Python float
###Code
# Example 1: How float() works in Python?

# for integers
print(float(10))

# for floats
print(float(11.22))

# for string floats
print(float("-13.33"))

# for string floats with whitespaces
print(float(" -24.45\n"))

# string float error: this raises ValueError because "abc" is not a number
print(float("abc"))

# Example 2: float() for infinity and NaN (Not a Number)

# for NaN
print(float("nan"))
print(float("NaN"))

# for inf/infinity
print(float("inf"))
print(float("InF"))
print(float("InFiNiTy"))
print(float("infinity"))
###Output
nan
nan
inf
inf
inf
inf
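###Markdown Because `float("abc")` raises a `ValueError`, conversions of untrusted input are usually wrapped in a try/except. The helper below is an illustrative addition, not part of the `float()` API itself. ###Code
def to_float(value, default=None):
    """Return float(value), or `default` if the conversion fails."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return default

print(to_float("  -24.45\n"))   # -24.45
print(to_float("abc"))          # None
print(to_float("abc", 0.0))     # 0.0
print(to_float(None, 0.0))      # 0.0
###Output _____no_output_____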
notebooks/watson_changepoint_detection.ipynb
###Markdown Change Point Detection in Time Series Sensor data 1 Environment Setup 1.1 Install dependent libraries ###Code # Clear all objects from memory rm(list=ls()) # Check for installed libraries inspkgs = as.data.frame(installed.packages()[,c(1,3:4)]) inspkgs = inspkgs[is.na(inspkgs$Priority),1:2,drop=FALSE] inspkgs[1:3,] # Displying only 3 sample packages # Run the below commands for libraries installation only # if they are not istalled already as indicated by the above command # Uncomment and run the installation commands below # install.packages("sqldf") # install.packages("ggplot2") # install.packages("jsonlite", repos="http://cran.r-project.org") ###Output _____no_output_____ ###Markdown 1.2 Load dependent libraries ###Code library(sqldf) library(httr) library(RCurl) library(bitops) library(jsonlite) library("aws.s3") library(stringr) ###Output Loading required package: gsubfn Loading required package: proto Warning message in doTryCatch(return(expr), name, parentenv, handler): “unable to load shared object '/opt/conda/envs/R/lib/R/modules//R_X11.so': libXt.so.6: cannot open shared object file: No such file or directory”Warning message: “no DISPLAY variable so Tk is not available”Loading required package: RSQLite Loading required package: bitops ###Markdown 2 Configure Parameters for Change Point Detection 2.1 Read DSX Configuration file and load all parameters Complete below 2 steps before executing the rest of the cells1. Configure the parameters in JSON file and upload to Object storage2. Set the Configuration .json file name in the next section 2.1.1 Set the name of the .json configuration file ###Code # Specify file names for sample text and configuration files # Not required when reading data from database v_sampleConfigFileName = "cpd_dsx_config.json" ###Output _____no_output_____ ###Markdown 2.1.2 Insert the Object Storage file credentials to read the .json configuration file ###Code # @hidden_cell # The section below needs to be modified: # Insert your credentials to read data from your data sources and replace # the credentials.1 <- list() section below credentials.1 <-list( endpoint = "https://s3-api.us-geo.objectstorage.service.networklayer.com", api.key = "bT_623i_H_rIyjvo-DYiU59M-YQvB9hyaaIc-QS3Bu9m", iam.service.id = "iam-ServiceId-bef1701c-ab8e-445f-8e68-af5da024ee91", bucket.name = "iotchangepointcloudstorage-donotdelete-pr-fqyloftlgsbqoc", file.name = "cpd_dsx_config.txt", access.key = "ca6e393e8bbf42db9bb29bcbe9ccffd7", secret.key = "8b691c9f809ecc79624c5a0363898085071ef342f3bc3977", iam.service.endpoint = "https://iam.ng.bluemix.net/oidc/token") # This function accesses a file in your Object Storage. The definition contains your credentials. url1 <- str_replace(credentials.1$endpoint, "https://", "") url1 <- str_replace(url1, "http://", "") obj <- s3HTTP( verb = "GET", bucket = credentials.1$bucket.name, path = v_sampleConfigFileName, key = credentials.1$access.key, secret = credentials.1$secret.key, check_region = FALSE, base_url =url1) # Your data file was loaded into a textConnection object and you can process the data with your package of choice. 
configtxt = textConnection(rawToChar(obj$content)) ###Output _____no_output_____ ###Markdown 2.1.3 Read Configuration parametric values ###Code # Function to Read json parametric values f_getconfigval <- function(injsonstr, invarname) { # paramname, paramvalue injsonstr$paramvalue[injsonstr$paramname==invarname] } # Read json configuration file # Please read the documentation of 'jsonlite' to learn more about the possibilities # to adjust the data loading. # jsonlite documentation: https://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf jsonstr <- fromJSON(readLines(configtxt)) head(jsonstr) # Read json configuration parametric values # # Name of the column which holds the Time stamp of data # recorded by Sensor v_coltimestamp <- f_getconfigval(jsonstr, "coltimestamp") # Name of the column which holds the Sensor identification v_colsensorid <- f_getconfigval(jsonstr, "colsensorid") # Name of the column that stores the values measured by sensor v_colsensorvalue <- f_getconfigval(jsonstr, "colsensorvalue") # Sensor ID for which the analysis needs to be applied v_sensorid <- f_getconfigval(jsonstr, "sensorid") # Time format of the data in the data frame v_datatimeformat <- f_getconfigval(jsonstr, "datatimeformat") # Time zone for the Time stamps v_intimezone <- f_getconfigval(jsonstr, "intimezone") # Time format which is used for specifying the # time ranges in the below paraneters v_rangetimeformat <- f_getconfigval(jsonstr, "rangetimeformat") # Start Time for first series Time range v_Pfrom <- f_getconfigval(jsonstr, "Pfrom") # End Time for first series Time range v_Pto <- f_getconfigval(jsonstr, "Pto") # Start Time for second series Time range v_Cfrom <- f_getconfigval(jsonstr, "Cfrom") # End Time for second series Time range v_Cto <- f_getconfigval(jsonstr, "Cto") # Set the threshold percentage of change if detected v_thresholdpercent <- as.numeric(f_getconfigval(jsonstr, "thresholdpercent")) # Cross verify configuration parametric values print(c(v_coltimestamp, v_colsensorid, v_colsensorvalue, v_sensorid, v_datatimeformat, v_intimezone, v_rangetimeformat, v_Pfrom, v_Pto, v_Cfrom, v_Cto, v_thresholdpercent)) ###Output [1] "TIMESTAMP" "SENSORID" "SENSORVALUE" [4] "3B1" "%d-%m-%Y %H:%M:%S" "GMT" [7] "%Y%m%d %H:%M:%S" "20160324 00:00:00" "20160325 00:00:00" [10] "20160325 00:00:00" "20160326 00:00:00" "25" ###Markdown 3 Read IoT Sensor data from database ###Code # Read data from DB2 Warehouse in BMX library(ibmdbR) library(RODBC) library(Matrix) # Call function to read data for specific sensor # @hidden_cell # The section below needs to be modified: # Insert your credentials to read data from your data sources and replace # the idaConnect() section below # This connection object is used to access your data and contains your credentials. 
con_cpd <- idaConnect("DASHDB;DATABASE=BLUDB;HOSTNAME=dashdb-entry-yp-dal09-09.services.dal.bluemix.net;PORT=50000;PROTOCOL=TCPIP;", uid = "dash10720", pwd = "32uf_R_giSXX", conType = "odbc") idaInit(con_cpd) alarmdf <- ida.data.frame('DASH10720.CHANGEPOINTIOT') head(alarmdf) nrow(alarmdf) # You can close the connection with the following code: # idaClose(con_cpd) # Function to translate from one datetime format to another datetime format # Returns character strings in the converted format NOT in posix or datetime format <br/> # DateTime passed in also should be in character string format dtformatconvert <- function(indatetimes, fromdatetimeformat="%Y-%m-%d %H:%M:%S %p", todatetimeformat="%d-%m-%Y %H:%M:%S", fromtz="GMT", totz="", usetz=FALSE) { return(strftime(as.POSIXct(indatetimes, format=fromdatetimeformat, tz=fromtz, usetz=FALSE), format=todatetimeformat, tz=totz, usetz=FALSE)) } ###Output _____no_output_____ ###Markdown 3.1 Read data for 1 sensor for analysisYou can investigate the set of unique sensor ids in the data using the below command [unique(alarmdf$SensorID)] ###Code # Function to Standardise the Dataset with standard column names # <Timestamp, SensorID, SensorValue> f_readsensordata <- function(sensordf, sensorid, coltimestamp, colsensorid, colsensorval) { # sensordf <- read.csv(paste(dirname, filename, sep=""), sep=",", as.is=TRUE, header=TRUE) ##as.is=TRUE to ensure Timestamp read in as character type sensordf1 <- sensordf[,c(coltimestamp, colsensorid, colsensorval)] sensordf1 <- as.data.frame(sensordf1) names(sensordf1) <- c("TimeStamp","SensorID","SensorValue") if (sensorid != "ALL") { sensordf1 <- sensordf1[sensordf1$SensorID==sensorid,] } #View(alarmdf1) rm(sensordf) return(sensordf1) } # Read data and store in R data frame alarmdf <- f_readsensordata(sensordf = alarmdf, sensorid = v_sensorid, coltimestamp = v_coltimestamp, colsensorid=v_colsensorid, colsensorval=v_colsensorvalue) alarmdf$SensorValue <- as.numeric(alarmdf$SensorValue) head(alarmdf) ###Output _____no_output_____ ###Markdown 4 Prepare data 4.1 Sort the data by Time stamp in ascending order ###Code # Sort the data by Time stamp alarmdf <- alarmdf[with(alarmdf, order(SensorID, as.POSIXct(TimeStamp,format=v_datatimeformat, tz=v_intimezone))), ]; head(alarmdf) ###Output _____no_output_____ ###Markdown 4.2 Split data into 2 divergent sets for detecting changes ###Code # Function to split data into 2 datasets: Previous, Current # IN: Standard Data Frame, SensorID, # Previous From Time stamp, Previous To Time stamp, # Current From Time Stamp, Current To Time Stamp # OUT: Data series <br/> # series 1 (SensorID, TimeStamp, SensorValue), # series 2 (SensorID, TimeStamp, SensorValue) f_splitdataseries <- function(SensorID, Intimeformat, Datatimeformat, PFrom, PTo, CFrom, CTo) { PFromPOSIX = as.POSIXct(PFrom, format=Intimeformat, tz="GMT", usetz=FALSE); PToPOSIX = as.POSIXct(PTo, format=Intimeformat, tz="GMT", usetz=FALSE); CFromPOSIX = as.POSIXct(CFrom, format=Intimeformat, tz="GMT", usetz=FALSE); CToPOSIX = as.POSIXct(CTo, format=Intimeformat, tz="GMT", usetz=FALSE); alarmdf$TimeStampPOSIX <- as.POSIXct(alarmdf$TimeStamp, format=Datatimeformat, tz="GMT", usetz=FALSE); series1 = alarmdf[which(alarmdf$SensorID ==SensorID & alarmdf$TimeStampPOSIX >= PFromPOSIX & alarmdf$TimeStampPOSIX < PToPOSIX),]; series2 = alarmdf[which(alarmdf$SensorID ==SensorID & alarmdf$TimeStampPOSIX >= CFromPOSIX & alarmdf$TimeStampPOSIX < CToPOSIX),]; return(list(series1, series2)); } ###Output _____no_output_____ ###Markdown Get the 2 
Data series to compare ###Code # Get the 2 Data series to compare (sample values shown for help) # s = f_splitdataseries (SensorID="3B1", # Intimeformat="%Y%m%d %H:%M:%S", Datatimeformat="%d-%m-%Y %H:%M:%S", # PFrom="20160324 00:00:00", PTo="20160325 00:00:00", # CFrom="20160325 00:00:00", CTo="20160326 00:00:00") s = f_splitdataseries (SensorID=v_sensorid, Intimeformat=v_rangetimeformat, Datatimeformat=v_datatimeformat, PFrom=v_Pfrom, PTo=v_Pto, CFrom=v_Cfrom, CTo=v_Cto) # Unpack the 2 list of data frames series1 <- s[[1]] series2 <- s[[2]] head(series1) head(series2) ###Output _____no_output_____ ###Markdown 5 Analyze the Data 5.1 Plot the line graphs for both the series ###Code # Function to plot the line graphs for both the series # IN parameters: (x,y values for both the Time series in sorted order) x1, y1, x2, y2 f_plot2lines <- function(x1, y1, x2, y2) { # Draw Line plot # dev.new() plot(y1,type="l",col="green",xlab=" ", ylab=" ",pch=21,xaxt="n",yaxt="n") par(new=T) plot(y2,type="l",col="red",xlab="Time", ylab="Sensor Values",pch=21,xaxt="n",yaxt="n") axis(2,las=3,cex.axis=0.8) axis(1,at=c(1:length(x1)),c(x1),las=2,cex.axis=0.4) title("Sensor readings by Time overlay") par(new=F) } # Line plot of the 2 Data series f_plot2lines(x1 <- series1$TimeStamp, y1 <- series1$SensorValue, x2 <- series2$TimeStamp, y2 <- series2$SensorValue) ###Output _____no_output_____ ###Markdown 5.2 Plot the box plots for both the distributions ###Code # Function to plot the box plots for both the distributions <br/> # Time series order does not matter for this <br/> # IN parameters: (Sensorvalues-series 1, Sensorvalues-series 2) f_plot2boxes <- function(s1sensorvalue, s2sensorvalue) { data_list = NULL col_list = c("green", "blue") names_list = c("Previous", "Current") data_list = list() data_list[[1]] = s1sensorvalue data_list[[2]] = s2sensorvalue # dev.new() # Works in PC only boxstats <- boxplot.stats(data_list[[1]], coef=1.57, do.conf = TRUE, do.out = TRUE) #par(new=T) boxplot(data_list, las = 2, col = col_list, ylim=c(-2.0,70), names= names_list, mar = c(12, 5, 4, 2) + 0.1, main="Change point detection", sub=paste("Spread of Sensor reading distributions", ":", sep=""), ylab="Sensor Readings", coef=1.57, do.conf = TRUE, do.out = TRUE) abline(h=boxstats$stats, col="green", las=2) } # Plot the 2 box plots for the distribution f_plot2boxes(s1sensorvalue = series1$SensorValue, s2sensorvalue = series2$SensorValue) ###Output _____no_output_____ ###Markdown 6 Statistical Change point detection 6.1 Calculate the stats for both the series ###Code # Function to calculate the stats for both the series <br/> # Avg, Median, p1sd, p2sd, p3sd, n1sd, n2sd, n3sd, q0, q1, q2, q3, q4, f1range, <br/> # iqrange, f2range, sku, kurt, outliers f_seriesstats <- function(series) { boxstats <- boxplot.stats(series, coef=1.57, do.conf = TRUE, do.out = TRUE) smin <- min(series) smax <- max(series) smean <- mean(series) #Spread measures sq0 <- boxstats$stats[1] sq1 <- boxstats$stats[2] sq2 <- boxstats$stats[3] sq3 <- boxstats$stats[4] sq4 <- boxstats$stats[5] siqr <- (sq3 - sq1) # Normal distribution s1sd <- sd(series) s1sdp <- smean + s1sd s1sdn <- smean - s1sd s2sdp <- smean + (2*s1sd) s2sdn <- smean - (2*s1sd) s3sdp <- smean + (3*s1sd) s3sdn <- smean - (3*s1sd) # Outlier counts @ 2sd s2sdout <- sum(series > s2sdp) + sum(series < s2sdn) # return(list(smin, smax, smean, sq0, sq1, sq2, sq3, sq4, siqr, s1sd, s1sdp, # s1sdn, s2sdp, s2sdn, s3sdp, s3sdn)) return(list(smin=smin, smax=smax, smean=smean, sq0=sq0, sq1=sq1, sq2=sq2, sq3=sq3, 
sq4=sq4, siqr=siqr, s1sd=s1sd, s1sdp=s1sdp, s1sdn=s1sdn, s2sdp=s2sdp, s2sdn=s2sdn, s3sdp=s3sdp, s3sdn=s3sdn)) } # Compute the statistics for both series and check results s1stats <- f_seriesstats(series1$SensorValue) s2stats <- f_seriesstats(series2$SensorValue) ###Output _____no_output_____ ###Markdown 6.2 Calculate change point deviations ###Code ## Function to calculate change point deviatrion percentages f_changepercent <- function(val1, val2) { return(((val2-val1)/val1)*100) } # Calculate percentage deviation for individual stats f_serieschangepercent <- function(series1stats, series2stats) { n <- length(series1stats) cols=names(series2stats) cpdf <- data.frame(statname=character(), series1val = numeric(), series2val=numeric(), changeper=numeric()); for (i in 1:length(series2stats)) { newrow = data.frame(statname=cols[i], series1val=series1stats[[i]], series2val=series2stats[[i]], changeper=f_changepercent(series1stats[[i]], series2stats[[i]])) cpdf <- rbind(cpdf, newrow) } return(cpdf) } # Calculate overall percentage deviation and detect change point f_detectchangepoint <- function(dfcp, threshold) { # Overall percentage deviation newrow = data.frame(statname='overall', series1val=NA, series2val=NA, changeper=mean(abs(dfcp$changeper))) dfcp <- rbind(dfcp, newrow) # Overall change point percentage changepointper <- dfcp[which(dfcp$statname=="overall"),c("changeper")] # Mark change point at threshold % if(changepointper > threshold) {return(paste("Change Point DETECTED exceeding threshold: ",threshold,"% ", sep=""))} else {return(paste("Change Point NOT DETECTED at threshold: ",threshold,"% ", sep=""))} } ###Output _____no_output_____ ###Markdown Overall change percentage and individual key statistics ###Code # Overall change percentage in key statistics dfallstats <- f_serieschangepercent(s1stats, s2stats) print(dfallstats) # Detect changepoint f_detectchangepoint(dfallstats, v_thresholdpercent) ###Output statname series1val series2val changeper 1 smin 8.610000 9.225000 7.142857 2 smax 52.890000 55.965000 5.813953 3 smean 13.073021 18.168125 38.974191 4 sq0 8.610000 9.225000 7.142857 5 sq1 8.610000 11.685000 35.714286 6 sq2 9.840000 13.530000 37.500000 7 sq3 12.915000 18.142500 40.476190 8 sq4 19.065000 27.060000 41.935484 9 siqr 4.305000 6.457500 50.000000 10 s1sd 8.861824 11.444024 29.138478 11 s1sdp 21.934845 29.612149 35.000497 12 s1sdn 4.211197 6.724101 59.671956 13 s2sdp 30.796669 41.056174 33.313685 14 s2sdn -4.650627 -4.719924 1.490057 15 s3sdp 39.658493 52.500198 32.380721 16 s3sdn -13.512451 -16.163948 19.622625 ###Markdown Change Point Detection in Time Series Sensor data 1 Environment Setup 1.1 Install dependent libraries ###Code # Clear all objects from memory rm(list=ls()) # Check for installed libraries inspkgs = as.data.frame(installed.packages()[,c(1,3:4)]) inspkgs = inspkgs[is.na(inspkgs$Priority),1:2,drop=FALSE] inspkgs[1:3,] # Displying only 3 sample packages # Run the below commands for libraries installation only # if they are not istalled already as indicated by the above command # Uncomment and run the installation commands below # install.packages("sqldf") # install.packages("ggplot2") # install.packages("jsonlite", repos="http://cran.r-project.org") ###Output _____no_output_____ ###Markdown 1.2 Load dependent libraries ###Code library(sqldf) library(httr) library(RCurl) library(bitops) library(jsonlite) ###Output _____no_output_____ ###Markdown 2 Configure Parameters for Change Point Detection 2.1 Read DSX Configuration file and load all parameters Complete 
below 2 steps before executing the rest of the cells1. Configure the parameters in JSON file and upload to Object storage2. Set the Configuration .json file name in the next section 2.1.1 Set the name of the .json configuration file ###Code # Specify file names for sample text and configuration files # Not required when reading data from database v_sampleConfigFileName = "cpd_dsx_config.json" ###Output _____no_output_____ ###Markdown 2.1.2 Insert the Object Storage file credentials to read the .json configuration file ###Code # @hidden_cell # The section below needs to be modified: # Insert your credentials to read data from your data sources and replace # the idaConnect() section below # This function accesses a file in your Object Storage. The definition contains your credentials. getObjectStorageFileWithCredentials_273b1c76068e4fe4b6cb7633e12004f3 <- function(container, filename) { # This functions returns a textConnection object for a file # from Bluemix Object Storage. if(!require(httr)) install.packages('httr') if(!require(RCurl)) install.packages('RCurl') library(httr, RCurl) auth_url <- paste("https://identity.open.softlayer.com",'/v3/auth/tokens', sep= '') auth_args <- paste('{"auth": {"identity": {"password": {"user": {"domain": {"id": ', "1301cc61df814635b2dd7c9fa40e6e2a",'}, "password": ', "mHk4F6cpWl5R?*jZ",', "name": ', "member_03c4778cda0f6111933c34cba4d34b7a50f6eabb",'}}, "methods": ["password"]}}}', sep='"') auth_response <- httr::POST(url = auth_url, body = auth_args) x_subject_token <- headers(auth_response)[['x-subject-token']] auth_body <- content(auth_response) access_url <- unlist(lapply(auth_body[['token']][['catalog']], function(catalog){ if((catalog[['type']] == 'object-store')){ lapply(catalog[['endpoints']], function(endpoints){ if(endpoints[['interface']] == 'public' && endpoints[['region_id']] == 'dallas') { paste(endpoints[['url']], container, filename, sep='/')} }) } })) data <- content(httr::GET(url = access_url, add_headers ("Content-Type" = "application/json", "X-Auth-Token" = x_subject_token)), as="text") textConnection(data) } ###Output _____no_output_____ ###Markdown 2.1.3 Read Configuration parametric values ###Code # Function to Read json parametric values f_getconfigval <- function(injsonstr, invarname) { # paramname, paramvalue injsonstr$paramvalue[injsonstr$paramname==invarname] } # Read json configuration file # Please read the documentation of 'jsonlite' to learn more about the possibilities # to adjust the data loading. 
# jsonlite documentation: https://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf jsonstr <- fromJSON( readLines( getObjectStorageFileWithCredentials_273b1c76068e4fe4b6cb7633e12004f3("ChangePointDetection", v_sampleConfigFileName))) head(jsonstr) # Read json configuration parametric values # # Name of the column which holds the Time stamp of data # recorded by Sensor v_coltimestamp <- f_getconfigval(jsonstr, "coltimestamp") # Name of the column which holds the Sensor identification v_colsensorid <- f_getconfigval(jsonstr, "colsensorid") # Name of the column that stores the values measured by sensor v_colsensorvalue <- f_getconfigval(jsonstr, "colsensorvalue") # Sensor ID for which the analysis needs to be applied v_sensorid <- f_getconfigval(jsonstr, "sensorid") # Time format of the data in the data frame v_datatimeformat <- f_getconfigval(jsonstr, "datatimeformat") # Time zone for the Time stamps v_intimezone <- f_getconfigval(jsonstr, "intimezone") # Time format which is used for specifying the # time ranges in the below paraneters v_rangetimeformat <- f_getconfigval(jsonstr, "rangetimeformat") # Start Time for first series Time range v_Pfrom <- f_getconfigval(jsonstr, "Pfrom") # End Time for first series Time range v_Pto <- f_getconfigval(jsonstr, "Pto") # Start Time for second series Time range v_Cfrom <- f_getconfigval(jsonstr, "Cfrom") # End Time for second series Time range v_Cto <- f_getconfigval(jsonstr, "Cto") # Set the threshold percentage of change if detected v_thresholdpercent <- as.numeric(f_getconfigval(jsonstr, "thresholdpercent")) # Cross verify configuration parametric values print(c(v_coltimestamp, v_colsensorid, v_colsensorvalue, v_sensorid, v_datatimeformat, v_intimezone, v_rangetimeformat, v_Pfrom, v_Pto, v_Cfrom, v_Cto, v_thresholdpercent)) ###Output [1] "TIMESTAMP" "SENSORID" "SENSORVALUE" [4] "3B1" "%d-%m-%Y %H:%M:%S" "GMT" [7] "%Y%m%d %H:%M:%S" "20160324 00:00:00" "20160325 00:00:00" [10] "20160325 00:00:00" "20160326 00:00:00" "25" ###Markdown 3 Read IoT Sensor data from database ###Code # Read data from DB2 Warehouse in BMX library(ibmdbR) library(RODBC) library(Matrix) # Call function to read data for specific sensor # @hidden_cell # The section below needs to be modified: # Insert your credentials to read data from your data sources and replace # the idaConnect() section below # This connection object is used to access your data and contains your credentials. 
con_cpd <- idaConnect("DASHDB;DATABASE=BLUDB;HOSTNAME=dashdb-entry-yp-dal09-09.services.dal.bluemix.net;PORT=50000;PROTOCOL=TCPIP;", uid = "dash10720", pwd = "32uf_R_giSXX", conType = "odbc") idaInit(con_cpd) alarmdf <- ida.data.frame('DASH10720.CHANGEPOINTIOT') head(alarmdf) nrow(alarmdf) # You can close the connection with the following code: # idaClose(con_cpd) # Function to translate from one datetime format to another datetime format # Returns character strings in the converted format NOT in posix or datetime format <br/> # DateTime passed in also should be in character string format dtformatconvert <- function(indatetimes, fromdatetimeformat="%Y-%m-%d %H:%M:%S %p", todatetimeformat="%d-%m-%Y %H:%M:%S", fromtz="GMT", totz="", usetz=FALSE) { return(strftime(as.POSIXct(indatetimes, format=fromdatetimeformat, tz=fromtz, usetz=FALSE), format=todatetimeformat, tz=totz, usetz=FALSE)) } ###Output _____no_output_____ ###Markdown 3.1 Read data for 1 sensor for analysisYou can investigate the set of unique sensor ids in the data using the below command [unique(alarmdf$SensorID)] ###Code # Function to Standardise the Dataset with standard column names # <Timestamp, SensorID, SensorValue> f_readsensordata <- function(sensordf, sensorid, coltimestamp, colsensorid, colsensorval) { # sensordf <- read.csv(paste(dirname, filename, sep=""), sep=",", as.is=TRUE, header=TRUE) ##as.is=TRUE to ensure Timestamp read in as character type sensordf1 <- sensordf[,c(coltimestamp, colsensorid, colsensorval)] sensordf1 <- as.data.frame(sensordf1) names(sensordf1) <- c("TimeStamp","SensorID","SensorValue") if (sensorid != "ALL") { sensordf1 <- sensordf1[sensordf1$SensorID==sensorid,] } #View(alarmdf1) rm(sensordf) return(sensordf1) } # Read data and store in R data frame alarmdf <- f_readsensordata(sensordf = alarmdf, sensorid = v_sensorid, coltimestamp = v_coltimestamp, colsensorid=v_colsensorid, colsensorval=v_colsensorvalue) alarmdf$SensorValue <- as.numeric(alarmdf$SensorValue) head(alarmdf) ###Output _____no_output_____ ###Markdown 4 Prepare data 4.1 Sort the data by Time stamp in ascending order ###Code # Sort the data by Time stamp alarmdf <- alarmdf[with(alarmdf, order(SensorID, as.POSIXct(TimeStamp,format=v_datatimeformat, tz=v_intimezone))), ]; head(alarmdf) ###Output _____no_output_____ ###Markdown 4.2 Split data into 2 divergent sets for detecting changes ###Code # Function to split data into 2 datasets: Previous, Current # IN: Standard Data Frame, SensorID, # Previous From Time stamp, Previous To Time stamp, # Current From Time Stamp, Current To Time Stamp # OUT: Data series <br/> # series 1 (SensorID, TimeStamp, SensorValue), # series 2 (SensorID, TimeStamp, SensorValue) f_splitdataseries <- function(SensorID, Intimeformat, Datatimeformat, PFrom, PTo, CFrom, CTo) { PFromPOSIX = as.POSIXct(PFrom, format=Intimeformat, tz="GMT", usetz=FALSE); PToPOSIX = as.POSIXct(PTo, format=Intimeformat, tz="GMT", usetz=FALSE); CFromPOSIX = as.POSIXct(CFrom, format=Intimeformat, tz="GMT", usetz=FALSE); CToPOSIX = as.POSIXct(CTo, format=Intimeformat, tz="GMT", usetz=FALSE); alarmdf$TimeStampPOSIX <- as.POSIXct(alarmdf$TimeStamp, format=Datatimeformat, tz="GMT", usetz=FALSE); series1 = alarmdf[which(alarmdf$SensorID ==SensorID & alarmdf$TimeStampPOSIX >= PFromPOSIX & alarmdf$TimeStampPOSIX < PToPOSIX),]; series2 = alarmdf[which(alarmdf$SensorID ==SensorID & alarmdf$TimeStampPOSIX >= CFromPOSIX & alarmdf$TimeStampPOSIX < CToPOSIX),]; return(list(series1, series2)); } ###Output _____no_output_____ ###Markdown Get the 2 
Data series to compare ###Code # Get the 2 Data series to compare (sample values shown for help) # s = f_splitdataseries (SensorID="3B1", # Intimeformat="%Y%m%d %H:%M:%S", Datatimeformat="%d-%m-%Y %H:%M:%S", # PFrom="20160324 00:00:00", PTo="20160325 00:00:00", # CFrom="20160325 00:00:00", CTo="20160326 00:00:00") s = f_splitdataseries (SensorID=v_sensorid, Intimeformat=v_rangetimeformat, Datatimeformat=v_datatimeformat, PFrom=v_Pfrom, PTo=v_Pto, CFrom=v_Cfrom, CTo=v_Cto) # Unpack the 2 list of data frames series1 <- s[[1]] series2 <- s[[2]] head(series1) head(series2) ###Output _____no_output_____ ###Markdown 5 Analyze the Data 5.1 Plot the line graphs for both the series ###Code # Function to plot the line graphs for both the series # IN parameters: (x,y values for both the Time series in sorted order) x1, y1, x2, y2 f_plot2lines <- function(x1, y1, x2, y2) { # Draw Line plot # dev.new() plot(y1,type="l",col="green",xlab=" ", ylab=" ",pch=21,xaxt="n",yaxt="n") par(new=T) plot(y2,type="l",col="red",xlab="Time", ylab="Sensor Values",pch=21,xaxt="n",yaxt="n") axis(2,las=3,cex.axis=0.8) axis(1,at=c(1:length(x1)),c(x1),las=2,cex.axis=0.4) title("Sensor readings by Time overlay") par(new=F) } # Line plot of the 2 Data series f_plot2lines(x1 <- series1$TimeStamp, y1 <- series1$SensorValue, x2 <- series2$TimeStamp, y2 <- series2$SensorValue) ###Output _____no_output_____ ###Markdown 5.2 Plot the box plots for both the distributions ###Code # Function to plot the box plots for both the distributions <br/> # Time series order does not matter for this <br/> # IN parameters: (Sensorvalues-series 1, Sensorvalues-series 2) f_plot2boxes <- function(s1sensorvalue, s2sensorvalue) { data_list = NULL col_list = c("green", "blue") names_list = c("Previous", "Current") data_list = list() data_list[[1]] = s1sensorvalue data_list[[2]] = s2sensorvalue # dev.new() # Works in PC only boxstats <- boxplot.stats(data_list[[1]], coef=1.57, do.conf = TRUE, do.out = TRUE) #par(new=T) boxplot(data_list, las = 2, col = col_list, ylim=c(-2.0,70), names= names_list, mar = c(12, 5, 4, 2) + 0.1, main="Change point detection", sub=paste("Spread of Sensor reading distributions", ":", sep=""), ylab="Sensor Readings", coef=1.57, do.conf = TRUE, do.out = TRUE) abline(h=boxstats$stats, col="green", las=2) } # Plot the 2 box plots for the distribution f_plot2boxes(s1sensorvalue = series1$SensorValue, s2sensorvalue = series2$SensorValue) ###Output _____no_output_____ ###Markdown 6 Statistical Change point detection 6.1 Calculate the stats for both the series ###Code # Function to calculate the stats for both the series <br/> # Avg, Median, p1sd, p2sd, p3sd, n1sd, n2sd, n3sd, q0, q1, q2, q3, q4, f1range, <br/> # iqrange, f2range, sku, kurt, outliers f_seriesstats <- function(series) { boxstats <- boxplot.stats(series, coef=1.57, do.conf = TRUE, do.out = TRUE) smin <- min(series) smax <- max(series) smean <- mean(series) #Spread measures sq0 <- boxstats$stats[1] sq1 <- boxstats$stats[2] sq2 <- boxstats$stats[3] sq3 <- boxstats$stats[4] sq4 <- boxstats$stats[5] siqr <- (sq3 - sq1) # Normal distribution s1sd <- sd(series) s1sdp <- smean + s1sd s1sdn <- smean - s1sd s2sdp <- smean + (2*s1sd) s2sdn <- smean - (2*s1sd) s3sdp <- smean + (3*s1sd) s3sdn <- smean - (3*s1sd) # Outlier counts @ 2sd s2sdout <- sum(series > s2sdp) + sum(series < s2sdn) # return(list(smin, smax, smean, sq0, sq1, sq2, sq3, sq4, siqr, s1sd, s1sdp, # s1sdn, s2sdp, s2sdn, s3sdp, s3sdn)) return(list(smin=smin, smax=smax, smean=smean, sq0=sq0, sq1=sq1, sq2=sq2, sq3=sq3, 
sq4=sq4, siqr=siqr, s1sd=s1sd, s1sdp=s1sdp, s1sdn=s1sdn, s2sdp=s2sdp, s2sdn=s2sdn, s3sdp=s3sdp, s3sdn=s3sdn)) } # Compute the statistics for both series and check results s1stats <- f_seriesstats(series1$SensorValue) s2stats <- f_seriesstats(series2$SensorValue) ###Output _____no_output_____ ###Markdown 6.2 Calculate change point deviations ###Code ## Function to calculate change point deviatrion percentages f_changepercent <- function(val1, val2) { return(((val2-val1)/val1)*100) } # Calculate percentage deviation for individual stats f_serieschangepercent <- function(series1stats, series2stats) { n <- length(series1stats) cols=names(series2stats) cpdf <- data.frame(statname=character(), series1val = numeric(), series2val=numeric(), changeper=numeric()); for (i in 1:length(series2stats)) { newrow = data.frame(statname=cols[i], series1val=series1stats[[i]], series2val=series2stats[[i]], changeper=f_changepercent(series1stats[[i]], series2stats[[i]])) cpdf <- rbind(cpdf, newrow) } return(cpdf) } # Calculate overall percentage deviation and detect change point f_detectchangepoint <- function(dfcp, threshold) { # Overall percentage deviation newrow = data.frame(statname='overall', series1val=NA, series2val=NA, changeper=mean(abs(dfcp$changeper))) dfcp <- rbind(dfcp, newrow) # Overall change point percentage changepointper <- dfcp[which(dfcp$statname=="overall"),c("changeper")] # Mark change point at threshold % if(changepointper > threshold) {return(paste("Change Point DETECTED exceeding threshold: ",threshold,"% ", sep=""))} else {return(paste("Change Point NOT DETECTED at threshold: ",threshold,"% ", sep=""))} } ###Output _____no_output_____ ###Markdown Overall change percentage and individual key statistics ###Code # Overall change percentage in key statistics dfallstats <- f_serieschangepercent(s1stats, s2stats) print(dfallstats) # Detect changepoint f_detectchangepoint(dfallstats, v_thresholdpercent) ###Output statname series1val series2val changeper 1 smin 8.610000 9.225000 7.142857 2 smax 52.890000 55.965000 5.813953 3 smean 13.073021 18.168125 38.974191 4 sq0 8.610000 9.225000 7.142857 5 sq1 8.610000 11.685000 35.714286 6 sq2 9.840000 13.530000 37.500000 7 sq3 12.915000 18.142500 40.476190 8 sq4 19.065000 27.060000 41.935484 9 siqr 4.305000 6.457500 50.000000 10 s1sd 8.861824 11.444024 29.138478 11 s1sdp 21.934845 29.612149 35.000497 12 s1sdn 4.211197 6.724101 59.671956 13 s2sdp 30.796669 41.056174 33.313685 14 s2sdn -4.650627 -4.719924 1.490057 15 s3sdp 39.658493 52.500198 32.380721 16 s3sdn -13.512451 -16.163948 19.622625
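###Markdown The change point logic above is written in R. For readers who prefer Python, here is a minimal sketch of the same idea — summarise two time windows, take the mean absolute percentage deviation of the summary statistics, and flag a change point when it exceeds a threshold. The 25% threshold mirrors the `thresholdpercent` parameter above; the window data and the reduced statistic set are illustrative. ###Code
import numpy as np
import pandas as pd

def series_stats(values):
    """A reduced version of f_seriesstats: mean, sd and quartile spread."""
    v = np.asarray(values, dtype=float)
    q1, q2, q3 = np.percentile(v, [25, 50, 75])
    return {'mean': v.mean(), 'sd': v.std(ddof=1),
            'q1': q1, 'median': q2, 'q3': q3, 'iqr': q3 - q1}

def change_point(prev, curr, threshold_pct=25.0):
    """Mean absolute % deviation of the stats, compared against the threshold."""
    s1, s2 = series_stats(prev), series_stats(curr)
    pct = pd.Series({k: (s2[k] - s1[k]) / s1[k] * 100 for k in s1})
    overall = pct.abs().mean()
    return overall, overall > threshold_pct

rng = np.random.default_rng(1)
previous = rng.normal(13, 3, 500)    # stand-in for the "previous" window
current = rng.normal(18, 5, 500)     # stand-in for the "current" window
overall, detected = change_point(previous, current)
print(f'overall deviation: {overall:.1f}% -> change point detected: {detected}')
###Output _____no_output_____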
examples/jupyter_interactors.ipynb
###Markdown Basic Interactor Demo---------------------This demo shows off an interactive visualization using [Bokeh](https://bokeh.org) for plotting, and IPython interactors for widgets. The demo runs entirely inside the IPython notebook, with no Bokeh server required.The dropdown offers a choice of trig functions to plot, and the sliders control the frequency, amplitude, and phase. To run, click on `Cell->Run All` in the top menu, then scroll to the bottom and move the sliders. ###Code from ipywidgets import interact import numpy as np from bokeh.io import push_notebook, show, output_notebook from bokeh.plotting import figure output_notebook() x = np.linspace(0, 2*np.pi, 2000) y = np.sin(x) p = figure(title="simple line example", plot_height=300, plot_width=600, y_range=(-5,5), background_fill_color='#efefef') r = p.line(x, y, color="#8888cc", line_width=1.5, alpha=0.8) def update(f, w=1, A=1, phi=0): if f == "sin": func = np.sin elif f == "cos": func = np.cos r.data_source.data['y'] = A * func(w * x + phi) push_notebook() show(p, notebook_handle=True) interact(update, f=["sin", "cos"], w=(0,50), A=(1,10), phi=(0, 20, 0.1)) ###Output _____no_output_____
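###Markdown The cell above passes plain tuples to `interact`, which lets ipywidgets choose the widget types for you. If you want control over labels, step sizes and default values, the same `update` callback can be wired to explicit widgets instead; this is only a sketch of that variation and reuses the `update` function and Bokeh handle defined above. ###Code
from ipywidgets import interact, Dropdown, FloatSlider

# Explicit widgets: same behaviour as the tuple shortcuts, but with labels, steps and defaults
interact(update,
         f=Dropdown(options=["sin", "cos"], value="sin", description="function"),
         w=FloatSlider(min=0, max=50, step=0.5, value=1, description="frequency"),
         A=FloatSlider(min=1, max=10, step=0.5, value=1, description="amplitude"),
         phi=FloatSlider(min=0, max=20, step=0.1, value=0, description="phase"))
###Output _____no_output_____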
Resources/Day 7 - Data Analyst Job Analysis.ipynb
###Markdown DAY 7 - DATA ANALYST JOB ANALYSIS USING THE DATA ANALYST DATASETThis dataset was created by picklesueat and contains more than 2000 job listing for data analyst positions, with features such as:* Salary Estimate* Location* Company Rating* Job Description and more. ###Code import pandas as pd import numpy as np df = pd.read_csv(r"C:\Users\seyi\Desktop\MY DATA SCIENCE WORKS, TUTORIAL AND JOURNEY\100 days of DS\Resources\DataAnalyst.csv") df ###Output _____no_output_____ ###Markdown Data Cleaning ###Code df.isnull() df.isnull().sum() df.isnull().any() # Performing Data Cleaning on the Company Name Column df['Company Name'] = df['Company Name'].fillna(method = 'bfill') # Checking if the data still contains null values df.isnull().sum() df.head() ###Output _____no_output_____ ###Markdown If we notice, we would see that some of the columns have -1 as their values, so we need to replace it with Nan and then clean ###Code df=df.replace(-1,np.nan) df=df.replace(-1.0,np.nan) df=df.replace('-1',np.nan) df.head() ###Output _____no_output_____ ###Markdown We can see that it has been changed, we can now clean ###Code df.isnull().sum() df['Company Name'],_=df['Company Name'].str.split('\n', 1).str df['Job Title'],df['Department']=df['Job Title'].str.split(',', 1).str df['Salary Estimate'],_=df['Salary Estimate'].str.split('(', 1).str df.head() ###Output _____no_output_____ ###Markdown Splitting the Salary Estimate into max and min salary ###Code df['Min_Salary'],df['Max_Salary']=df['Salary Estimate'].str.split('-').str df['Min_Salary']=df['Min_Salary'].str.strip(' ').str.lstrip('$').str.rstrip('K').fillna(0).astype('int') df['Max_Salary']=df['Max_Salary'].str.strip(' ').str.lstrip('$').str.rstrip('K').fillna(0).astype('int') df.head() # Remvoing the Unnamed and Salary Estimate column df.drop(['Unnamed: 0', 'Salary Estimate'], axis=1 , inplace = True) df.head() ###Output _____no_output_____ ###Markdown Showing the Salary of a Data Analyst (Min Salary) ###Code df[(df['Job Title'] == 'Data Analyst') & (df['Min_Salary'])] ###Output _____no_output_____ ###Markdown Showing the Salary of a Data Analyst (Max Salary) ###Code df[(df['Job Title'] == 'Data Analyst') & (df['Max_Salary'])] ###Output _____no_output_____ ###Markdown Top 20 cities with their minimum and maximum salaries ###Code df.groupby('Location')[['Max_Salary','Min_Salary']].mean().sort_values(['Max_Salary','Min_Salary'],ascending=False).head(20) ###Output _____no_output_____ ###Markdown Top 20 Roles with their minimum and maximum salaries ###Code df.groupby('Job Title')[['Max_Salary','Min_Salary']].mean().sort_values(['Max_Salary','Min_Salary'],ascending=False).head(20) ###Output _____no_output_____ ###Markdown Revenue Generation ###Code def filter_revenue(x): revenue=0 if(x== 'Unknown / Non-Applicable' or type(x)==float): revenue=0 elif(('million' in x) and ('billion' not in x)): maxRev = x.replace('(USD)','').replace("million",'').replace('$','').strip().split('to') if('Less than' in maxRev[0]): revenue = float(maxRev[0].replace('Less than','').strip()) else: if(len(maxRev)==2): revenue = float(maxRev[1]) elif(len(maxRev)<2): revenue = float(maxRev[0]) elif(('billion'in x)): maxRev = x.replace('(USD)','').replace("billion",'').replace('$','').strip().split('to') if('+' in maxRev[0]): revenue = float(maxRev[0].replace('+','').strip())*1000 else: if(len(maxRev)==2): revenue = float(maxRev[1])*1000 elif(len(maxRev)<2): revenue = float(maxRev[0])*1000 return revenue df['Max_revenue']=df['Revenue'].apply(lambda x: filter_revenue(x)) 
###Output _____no_output_____ ###Markdown Revenue for the different Sectors ###Code df.groupby('Sector')[['Max_revenue']].mean().sort_values(['Max_revenue'],ascending=False).head(20) ###Output _____no_output_____ ###Markdown Revenue for the different Industries ###Code df.groupby('Industry')[['Max_revenue']].mean().sort_values(['Max_revenue'],ascending=False).head(20) ###Output _____no_output_____ ###Markdown The different Job Descriptions asked for under the following Job Titles and the various Companies ###Code df['Job Description'],_=df['Job Description'].str.split('\n', 1).str print('The following are the job description:') df.groupby('Job Title')[['Job Description', 'Job Title', 'Company Name']].head(30) ###Output The following are the job description: ###Markdown Which companies were established after 2000 ###Code df[(df['Founded'] > 2000)] ###Output _____no_output_____ ###Markdown Looking at the Rating of the different Job Titles ###Code df.groupby('Job Title')['Rating'].mean().sort_values(ascending=False).head(20) ###Output _____no_output_____ ###Markdown What Jobs are Easy to Apply ###Code df[(df['Easy Apply'] == 'True')].head(10) ###Output _____no_output_____
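###Markdown A note on the column-splitting pattern used throughout this notebook: unpacking results through the bare `.str` accessor (for example `df['Min_Salary'],df['Max_Salary']=df['Salary Estimate'].str.split('-').str`) no longer works in newer pandas releases. A minimal sketch of the same salary and job-title cleanup using `expand=True` is shown below; it assumes the same `df` and the same raw column formats as above. ###Code
# Split a salary range such as "$37K-$66K" into numeric min/max columns
salary = df['Salary Estimate'].str.split('-', expand=True)
df['Min_Salary'] = salary[0].str.strip().str.lstrip('$').str.rstrip('K').fillna(0).astype(int)
df['Max_Salary'] = salary[1].str.strip().str.lstrip('$').str.rstrip('K').fillna(0).astype(int)

# Split the job title from an optional department suffix ("Data Analyst, Marketing")
title_parts = df['Job Title'].str.split(',', n=1, expand=True)
df['Job Title'] = title_parts[0]
if title_parts.shape[1] > 1:          # only if at least one row actually contained a comma
    df['Department'] = title_parts[1]
###Output _____no_output_____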
EDA_and_Regression.ipynb
###Markdown Variables Description: frdwdm (company name); year (year); youbian (postal code); setup_year (year the firm opened); setup_month (month the firm opened); totaloutput (gross industrial output value); newproduct_output (gross industrial output value - new product output); total_sale (industrial sales output value); export (industrial sales output value - export delivery value); aemployment (total employees); valueadded (industrial value added); longrun_invest (long-term investment); totalfixed_asset (total fixed assets); fixed_asset_op (fixed assets - used in production and operation); depreciate (accumulated depreciation); depreciate_tyear (accumulated depreciation - current-year depreciation); fixedasset_net (annual average balance of net fixed assets); total_asset (total assets); owner_rights (total owners' equity); paicl_up_capital (paid-in capital); so_capital (state capital); co_capital (collective capital); lp_capital (legal-person capital); p_capital (individual capital); gat_capital (Hong Kong/Macau/Taiwan capital); foreign_capital (foreign capital); mainbusiness_revenue (main business revenue); mainbusiness_cost (main business cost); finance_expense (financial expenses); fe_interest (financial expenses - interest payments); total_profit (total profit); advertise_expense (advertising expense); wage_payable (total wages payable this year); wp_mainbusiness (total welfare expenses payable this year); totalintermediate_input (total industrial intermediate input); drop variables: id (company code); frdm (legal-person code); fddbr (legal representative); quhao (area code); dianhua (landline phone); fenjihao (extension number); chuanzhen (fax); email; web; registertype (registration type); lishu (administrative affiliation); totalasset (total assets, including missing values); e_state (!=so_capital); e_collective (=co_capital); e_individual (=p_capital); e_legal_person (=lp_capital); e_hmt (=gat_capital); e_foreign (=foreign_capital); e_total (=paicl_up_capital); soe_sh, collect_sh, lp_sh, p_sh, hmt_sh, foreign_sh (shares); state, collective, private, hmt, foreign (dummies); frdmid; firmid; newid (=frdmid-1); unclear: edu (employee education expense), rd_expense (research and development expense), code_cnty, cic_adj, BR_deflator, I, cic2 (first two digits of cic_adj), lny, lnl, lnk, lnm, lnk_net, investment_net (investment income), lninvest_net (=ln(investment_net)), lninvest, 'ownership', 'ownerships', ###Code # single plot: plot the number of firms in processing trade industry fig = plt.figure(figsize=(12,5)) ax1 = ts.plot(color='blue', grid=True, label='Number') h1, l1 = ax1.get_legend_handles_labels() plt.legend(h1, l1, loc=3, prop={'size': 17}) plt.show() fig.savefig('plot1.png') # Fixed Effect logit model on export temp_X4 = pd.concat([temp_y8, temp_X1], axis=1) FE1 = FixedEffectPanelModel() FE1.fit(temp_X4, 'ex', verbose=True) FE1.summary() # FE.beta # FE.beta_se # Fixed Effect logit model on import temp_X5 = pd.concat([temp_y9, temp_X1], axis=1) FE2 = FixedEffectPanelModel() FE2.fit(temp_X5, 'im', verbose=True) FE2.summary() ###Output _____no_output_____
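###Markdown `FixedEffectPanelModel` used above appears to be a custom class that is not defined in this excerpt. As an illustrative cross-check only, a firm fixed-effects logit can be approximated with standard tooling by adding firm dummies; the sketch below uses statsmodels and assumes that `temp_X4` contains the binary outcome in a column named 'ex', a firm identifier named 'firmid', and numeric regressors in the remaining columns (these names are assumptions, not taken from the excerpt). ###Code
import pandas as pd
import statsmodels.api as sm

y = temp_X4['ex']                                               # assumed binary export indicator
X = temp_X4.drop(columns=['ex', 'firmid'], errors='ignore')     # assumed numeric regressors

# Firm fixed effects as dummy variables (drop one level to avoid perfect collinearity).
# Note: with very many firms this runs into the incidental-parameters problem.
firm_dummies = pd.get_dummies(temp_X4['firmid'], prefix='firm', drop_first=True).astype(float)
X = sm.add_constant(pd.concat([X, firm_dummies], axis=1))

fe_logit = sm.Logit(y, X).fit(disp=False)
print(fe_logit.summary())
###Output _____no_output_____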
Webscraping/Trak.in/TrakData_Webscraping.ipynb
###Markdown 6. Write a program to scrap details of all the funding deals for second quarter (i.e. July 20 – September 20) from trak.in. ###Code #Connect to web driver driver=webdriver.Chrome(r"D://chromedriver.exe") #r converts string to raw string #If not r, we can use executable_path = "C:/path name" #Getting the website to driver driver.get('https://trak.in/') #When we run this line, automatically the webpage will be opened #Getting the Funding Deals details by fetching the link funding_deals=driver.find_element_by_xpath("//li[@id='menu-item-51510']/a").get_attribute('href') driver.get(funding_deals) #Taking the empty lists of the details to be scraped Date=[] Startup=[] Industry=[] SubVertical=[] Location=[] Investor=[] Investment=[] Amount=[] #For quarter July 20 - September 20, the range numbers are between 48-51. So we will consider that range and scrap data for i in range(48,51): driver.find_element_by_xpath("//div[@id='tablepress-{}_wrapper']/div/label/select/option[4]".format(i)).click() #As the tablepress value differs for each quarter, we are iterating in the for loop and converting into raw string using format #Scrapping date data from the webpage date=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[2]".format(i)) for d in date: Date.append(d.text) #Scrapping startup details data from the webpage startup=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[3]".format(i)) for s in startup: Startup.append(s.text) #Scrapping Industry details data from the webpage industry=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[4]".format(i)) for ind in industry: Industry.append(ind.text) #Scrapping Subvertical details data from the webpage sv=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[5]".format(i)) for s in sv: SubVertical.append(s.text) #Scrapping Location details data from the webpage location=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[6]".format(i)) for l in location: Location.append(l.text) #Scrapping Investor details data from the webpage investor=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[7]".format(i)) for inv in investor: Investor.append(inv.text) #Scrapping Investment type details data from the webpage investment=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[8]".format(i)) for it in investment: Investment.append(it.text) #Scrapping the amount data from the webpage amount=driver.find_elements_by_xpath("//table[@id='tablepress-{}']/tbody/tr/td[9]".format(i)) for a in amount: Amount.append(a.text) #Creating a dataframe for saving the extracted data trak_fund=pd.DataFrame({}) trak_fund['Date']=Date trak_fund['Startup']=Startup trak_fund['Industry']=Industry trak_fund['SubVertical']=SubVertical trak_fund['Location']=Location trak_fund['Investor']=Investor trak_fund['Investment']=Investment trak_fund['Amount']=Amount #Checking the dataframe trak_fund #Closing the driver driver.close() ###Output _____no_output_____
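###Markdown The scraper above assumes every table is already present when the XPath lookups run, which can fail on a slow connection. A small sketch of the same lookup guarded by an explicit wait is shown below; it reuses the `driver` and the `tablepress` IDs from the loop above and would go inside that loop, before the driver is closed. ###Code
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 15)   # wait at most 15 seconds for each table

table_xpath = "//table[@id='tablepress-48']"   # same id pattern as the loop variable i above
wait.until(EC.presence_of_element_located((By.XPATH, table_xpath)))
rows = driver.find_elements_by_xpath(table_xpath + "/tbody/tr")
print("Rows found:", len(rows))
###Output _____no_output_____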
Assignment_2_3.ipynb
###Markdown Question One PART ONEShow a breakdown of distance from home by job role and attrition. ###Code import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns employee_data = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv") ###Output _____no_output_____ ###Markdown Understanding my dataset ###Code employee_data.head() employee_data.tail() employee_data.dtypes employee_data.info() employee_data.describe() employee_data['JobRole'] print(employee_data['JobRole'].value_counts()) len(employee_data['JobRole'].value_counts()) employee_attrition = employee_data["Attrition"] plt.hist(employee_attrition, histtype='bar') plt.xlabel('Attrition') plt.ylabel('Count of Employees') plt.title(' Attrition Histogram') plt.legend(['Yes','No'], title = 'Attrition') plt.show() fig,ax = plt.subplots(figsize=(15,15)) sns.heatmap(employee_data.corr(),annot=True,ax=ax, cmap='plasma', fmt='.2f') sns.barplot(x='Education',y='MonthlyIncome',data=employee_data, hue='Attrition') sns.barplot(x='JobRole',y='DistanceFromHome',data=employee_data, hue='Attrition') ###Output _____no_output_____
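###Markdown The bar plot above compares the average distance from home per job role, split by attrition. The same breakdown as a table makes the gap between leavers and stayers easier to read; this is a small sketch that reuses the `employee_data` frame loaded above. ###Code
# Mean, median and count of DistanceFromHome for every JobRole / Attrition combination
distance_breakdown = (employee_data
                      .groupby(['JobRole', 'Attrition'])['DistanceFromHome']
                      .agg(['mean', 'median', 'count'])
                      .round(2))
print(distance_breakdown)

# Side-by-side view: one column per attrition outcome
print(distance_breakdown['mean'].unstack('Attrition'))
###Output _____no_output_____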
ImageDetection/imageDetection.ipynb
###Markdown Mini Program - Image Detection using Python Objective - This program will deal with Image detection and Extraction. It will detect various objects of interest and store them in separate image files. We will use ImageAI library ImageAI is a Python Library. It is a powerful library that can be used by developers to support deep learning and computer vision. More about it can be found in the following link - [https://github.com/OlafenwaMoses/ImageAI] Need to install ImageAI as pip install https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.1/imageai-2.0.1-py3-none-any.whl We will require a pretrained model to generate predictions for new images. There are three types of pre trained models. They are - a) RetinaNet - resnet50_coco_best_v2.0.1.h5 b) YOLOv3 - yolo.h5 c) TinyYOLOv3 - yolo-tiny.h5 These pre-trained model need to be downloaded from site and kept in the same folder from where the python file is running. They can detect 80 different kind of everyday objects We will use RetinaNet pretrained model Step 1 - Import required libraries Libraries are the modules that are available in Python to help programmer perform specific operations. In this program, we are making use 2 libraries - ObjectDetection library from imageai.Detection - Image library from IPython.display. This library has been used to print the new image created with detected objects Note: When encoutering error, install the dependent libraries appearing in the error. We faced two errors here and this is how we fixed it -conda install opencv (It may take around 40 mins to complete) -conda install keras ###Code from imageai.Detection import ObjectDetection from IPython.display import Image ###Output Using TensorFlow backend. ###Markdown Step 2 - Create a detector In Line 1 - Initiate ObjectDetection In Line 2 - Set the model type as RetinaNet as we are using RetinaNet Model. Otherwise we could use setModelTypeAsYOLOv3() or setModelTypeAsTinyYOLOv3() based on YOLO or TinyYOLO respectively In Line 3 - Set the model path. It can be resnet50_coco_best_v2.0.1.h5 or yolo.h5 or yolo-tiny.h5 based on pre-trained model chosen In Line 4 - Load the model ###Code detector = ObjectDetection() detector.setModelTypeAsRetinaNet() detector.setModelPath("resnet50_coco_best_v2.0.1.h5") detector.loadModel() ###Output _____no_output_____ ###Markdown Step 3 - Get detections There are 80 possible objects that can be detected using Pre-trained model. These are - person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop_sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donot, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair dryer, toothbrush. In Line 1 - CustomObjects can help us detect only selected number of Objects. Suppose, we want to detect apple,orange and banana. 
Then these three must be set to True In Line 2 - detectCustomObjectsFromImage function accepts - input image - create an optional output image which is stored in the same path from where python file is run - It accepts the custom objects chosen above - Minimum Percentage Probability determines integrity of detection results In Line 3 - Each object in detection is iterated to get the name of the object and how accurately they have been predicted In Line 4 - The output image is printed with detected object Note- In Line 2, where output_image_path variable was used, the type of the variable should be png as JPEG/JPG was generating errors. Whereas type of input image could be JPG/JPEG and png ###Code custom_objects = detector.CustomObjects(apple=True, orange=True, banana=True) detections = detector.detectCustomObjectsFromImage(input_image="image_fruit.jpg", output_image_path="image_fruit_new.png", custom_objects=custom_objects, minimum_percentage_probability=65) for eachObject in detections: print(eachObject["name"] + " : " + eachObject["percentage_probability"] ) print("--------------------------------") Image("image_fruit_new.png") ###Output orange : 91.4738535881 -------------------------------- orange : 96.9522714615 -------------------------------- apple : 86.9643211365 -------------------------------- banana : 92.5892591476 --------------------------------
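###Markdown The objective at the top of this notebook mentions storing the detected objects in separate image files, but the cell above only writes the single annotated image. If the installed ImageAI version supports the `extract_detected_objects` flag on `detectCustomObjectsFromImage`, the call can also crop each detection to its own file; the sketch below assumes that flag is available and reuses the `detector` and `custom_objects` created above. ###Code
# With extract_detected_objects=True the call is expected to return the detections
# together with the file paths of the cropped object images (an assumption about the
# installed ImageAI version).
detections, object_paths = detector.detectCustomObjectsFromImage(
    custom_objects=custom_objects,
    input_image="image_fruit.jpg",
    output_image_path="image_fruit_new.png",
    extract_detected_objects=True,
    minimum_percentage_probability=65)

for detection, path in zip(detections, object_paths):
    print(detection["name"], ":", detection["percentage_probability"], "->", path)
###Output _____no_output_____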
site/en/guide/sparse_tensor_guide.ipynb
###Markdown Copyright 2020 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Working with sparse tensors View on TensorFlow.org Run in Google Colab View on GitHub Download notebook When working with tensors that contain a lot of zero values, it is important to store them in a space- and time-efficient manner. Sparse tensors enable efficient storage and processing of tensors that contain a lot of zero values. Sparse tensors are used extensively in encoding schemes like [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) as part of data pre-processing in NLP applications and for pre-processing images with a lot of dark pixels in computer vision applications. Sparse tensors in TensorFlowTensorFlow represents sparse tensors through the `tf.SparseTensor` object. Currently, sparse tensors in TensorFlow are encoded using the coordinate list (COO) format. This encoding format is optimized for hyper-sparse matrices such as embeddings.The COO encoding for sparse tensors is comprised of: * `values`: A 1D tensor with shape `[N]` containing all nonzero values. * `indices`: A 2D tensor with shape `[N, rank]`, containing the indices of the nonzero values. * `dense_shape`: A 2D tensor with shape `[rank]`, specifying the shape of the tensor.A ***nonzero*** value in the context of a `tf.SparseTensor` is a value that's not explicitly encoded. It is possible to explicitly include zero values in the `values` of a COO sparse matrix, but these "explicit zeros" are generally not included when referring to nonzero values in a sparse tensor.Note: `tf.SparseTensor` does not require that indices/values be in any particular order, but several ops assume that they're in row-major order. Use `tf.sparse.reorder` to create a copy of the sparse tensor that is sorted in the canonical row-major order. Creating a `tf.SparseTensor`Construct sparse tensors by directly specifying their `values`, `indices`, and `dense_shape`. ###Code import tensorflow as tf st1 = tf.SparseTensor(indices=[[0, 3], [2, 4]], values=[10, 20], dense_shape=[3, 10]) ###Output _____no_output_____ ###Markdown When you use the `print()` function to print a sparse tensor, it shows the contents of the three component tensors: ###Code print(st1) ###Output _____no_output_____ ###Markdown It is easier to understand the contents of a sparse tensor if the nonzero `values` are aligned with their corresponding `indices`. Define a helper function to pretty-print sparse tensors such that each nonzero value is shown on its own line. 
###Code def pprint_sparse_tensor(st): s = "<SparseTensor shape=%s \n values={" % (st.dense_shape.numpy().tolist(),) for (index, value) in zip(st.indices, st.values): s += f"\n %s: %s" % (index.numpy().tolist(), value.numpy().tolist()) return s + "}>" print(pprint_sparse_tensor(st1)) ###Output _____no_output_____ ###Markdown You can also construct sparse tensors from dense tensors by using `tf.sparse.from_dense`, and convert them back to dense tensors by using `tf.sparse.to_dense`. ###Code st2 = tf.sparse.from_dense([[1, 0, 0, 8], [0, 0, 0, 0], [0, 0, 3, 0]]) print(pprint_sparse_tensor(st2)) st3 = tf.sparse.to_dense(st2) print(st3) ###Output _____no_output_____ ###Markdown Manipulating sparse tensorsUse the utilities in the `tf.sparse` package to manipulate sparse tensors. Ops like `tf.math.add` that you can use for arithmetic manipulation of dense tensors do not work with sparse tensors. Add sparse tensors of the same shape by using `tf.sparse.add`. ###Code st_a = tf.SparseTensor(indices=[[0, 2], [3, 4]], values=[31, 2], dense_shape=[4, 10]) st_b = tf.SparseTensor(indices=[[0, 2], [7, 0]], values=[56, 38], dense_shape=[4, 10]) st_sum = tf.sparse.add(st_a, st_b) print(pprint_sparse_tensor(st_sum)) ###Output _____no_output_____ ###Markdown Use `tf.sparse.sparse_dense_matmul` to multiply sparse tensors with dense matrices. ###Code st_c = tf.SparseTensor(indices=([0, 1], [1, 0], [1, 1]), values=[13, 15, 17], dense_shape=(2,2)) mb = tf.constant([[4], [6]]) product = tf.sparse.sparse_dense_matmul(st_c, mb) print(product) ###Output _____no_output_____ ###Markdown Put sparse tensors together by using `tf.sparse.concat` and take them apart by using `tf.sparse.slice`. ###Code sparse_pattern_A = tf.SparseTensor(indices = [[2,4], [3,3], [3,4], [4,3], [4,4], [5,4]], values = [1,1,1,1,1,1], dense_shape = [8,5]) sparse_pattern_B = tf.SparseTensor(indices = [[0,2], [1,1], [1,3], [2,0], [2,4], [2,5], [3,5], [4,5], [5,0], [5,4], [5,5], [6,1], [6,3], [7,2]], values = [1,1,1,1,1,1,1,1,1,1,1,1,1,1], #TODO: Shorten this using tf.ones dense_shape = [8,6]) sparse_pattern_C = tf.SparseTensor(indices = [[3,0], [4,0]], values = [1,1], dense_shape = [8,6]) sparse_patterns_list = [sparse_pattern_A, sparse_pattern_B, sparse_pattern_C] sparse_pattern = tf.sparse.concat(axis=1, sp_inputs=sparse_patterns_list) print(tf.sparse.to_dense(sparse_pattern)) sparse_slice_A = tf.sparse.slice(sparse_pattern_A, start = [0,0], size = [8,5]) sparse_slice_B = tf.sparse.slice(sparse_pattern_B, start = [0,5], size = [8,6]) sparse_slice_C = tf.sparse.slice(sparse_pattern_C, start = [0,10], size = [8,6]) print(tf.sparse.to_dense(sparse_slice_A)) print(tf.sparse.to_dense(sparse_slice_B)) print(tf.sparse.to_dense(sparse_slice_C)) ###Output _____no_output_____ ###Markdown If you're using TensorFlow 2.4 or above, use `tf.sparse.map_values` for elementwise operations on nonzero values in sparse tensors. 
###Code st2_plus_5 = tf.sparse.map_values(tf.add, st2, 5) print(tf.sparse.to_dense(st2_plus_5)) ###Output _____no_output_____ ###Markdown Note that only the nonzero values were modified – the zero values stay zero.Equivalently, you can follow the design pattern below for earlier versions of TensorFlow: ###Code st2_plus_5 = tf.SparseTensor( st2.indices, st2.values + 5, st2.dense_shape) print(tf.sparse.to_dense(st2_plus_5)) ###Output _____no_output_____ ###Markdown Using `tf.SparseTensor` with other TensorFlow APIsSparse tensors work transparently with these TensorFlow APIs:* `tf.keras`* `tf.data`* `tf.Train.Example` protobuf* `tf.function`* `tf.while_loop`* `tf.cond`* `tf.identity`* `tf.cast`* `tf.print`* `tf.saved_model`* `tf.io.serialize_sparse`* `tf.io.serialize_many_sparse`* `tf.io.deserialize_many_sparse`* `tf.math.abs`* `tf.math.negative`* `tf.math.sign`* `tf.math.square`* `tf.math.sqrt`* `tf.math.erf`* `tf.math.tanh`* `tf.math.bessel_i0e`* `tf.math.bessel_i1e`Examples are shown below for a few of the above APIs. `tf.keras`The `tf.keras` API natively supports sparse tensors without any expensive casting or conversion ops. The Keras API lets you pass sparse tensors as inputs to a Keras model. Set `sparse=True` when calling `tf.keras.Input` or `tf.keras.layers.InputLayer`. You can pass sparse tensors between Keras layers, and also have Keras models return them as outputs. If you use sparse tensors in `tf.keras.layers.Dense` layers in your model, they will output dense tensors.The example below shows you how to pass a sparse tensor as an input to a Keras model. ###Code x = tf.keras.Input(shape=(4,), sparse=True) y = tf.keras.layers.Dense(4)(x) model = tf.keras.Model(x, y) sparse_data = tf.SparseTensor( indices = [(0,0),(0,1),(0,2), (4,3),(5,0),(5,1)], values = [1,1,1,1,1,1], dense_shape = (6,4) ) model(sparse_data) model.predict(sparse_data) ###Output _____no_output_____ ###Markdown `tf.data`The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. Its core data structure is `tf.data.Dataset`, which represents a sequence of elements in which each element consists of one or more components. Building datasets with sparse tensorsBuild datasets from sparse tensors using the same methods that are used to build them from `tf.Tensor`s or NumPy arrays, such as `tf.data.Dataset.from_tensor_slices`. This op preserves the sparsity (or sparse nature) of the data. ###Code dataset = tf.data.Dataset.from_tensor_slices(sparse_data) for element in dataset: print(pprint_sparse_tensor(element)) ###Output _____no_output_____ ###Markdown Batching and unbatching datasets with sparse tensorsYou can batch (combine consecutive elements into a single element) and unbatch datasets with sparse tensors using the `Dataset.batch` and `Dataset.unbatch` methods respectively. ###Code batched_dataset = dataset.batch(2) for element in batched_dataset: print (pprint_sparse_tensor(element)) unbatched_dataset = batched_dataset.unbatch() for element in unbatched_dataset: print (pprint_sparse_tensor(element)) ###Output _____no_output_____ ###Markdown You can also use `tf.data.experimental.dense_to_sparse_batch` to batch dataset elements of varying shapes into sparse tensors. Transforming Datasets with sparse tensorsTransform and create sparse tensors in Datasets using `Dataset.map`. 
###Code transform_dataset = dataset.map(lambda x: x*2) for i in transform_dataset: print(pprint_sparse_tensor(i)) ###Output _____no_output_____ ###Markdown tf.train.Example`tf.train.Example` is a standard protobuf encoding for TensorFlow data. When using sparse tensors with `tf.train.Example`, you can:* Read variable-length data into a `tf.SparseTensor` using `tf.io.VarLenFeature`. However, you should consider using `tf.io.RaggedFeature` instead.* Read arbitrary sparse data into a `tf.SparseTensor` using `tf.io.SparseFeature`, which uses three separate feature keys to store the `indices`, `values`, and `dense_shape`. `tf.function`The `tf.function` decorator precomputes TensorFlow graphs for Python functions, which can substantially improve the performance of your TensorFlow code. Sparse tensors work transparently with both `tf.function` and [concrete functions](https://www.tensorflow.org/guide/functionobtaining_concrete_functions). ###Code @tf.function def f(x,y): return tf.sparse.sparse_dense_matmul(x,y) a = tf.SparseTensor(indices=[[0, 3], [2, 4]], values=[15, 25], dense_shape=[3, 10]) b = tf.sparse.to_dense(tf.sparse.transpose(a)) c = f(a,b) print(c) ###Output _____no_output_____ ###Markdown Distinguishing missing values from zero valuesMost ops on `tf.SparseTensor`s treat missing values and explicit zero values identically. This is by design — a `tf.SparseTensor` is supposed to act just like a dense tensor.However, there are a few cases where it can be useful to distinguish zero values from missing values. In particular, this allows for one way to encode missing/unknown data in your training data. For example, consider a use case where you have a tensor of scores (that can have any floating point value from -Inf to +Inf), with some missing scores. You can encode this tensor using a sparse tensor where the explicit zeros are known zero scores but the implicit zero values actually represent missing data and not zero. Note: This is generally not the intended usage of `tf.SparseTensor`s; you might also want to consider other techniques for encoding this, such as using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically. Note that some ops like `tf.sparse.reduce_max` do not treat missing values as if they were zero. For example, when you run the code block below, the expected output is `0`. However, because of this exception, the output is `-3`. ###Code print(tf.sparse.reduce_max(tf.sparse.from_dense([-5, 0, -3]))) ###Output _____no_output_____ ###Markdown In contrast, when you apply `tf.math.reduce_max` to a dense tensor, the output is 0 as expected. ###Code print(tf.math.reduce_max([-5, 0, -3])) ###Output _____no_output_____
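###Markdown The note above mentions a separate mask tensor as another way to mark missing entries. A minimal sketch of that idea: keep a dense score tensor plus a boolean mask, so a genuine zero score and a missing score stay distinguishable, and reduce only over the known entries. ###Code
import tensorflow as tf

scores = tf.constant([-5.0, 0.0, -3.0, 0.0])      # the last 0.0 is a placeholder, not a real score
known = tf.constant([True, True, True, False])    # False marks the missing entry

# Reduce only over the known entries, e.g. a masked maximum
masked_max = tf.reduce_max(tf.boolean_mask(scores, known))
print(masked_max)   # 0.0 -- the missing entry is ignored instead of being treated as zero
###Output _____no_output_____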
03 Analysis on Movies Data.ipynb
###Markdown --- **Table of Contents**---1. [**Introduction**](Section1)2. [**Problem Statement**](Section2)3. [**Installing & Importing Libraries**](Section3) 3.1 [**Installing Libraries**](Section31) 3.2 [**Upgrading Libraries**](Section32) 3.3 [**Importing Libraries**](Section33)4. [**Data Acquisition & Description**](Section4)5. [**Data Pre-Profiling**](Section5)6. [**Data Pre-Processing**](Section6)7. [**Data Post-Profiling**](Section7)8. [**Exploratory Data Analysis**](Section8)9. [**Summarization**](Section9) 9.1 [**Conclusion**](Section91) 9.2 [**Actionable Insights**](Section91)--- --- **1. Introduction**---- Write down some interesting introduction related to the topic.- Surf the internet and do some research about what is happening in real life.- Try to make some concrete points about your point of view. --- **2. Problem Statement**---- This section is emphasised on providing a generic introduction to the kind of problem that most companies confront.- **Example Problem Statement:** - In the last few years, the film industry has become more popular than ever. - In 2018, movies made a total income of around $41.7 billion worldwide. - But which movies make the most money at the box office? - How much does a director matter?- Derive a scenario related to the problem statement and head on to the journey of exploration.- **Example Scenario:** - Cinemania is an American box office where tickets are sold to the public for movies. - Over the past few years, it has become one of the most liked and visited places in different areas. - They are planning to add new services and enhance the quality of existing services. - To achieve the desired objective they need guidance in the most effective way. - To tackle this problem they hired a genius team of data scientists. Consider you are one of them... --- **3. Installing & Importing Libraries**---- This section is emphasised on installing and importing the necessary libraries that will be required. **Installing Libraries** ###Code !pip install -q datascience # Package that is required by pandas profiling !pip install -q pandas-profiling # Library to generate basic statistics about data # To install more libraries insert your code here.. ###Output _____no_output_____ ###Markdown **Upgrading Libraries**- **After upgrading** the libraries, you need to **restart the runtime** to keep the libraries in sync.- Make sure not to execute the cells under Installing Libraries and Upgrading Libraries again after restarting the runtime. ###Code !pip install -q --upgrade pandas-profiling # Upgrading pandas profiling to the latest version ###Output _____no_output_____ ###Markdown **Importing Libraries**- You can headstart with the basic libraries as imported inside the cell below.- If you want to import some additional libraries, feel free to do so.
###Code #------------------------------------------------------------------------------------------------------------------------------- import pandas as pd # Importing package pandas (For Panel Data Analysis) from pandas_profiling import ProfileReport # Import Pandas Profiling (To generate Univariate Analysis) #------------------------------------------------------------------------------------------------------------------------------- import numpy as np # Importing package numpys (For Numerical Python) #------------------------------------------------------------------------------------------------------------------------------- import matplotlib.pyplot as plt # Importing pyplot interface to use matplotlib import seaborn as sns # Importing seaborn library for interactive visualization %matplotlib inline #------------------------------------------------------------------------------------------------------------------------------- import scipy as sp # Importing library for scientific calculations #------------------------------------------------------------------------------------------------------------------------------- ###Output _____no_output_____ ###Markdown --- **4. Data Acquisition & Description**---- This section is emphasised on the accquiring the data and obtain some descriptive information out of it.- You could either scrap the data and then continue, or use a direct source of link (generally preferred in most cases).- You will be working with a direct source of link to head start your work without worrying about anything.- Before going further you must have a good idea about the features of the data set:|Id|Feature|Description||:--|:--|:--||01|Rank|Movie Rank| |02| Title | Title of the movie| |03| Genre | The various Genre that the movie can be associated with| |04| Description| Short description about the movie| |05| Director| Director of the movie||06| Actors| Main actors in the movie||07| Year| Year in which the movie was released||08| Runtime (minutes)| Total movie playing time||09| Rating | Movie rating||10| Votes| Vores for the movie||11| Revenue (Millions)| Revenue by the movie (in millions)||12| Metascore| Is the score of the movie on the metacritic website by critics| ###Code data = pd.read_csv(filepath_or_buffer = 'https://raw.githubusercontent.com/insaid2018/Term-1/master/Data/Projects/1000%20movies%20data.csv') print('Data Shape:', data.shape) data.head() ###Output Data Shape: (1000, 12) ###Markdown **Data Description**- To get some quick description out of the data you can use describe method defined in pandas library. ###Code # Insert your code here... ###Output _____no_output_____ ###Markdown **Data Information** ###Code # Insert your code here... ###Output _____no_output_____ ###Markdown --- **5. Data Pre-Profiling**---- This section is emphasised on getting a report about the data.- You need to perform pandas profiling and get some observations out of it... ###Code # Insert your code here... ###Output _____no_output_____ ###Markdown --- **6. Data Pre-Processing**---- This section is emphasised on performing data manipulation over unstructured data for further processing and analysis.- To modify unstructured data to strucuted data you need to verify and manipulate the integrity of the data by: - Handling missing data, - Handling redundant data, - Handling inconsistent data, - Handling outliers, - Handling typos ###Code # Insert your code here... ###Output _____no_output_____ ###Markdown --- **7. 
Data Post-Profiling**---- This section is emphasised on getting a report about the data after the data manipulation.- You may end up observing some new changes, so keep it under check and make right observations. ###Code # Insert your code here... ###Output _____no_output_____ ###Markdown --- **8. Exploratory Data Analysis**---- This section is emphasised on asking the right questions and perform analysis using the data.- Note that there is no limit how deep you can go, but make sure not to get distracted from right track. ###Code # Insert your code here... ###Output _____no_output_____
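###Markdown As a hedged starting point for the "# Insert your code here..." placeholders above, the sketch below covers a quick description, a profiling report, simple missing-value handling and one exploratory question. It assumes the `data` frame loaded in section 4 and the column names listed in its feature table; check the exact labels against `data.columns` before running, and drop `minimal=True` if the installed pandas-profiling version does not accept it. ###Code
# Sections 4-5: quick description and a pre-profiling report
print(data.describe())
print(data.info())
profile = ProfileReport(data, title='Movies Data Pre-Profiling Report', minimal=True)
profile.to_file('movies_pre_profiling.html')

# Section 6: basic pre-processing - drop duplicates, fill missing numeric values with the median
data = data.drop_duplicates()
for col in ['Revenue (Millions)', 'Metascore']:
    if col in data.columns and data[col].isnull().any():
        data[col] = data[col].fillna(data[col].median())

# Section 8: a first exploratory question - which genre combinations earn the most on average?
top_genres = (data.groupby('Genre')['Revenue (Millions)']
              .mean().sort_values(ascending=False).head(10))
print(top_genres)
###Output _____no_output_____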
Data types(dictionaries).ipynb
###Markdown DATA TYPES - Dictionaries: ###Code person = ["John", "Blue", "1980", "Canada"] person = {"first_name" : "John", "Last_name" : "Blue", "birh_year" : "1980", "country_of_birth" : "Canada"} type(person) person["first_name"] person["birh_year"] = 1979 person person["marital_status"] = "married" person person["children"] = ["Nathalie", "Ethan"] person person["age"] print(person.get("age", "invalid property")) person.get("children", "invalid property") key = "first_name" person[key] person.clear() person ###Output _____no_output_____ ###Markdown Exercise : - Create a program with a predefined dictionary for a person. Include the following information: name, gender, age, address and phone. - Ask the user what information he wants to know about the person (example: "name"), then print the value associated to that key or display a message in case the key is not found. ###Code person = {"name" : "Bhagya", "gender" : "female", "age" : "21", "address" : "Veeravasaram", "phone" : "9381498248"} key = input("what information do you want to know about the person ? : ").lower() result = person.get(key, "That information is not available") print(result) ###Output what information do you want to know about the person ? : Name Bhagya
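###Markdown As a small complement to the single-key lookups above, dictionaries can also be walked as whole key/value pairs, which is usually what reporting code needs. A short sketch using the same `person` dictionary as the exercise. ###Code
person = {"name": "Bhagya", "gender": "female", "age": "21",
          "address": "Veeravasaram", "phone": "9381498248"}

# Loop over every key/value pair
for key, value in person.items():
    print(key, ":", value)

# Membership test before looking a key up
if "email" in person:
    print(person["email"])
else:
    print("email is not stored for this person")
###Output _____no_output_____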
hw/2/cifar10_golden_sample.ipynb
###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Convolutional Neural Network (CNN) View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This tutorial demonstrates training a simple [Convolutional Neural Network](https://developers.google.com/machine-learning/glossary/convolutional_neural_network) (CNN) to classify [CIFAR images](https://www.cs.toronto.edu/~kriz/cifar.html). Because this tutorial uses the [Keras Sequential API](https://www.tensorflow.org/guide/keras/overview), creating and training our model will take just a few lines of code. Import TensorFlow ###Code import tensorflow as tf from tensorflow.keras import datasets, layers, models import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Download and prepare the CIFAR10 datasetThe CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them. ###Code (train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data() # Normalize pixel values to be between 0 and 1 train_images, test_images = train_images / 255.0, test_images / 255.0 ###Output _____no_output_____ ###Markdown Verify the dataTo verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image. ###Code class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) # The CIFAR labels happen to be arrays, # which is why you need the extra index plt.xlabel(class_names[train_labels[i][0]]) plt.show() ###Output _____no_output_____ ###Markdown Create the convolutional base The 6 lines of code below define the convolutional base using a common pattern: a stack of [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) and [MaxPooling2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layers.As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure our CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument `input_shape` to our first layer. 
###Code model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) ###Output WARNING:tensorflow:From /home/hsuchaochun/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. ###Markdown Let's display the architecture of our model so far. ###Code model.summary() ###Output Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 30, 30, 32) 896 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 13, 13, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 4, 4, 64) 36928 ================================================================= Total params: 56,320 Trainable params: 56,320 Non-trainable params: 0 _________________________________________________________________ ###Markdown Above, you can see that the output of every Conv2D and MaxPooling2D layer is a 3D tensor of shape (height, width, channels). The width and height dimensions tend to shrink as you go deeper in the network. The number of output channels for each Conv2D layer is controlled by the first argument (e.g., 32 or 64). Typically, as the width and height shrink, you can afford (computationally) to add more output channels in each Conv2D layer. Add Dense layers on topTo complete our model, you will feed the last output tensor from the convolutional base (of shape (4, 4, 64)) into one or more Dense layers to perform classification. Dense layers take vectors as input (which are 1D), while the current output is a 3D tensor. First, you will flatten (or unroll) the 3D output to 1D, then add one or more Dense layers on top. CIFAR has 10 output classes, so you use a final Dense layer with 10 outputs and a softmax activation. ###Code model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(10)) ###Output _____no_output_____ ###Markdown Here's the complete architecture of our model. 
###Code model.summary() ###Output Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 30, 30, 32) 896 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 13, 13, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 4, 4, 64) 36928 _________________________________________________________________ flatten (Flatten) (None, 1024) 0 _________________________________________________________________ dense (Dense) (None, 64) 65600 _________________________________________________________________ dense_1 (Dense) (None, 10) 650 ================================================================= Total params: 122,570 Trainable params: 122,570 Non-trainable params: 0 _________________________________________________________________ ###Markdown As you can see, our (4, 4, 64) outputs were flattened into vectors of shape (1024) before going through two Dense layers. Compile and train the model ###Code model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels)) ###Output Train on 50000 samples, validate on 10000 samples Epoch 1/10 50000/50000 [==============================] - 17s 342us/sample - loss: 1.5147 - acc: 0.4497 - val_loss: 1.1804 - val_acc: 0.5753 Epoch 2/10 50000/50000 [==============================] - 17s 336us/sample - loss: 1.1227 - acc: 0.6031 - val_loss: 1.0590 - val_acc: 0.6312 Epoch 3/10 50000/50000 [==============================] - 18s 352us/sample - loss: 0.9726 - acc: 0.6582 - val_loss: 0.9367 - val_acc: 0.6737 Epoch 4/10 50000/50000 [==============================] - 20s 394us/sample - loss: 0.8779 - acc: 0.6938 - val_loss: 0.9324 - val_acc: 0.6769 Epoch 5/10 50000/50000 [==============================] - 17s 340us/sample - loss: 0.8066 - acc: 0.7182 - val_loss: 0.9770 - val_acc: 0.6658 Epoch 6/10 50000/50000 [==============================] - 17s 347us/sample - loss: 0.7522 - acc: 0.7346 - val_loss: 0.8415 - val_acc: 0.7157 Epoch 7/10 50000/50000 [==============================] - 18s 367us/sample - loss: 0.7029 - acc: 0.7539 - val_loss: 0.8434 - val_acc: 0.7095 Epoch 8/10 50000/50000 [==============================] - 17s 335us/sample - loss: 0.6612 - acc: 0.7685 - val_loss: 0.8396 - val_acc: 0.7191 Epoch 9/10 50000/50000 [==============================] - 17s 346us/sample - loss: 0.6250 - acc: 0.7796 - val_loss: 0.8499 - val_acc: 0.7176 Epoch 10/10 50000/50000 [==============================] - 17s 332us/sample - loss: 0.5921 - acc: 0.7929 - val_loss: 0.9085 - val_acc: 0.6974 ###Markdown Evaluate the model ###Code plt.plot(history.history['accuracy'], label='accuracy') plt.plot(history.history['val_accuracy'], label = 'val_accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.ylim([0.5, 1]) plt.legend(loc='lower right') test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print(test_acc) ###Output 0.6974
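###Markdown The training log above records the metric keys as `acc`/`val_acc` (older Keras), while the plotting cell reads `history.history['accuracy']`, which would raise a `KeyError` under that version. A small version-tolerant variant of the plot, assuming the same `history` object, is sketched below. ###Code
# Pick whichever accuracy key this TensorFlow/Keras version recorded
acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'
val_key = 'val_' + acc_key

plt.plot(history.history[acc_key], label='accuracy')
plt.plot(history.history[val_key], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.show()
###Output _____no_output_____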
Cloud Pak for Data/WML/notebooks/misc/Watson OpenScale and Watson ML Engine with AI-function.ipynb
###Markdown Working with Watson Machine Learning This notebook should be run using with **Default Spark 3.0 & Python 3.8** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.8.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following services: * Watson OpenScale * Watson Machine Learning * DB2 The notebook will train, create and deploy a German Credit Risk model, configure OpenScale to monitor that deployment, and inject seven days' worth of historical records and measurements for viewing in the OpenScale Insights dashboard. Contents- [Setup](setup)- [Model building and deployment](model)- [OpenScale configuration](openscale)- [Quality monitor and feedback logging](quality)- [Fairness, drift monitoring and explanations](fairness)- [Custom monitors and metrics](custom)- [Historical data](historical) Setup Package installation ###Code import warnings warnings.filterwarnings('ignore') !pip install --upgrade pyspark==3.0.3 --no-cache | tail -n 1 !pip install --upgrade pandas==1.2.3 --no-cache | tail -n 1 !pip install --upgrade requests==2.23 --no-cache | tail -n 1 !pip install numpy==1.20.1 --no-cache | tail -n 1 !pip install SciPy --no-cache | tail -n 1 !pip install lime --no-cache | tail -n 1 !pip install --upgrade ibm-watson-machine-learning --user | tail -n 1 !pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1 ###Output _____no_output_____ ###Markdown Action: restart the kernel! Configure credentials - WOS_CREDENTIALS (CP4D)- WML_CREDENTIALS (CP4D)- DATABASE_CREDENTIALS (DB2 on CP4D or Cloud Object Storage (COS))- SCHEMA_NAME ###Code WOS_CREDENTIALS = { "url": "<URL>", "username": "<USER>", "password": "<PASSWORD>" } WML_CREDENTIALS = { "url": WOS_CREDENTIALS['url'], "username": WOS_CREDENTIALS['username'], "password": WOS_CREDENTIALS['password'], "instance_id": "wml_local", "version" : "3.5" #If your env is CP4D 4.0 then specify "4.0" instead of "3.5" } #IBM DB2 database connection format example. This is required if you don't have any existing datamarts DATABASE_CREDENTIALS = { "hostname":"***", "username":"***", "password":"***", "database":"***", "port":"***", "ssl":"***", "sslmode":"***", "certificate_base64":"***"} ###Output _____no_output_____ ###Markdown Action: put created schema name below. ###Code #This is required if you don't have any existing datamarts SCHEMA_NAME = '<SCHEMA_NAME>' ###Output _____no_output_____ ###Markdown Run the notebookAt this point, the notebook is ready to run. You can either run the cells one at a time, or click the **Kernel** option above and select **Restart and Run All** to run all the cells. Model building and deployment In this section you will learn how to train Spark MLLib model and next deploy it as web-service using Watson Machine Learning service. 
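Before loading the data and training, it is worth confirming that the credential placeholders above have actually been replaced; an un-filled placeholder only fails much later, when the clients connect. This is an optional, minimal sketch that simply looks for the literal `<...>` template markers used in the example values. ###Code # Optional sanity check: warn about credential fields that still look like template placeholders.
def check_placeholders(name, creds):
    for key, value in creds.items():
        if isinstance(value, str) and value.startswith("<") and value.endswith(">"):
            print("WARNING: {}['{}'] still looks like a placeholder: {}".format(name, key, value))

check_placeholders("WOS_CREDENTIALS", WOS_CREDENTIALS)
check_placeholders("WML_CREDENTIALS", WML_CREDENTIALS) ###Output _____no_output_____ ###Markdown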
Load the training data from github ###Code !rm german_credit_data_biased_training.csv !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_data_biased_training.csv from pyspark.sql import SparkSession import pandas as pd import json spark = SparkSession.builder.getOrCreate() pd_data = pd.read_csv("german_credit_data_biased_training.csv", sep=",", header=0) df_data = spark.read.csv(path="german_credit_data_biased_training.csv", sep=",", header=True, inferSchema=True) df_data.head() ###Output _____no_output_____ ###Markdown Explore data ###Code df_data.printSchema() print("Number of records: " + str(df_data.count())) display(df_data) ###Output _____no_output_____ ###Markdown Create a model ###Code spark_df = df_data (train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24) MODEL_NAME = "Spark German Risk Model - AI Function" DEPLOYMENT_NAME = "Spark German Risk Deployment - AI Function" print("Number of records for training: " + str(train_data.count())) print("Number of records for evaluation: " + str(test_data.count())) spark_df.printSchema() ###Output _____no_output_____ ###Markdown The code below creates a Random Forest Classifier with Spark, setting up string indexers for the categorical features and the label column. Finally, this notebook creates a pipeline including the indexers and the model, and does an initial Area Under ROC evaluation of the model. ###Code from pyspark.ml.feature import OneHotEncoder, StringIndexer, IndexToString, VectorAssembler from pyspark.ml.evaluation import BinaryClassificationEvaluator from pyspark.ml import Pipeline, Model from pyspark.ml.feature import SQLTransformer features = [x for x in spark_df.columns if x != 'Risk'] categorical_features = ['CheckingStatus', 'CreditHistory', 'LoanPurpose', 'ExistingSavings', 'EmploymentDuration', 'Sex', 'OthersOnLoan', 'OwnsProperty', 'InstallmentPlans', 'Housing', 'Job', 'Telephone', 'ForeignWorker'] categorical_num_features = [x + '_IX' for x in categorical_features] si_list = [StringIndexer(inputCol=x, outputCol=y) for x, y in zip(categorical_features, categorical_num_features)] va_features = VectorAssembler(inputCols=categorical_num_features + [x for x in features if x not in categorical_features], outputCol="features") si_label = StringIndexer(inputCol="Risk", outputCol="label").fit(spark_df) label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_label.labels) from pyspark.ml.classification import RandomForestClassifier classifier = RandomForestClassifier(featuresCol="features") pipeline = Pipeline(stages= si_list + [si_label, va_features, classifier, label_converter]) model = pipeline.fit(train_data) ###Output _____no_output_____ ###Markdown **Note**: If you want filter features from model output please replace `*` with feature names to be retained in `SQLTransformer` statement. 
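For illustration, a hedged sketch of such a filtering transformer is shown below; the column list is illustrative only (it is not part of the original pipeline), and `__THIS__` is the placeholder Spark substitutes with the DataFrame flowing through the pipeline. ###Code from pyspark.ml.feature import SQLTransformer

# Illustrative only: keep a subset of the input columns plus the prediction outputs.
output_filter = SQLTransformer(statement="""
    SELECT CheckingStatus, LoanDuration, CreditHistory, LoanAmount,
           prediction, probability, predictedLabel
    FROM __THIS__
""")

# If used, it would be appended as the final pipeline stage, e.g.:
# Pipeline(stages=si_list + [si_label, va_features, classifier, label_converter, output_filter]) ###Output _____no_output_____ ###Markdown The next cells evaluate the trained pipeline on the held-out test split.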
###Code predictions = model.transform(test_data) evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderROC') area_under_curve = evaluatorDT.evaluate(predictions) evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderPR') area_under_PR = evaluatorDT.evaluate(predictions) #default evaluation is areaUnderROC print("areaUnderROC = %g" % area_under_curve, "areaUnderPR = %g" % area_under_PR) # extra code: evaluate more metrics by exporting them into pandas and numpy from sklearn.metrics import classification_report y_pred = predictions.toPandas()['prediction'] y_pred = ['Risk' if pred == 1.0 else 'No Risk' for pred in y_pred] y_test = test_data.toPandas()['Risk'] print(classification_report(y_test, y_pred, target_names=['Risk', 'No Risk'])) ###Output _____no_output_____ ###Markdown Save training data to Cloud Object Storage Cloud object storage detailsIn next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet please visit getting started with COS tutorial. You can find COS_API_KEY_ID and COS_RESOURCE_CRN variables in Service Credentials in menu of your COS instance. Used COS Service Credentials must be created with Role parameter set as Writer. Later training data file will be loaded to the bucket of your instance and used as training refecence in subsription.COS_ENDPOINT variable can be found in Endpoint field of the menu. ###Code IAM_URL="https://iam.ng.bluemix.net/oidc/token" COS_API_KEY_ID = "<COS_API_KEY>" COS_RESOURCE_CRN = "<RESOURCE_INSTANCE_ID>" COS_ENDPOINT = "<COS_ENDPOINT>" # Current list avaiable at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints BUCKET_NAME = "<BUCKET_NAME>" #example: "credit-risk-training-data" training_data_file_name="german_credit_data_biased_training.csv" import ibm_boto3 from ibm_botocore.client import Config, ClientError cos_client = ibm_boto3.resource("s3", ibm_api_key_id=COS_API_KEY_ID, ibm_service_instance_id=COS_RESOURCE_CRN, ibm_auth_endpoint="https://iam.bluemix.net/oidc/token", config=Config(signature_version="oauth"), endpoint_url=COS_ENDPOINT ) with open(training_data_file_name, "rb") as file_data: cos_client.Object(BUCKET_NAME, training_data_file_name).upload_fileobj( Fileobj=file_data ) ###Output _____no_output_____ ###Markdown Publish the model In this section, the notebook uses Watson Machine Learning to save the model (including the pipeline) to the WML instance. Previous versions of the model are removed so that the notebook can be run again, resetting all data for another demo. 
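Before publishing, it can be worth confirming that the training CSV actually landed in the bucket, since OpenScale will later read the training data reference from there. This is an optional, minimal sketch assuming the `cos_client` and `BUCKET_NAME` defined above. ###Code # Optional: list the bucket contents to confirm the training data upload succeeded.
for obj in cos_client.Bucket(BUCKET_NAME).objects.all():
    print(obj.key, obj.size) ###Output _____no_output_____ ###Markdown The cells below connect to WML, create or reuse a deployment space, and store the model there.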
###Code import json from ibm_watson_machine_learning import APIClient wml_client = APIClient(WML_CREDENTIALS) wml_client.version space_name = "<SPACE_NAME>" # create the space and set it as default space_meta_data = { wml_client.spaces.ConfigurationMetaNames.NAME : space_name, wml_client.spaces.ConfigurationMetaNames.DESCRIPTION : 'tutorial_space' } spaces = wml_client.spaces.get_details()['resources'] space_id = None for space in spaces: if space['entity']['name'] == space_name: space_id = space["metadata"]["id"] if space_id is None: space_id = wml_client.spaces.store(meta_props=space_meta_data)["metadata"]["id"] print(space_id) wml_client.set.default_space(space_id) ###Output _____no_output_____ ###Markdown Remove existing model and deployment ###Code deployments_list = wml_client.deployments.get_details() for deployment in deployments_list["resources"]: model_id = deployment["entity"]["asset"]["id"] deployment_id = deployment["metadata"]["id"] if deployment["metadata"]["name"] == DEPLOYMENT_NAME: print("Deleting deployment id", deployment_id) wml_client.deployments.delete(deployment_id) print("Deleting model id", model_id) wml_client.repository.delete(model_id) wml_client.repository.list_models() ###Output _____no_output_____ ###Markdown Add training data reference either from DB2 on CP4D or Cloud Object Storage ###Code # COS training data reference example format training_data_references = [ { "id": "Credit Risk", "type": "s3", "connection": { "access_key_id": COS_API_KEY_ID, "endpoint_url": COS_ENDPOINT, "resource_instance_id":COS_RESOURCE_CRN }, "location": { "bucket": BUCKET_NAME, "path": training_data_file_name, } } ] software_spec_uid = wml_client.software_specifications.get_id_by_name("spark-mllib_3.0") print("Software Specification ID: {}".format(software_spec_uid)) model_props = { wml_client._models.ConfigurationMetaNames.NAME:"{}".format(MODEL_NAME), wml_client._models.ConfigurationMetaNames.TYPE: "mllib_3.0", wml_client._models.ConfigurationMetaNames.SOFTWARE_SPEC_UID: software_spec_uid, #wml_client._models.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: training_data_references, wml_client._models.ConfigurationMetaNames.LABEL_FIELD: "Risk", } print("Storing model ...") published_model_details = wml_client.repository.store_model( model=model, meta_props=model_props, training_data=train_data, pipeline=pipeline) model_uid = wml_client.repository.get_model_uid(published_model_details) print("Done") print("Model ID: {}".format(model_uid)) wml_client.repository.list_models() ###Output _____no_output_____ ###Markdown Deploy the model The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. The deployed model will have a scoring URL you can use to send data to the model for predictions. 
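Before creating the deployment, you can optionally double-check that the model was stored with the expected metadata; a minimal sketch using the `model_uid` returned above. ###Code # Optional: inspect the stored model's metadata before deploying it.
stored_model_details = wml_client.repository.get_details(model_uid)
print(json.dumps(stored_model_details["entity"], indent=2, default=str)) ###Output _____no_output_____ ###Markdown The next cell creates the online deployment.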
###Code deployment_details = wml_client.deployments.create( model_uid, meta_props={ wml_client.deployments.ConfigurationMetaNames.NAME: "{}".format(DEPLOYMENT_NAME), wml_client.deployments.ConfigurationMetaNames.ONLINE: {} } ) scoring_url = wml_client.deployments.get_scoring_href(deployment_details) deployment_uid=wml_client.deployments.get_uid(deployment_details) print("Scoring URL:" + scoring_url) print("Model id: {}".format(model_uid)) print("Deployment id: {}".format(deployment_uid)) ###Output _____no_output_____ ###Markdown Define AI Function ###Code ai_params = {"wml_credentials": WML_CREDENTIALS, "deployment_uid": deployment_uid, "space_id": space_id } #AI function definition def score_generator(params=ai_params): import json from ibm_watson_machine_learning import APIClient wml_credentials = params["wml_credentials"] deployment_uid = params["deployment_uid"] space_id = params["space_id"] client = APIClient(wml_credentials) client.set.default_space(space_id) def score(payload): scores_area = client.deployments.score(deployment_uid, payload) return scores_area return score fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"] values = [ ["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"] ] sample_payload = { "input_data": [ {"fields": fields,"values": values}]} score = score_generator() scores_ai = score(sample_payload) wml_client.set.default_space(space_id) print(scores_ai) #Store the function func_name = 'Credit Risk python Fn Model' meta_data = { wml_client.repository.FunctionMetaNames.NAME: func_name, #Note if there is specification related exception then use "default_py3.7_opence" instead of default_py3.8 wml_client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: wml_client.software_specifications.get_id_by_name("default_py3.8") } function_details = wml_client.repository.store_function(meta_props=meta_data, function=score_generator) function_details ai_function_uid = function_details['metadata']['id'] #Generate the deployment function_deployment_details = wml_client.deployments.create(artifact_uid=ai_function_uid, meta_props={wml_client.deployments.ConfigurationMetaNames.NAME: 'dep_' + func_name,wml_client.deployments.ConfigurationMetaNames.ONLINE: {}}) ai_func_deployment_uid = wml_client.deployments.get_uid(function_deployment_details) print("AI Function Deployment UID:" + ai_func_deployment_uid) scoring_url = function_deployment_details["entity"]["status"]["online_url"]["url"] print(scoring_url) ###Output _____no_output_____ ###Markdown Configure OpenScale The notebook will now import the necessary libraries and set up a Python OpenScale client. 
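Before wiring up OpenScale, a quick smoke test of the deployed AI function confirms that the scoring endpoint responds; this optional sketch reuses the `sample_payload` defined earlier. ###Code # Optional smoke test: score the deployed AI function once through the WML client.
smoke_test = wml_client.deployments.score(ai_func_deployment_uid, sample_payload)
print(json.dumps(smoke_test, indent=2)[:500]) ###Output _____no_output_____ ###Markdown The cells below create the OpenScale client and set up the datamart.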
###Code from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator from ibm_watson_openscale import APIClient from ibm_watson_openscale import * from ibm_watson_openscale.supporting_classes.enums import * from ibm_watson_openscale.supporting_classes import * authenticator = CloudPakForDataAuthenticator( url=WOS_CREDENTIALS['url'], username=WOS_CREDENTIALS['username'], password=WOS_CREDENTIALS['password'], disable_ssl_verification=True ) instance_id='<SERVICE_INSTANCE_ID>' #Datamart id wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],authenticator=authenticator,service_instance_id = instance_id) wos_client.version ###Output _____no_output_____ ###Markdown Create datamart Set up datamart Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were supplied above, the datamart will be created in that database. If an OpenScale datamart already exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten. Prior instances of the German Credit model will be removed from OpenScale monitoring. ###Code wos_client.data_marts.show() data_marts = wos_client.data_marts.list().result.data_marts if len(data_marts) == 0: if DATABASE_CREDENTIALS is not None: if SCHEMA_NAME is None: print("Please specify the SCHEMA_NAME and rerun the cell") print('Setting up external datamart') added_data_mart_result = wos_client.data_marts.add( background_mode=False, name="WOS Data Mart", description="Data Mart created by WOS tutorial notebook", database_configuration=DatabaseConfigurationRequest( database_type=DatabaseType.DB2, credentials=PrimaryStorageCredentialsLong( hostname=DATABASE_CREDENTIALS['hostname'], username=DATABASE_CREDENTIALS['username'], password=DATABASE_CREDENTIALS['password'], db=DATABASE_CREDENTIALS['database'], port=DATABASE_CREDENTIALS['port'], ssl=DATABASE_CREDENTIALS['ssl'], sslmode=DATABASE_CREDENTIALS['sslmode'], certificate_base64=DATABASE_CREDENTIALS['certificate_base64'] ), location=LocationSchemaName( schema_name= SCHEMA_NAME ) ) ).result else: print('Setting up internal datamart') added_data_mart_result = wos_client.data_marts.add( background_mode=False, name="WOS Data Mart", description="Data Mart created by WOS tutorial notebook", internal_database = True).result data_mart_id = added_data_mart_result.metadata.id else: data_mart_id=data_marts[0].metadata.id print('Using existing datamart {}'.format(data_mart_id)) ###Output _____no_output_____ ###Markdown Remove existing service providers connected with the WML instance. Multiple service providers for the same engine instance are available in Watson OpenScale. To avoid duplicate service providers for the WML instance used in this tutorial notebook, the following code deletes any existing service provider(s) and then adds a new one. ###Code SERVICE_PROVIDER_NAME = "WML AI function - WOS notebook" SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WML AI function - WOS notebook." 
service_providers = wos_client.service_providers.list().result.service_providers for service_provider in service_providers: service_instance_name = service_provider.entity.name if service_instance_name == SERVICE_PROVIDER_NAME: service_provider_id = service_provider.metadata.id wos_client.service_providers.delete(service_provider_id) print("Deleted existing service_provider for WML instance: {}".format(service_provider_id)) ###Output _____no_output_____ ###Markdown Add service providerWatson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.**Note:** You can bind more than one engine instance if needed by calling `wos_client.service_providers.add` method. Next, you can refer to particular service provider using `service_provider_id`. ###Code added_service_provider_result = wos_client.service_providers.add( name=SERVICE_PROVIDER_NAME, description=SERVICE_PROVIDER_DESCRIPTION, service_type=ServiceTypes.WATSON_MACHINE_LEARNING, deployment_space_id = space_id, operational_space_id = "production", credentials=WMLCredentialsCP4D( url=None, username=None, password=None, instance_id=None ), background_mode=False ).result service_provider_id = added_service_provider_result.metadata.id wos_client.service_providers.show() print(deployment_uid) asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id,deployment_id=ai_func_deployment_uid, deployment_space_id = space_id).result['resources'][0] asset_deployment_details model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=ai_func_deployment_uid,deployment_space_id=space_id) model_asset_details_from_deployment ###Output _____no_output_____ ###Markdown Subscriptions Remove existing credit risk subscriptions This code removes previous subscriptions to the German Credit model to refresh the monitors with the new model and new data. ###Code wos_client.subscriptions.show() subscriptions = wos_client.subscriptions.list().result.subscriptions for subscription in subscriptions: sub_model_id = subscription.entity.asset.asset_id if sub_model_id == ai_function_uid: wos_client.subscriptions.delete(subscription.metadata.id) print('Deleted existing subscription for model', sub_model_id) ###Output _____no_output_____ ###Markdown This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself. 
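The feature and categorical column lists passed to the subscription below are hard-coded; they can also be derived from the training dataframe if preferred. A minimal sketch, assuming the `pd_data` frame loaded earlier: ###Code # Derive the feature and categorical column lists from the pandas training dataframe.
feature_columns = [col for col in pd_data.columns if col != "Risk"]
categorical_columns = [col for col in feature_columns if pd_data[col].dtype == object]
print("feature columns:", feature_columns)
print("categorical columns:", categorical_columns) ###Output _____no_output_____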
###Code subscription_details = wos_client.subscriptions.add( data_mart_id=data_mart_id, service_provider_id=service_provider_id, asset=Asset( asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"], name=model_asset_details_from_deployment["entity"]["asset"]["name"], url=model_asset_details_from_deployment["entity"]["asset"]["url"], asset_type=AssetTypes.MODEL, input_data_type=InputDataType.STRUCTURED, problem_type=ProblemType.BINARY_CLASSIFICATION ), deployment=AssetDeploymentRequest( deployment_id=asset_deployment_details['metadata']['guid'], name=asset_deployment_details['entity']['name'], deployment_type= DeploymentTypes.ONLINE, url=asset_deployment_details['entity']['scoring_endpoint']['url'] ), asset_properties=AssetPropertiesRequest( label_column='Risk', probability_fields=['probability'], prediction_field='predictedLabel', feature_fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"], categorical_fields = ["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"], training_data_reference=TrainingDataReference(type='cos', location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME, file_name = training_data_file_name), connection=COSTrainingDataReferenceConnection.from_dict({ "resource_instance_id": COS_RESOURCE_CRN, "url": COS_ENDPOINT, "api_key": COS_API_KEY_ID, "iam_url": IAM_URL})), training_data_schema=None ), background_mode=False ).result subscription_id = subscription_details.metadata.id subscription_id import time time.sleep(5) payload_data_set_id = None payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING, target_target_id=subscription_id, target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id if payload_data_set_id is None: print("Payload data set not found. Please check subscription status.") else: print("Payload data set id: ", payload_data_set_id) wos_client.data_sets.show() ###Output _____no_output_____ ###Markdown Score the model so we can configure monitors Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model. 
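If the payload data set lookup above reported that no data set was found, re-fetching the subscription and checking its reported status can help narrow down why before scoring. This is an optional sketch; the exact fields exposed on the status object may vary with the client version. ###Code # Optional: re-fetch the subscription and print its reported status.
subscription_info = wos_client.subscriptions.get(subscription_id).result
print(subscription_info.entity.status) ###Output _____no_output_____ ###Markdown The next cell sends a small batch of records to the deployed AI function.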
###Code fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"] values = [ ["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"], ["no_checking",24,"prior_payments_delayed","furniture",4567,"500_to_1000","1_to_4",4,"male","none",4,"savings_insurance",36,"none","free",2,"management_self-employed",1,"none","yes"], ["0_to_200",26,"all_credits_paid_back","car_new",863,"less_100","less_1",2,"female","co-applicant",2,"real_estate",38,"none","own",1,"skilled",1,"none","yes"], ["0_to_200",14,"no_credits","car_new",2368,"less_100","1_to_4",3,"female","none",3,"real_estate",29,"none","own",1,"skilled",1,"none","yes"], ["0_to_200",4,"no_credits","car_new",250,"less_100","unemployed",2,"female","none",3,"real_estate",23,"none","rent",1,"management_self-employed",1,"none","yes"], ["no_checking",17,"credits_paid_to_date","car_new",832,"100_to_500","1_to_4",2,"male","none",2,"real_estate",42,"none","own",1,"skilled",1,"none","yes"], ["no_checking",33,"outstanding_credit","appliances",5696,"unknown","greater_7",4,"male","co-applicant",4,"unknown",54,"none","free",2,"skilled",1,"yes","yes"], ["0_to_200",13,"prior_payments_delayed","retraining",1375,"100_to_500","4_to_7",3,"male","none",3,"real_estate",37,"none","own",2,"management_self-employed",1,"none","yes"] ] payload_scoring = { "input_data": [ {"fields": fields,"values": values}]} scoring_response = wml_client.deployments.score(ai_func_deployment_uid, payload_scoring) print('Single record scoring result:', '\n fields:', scoring_response['predictions'][0]['fields'], '\n values: ', scoring_response['predictions'][0]['values'][0]) ###Output _____no_output_____ ###Markdown Check if WML payload logging worked else manually store payload records ###Code import uuid from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) if pl_records_count == 0: print("Payload logging did not happen, performing explicit payload logging.") wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord( scoring_id=str(uuid.uuid4()), request=payload_scoring, response={"fields": scoring_response['predictions'][0]['fields'], "values":scoring_response['predictions'][0]['values']}, response_time=460 )]) time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) ###Output _____no_output_____ ###Markdown Quality monitoring and feedback logging Enable quality monitoring The code below waits ten seconds to allow the payload logging table to be set up before it begins enabling monitors. First, it turns on the quality (accuracy) monitor and sets an alert threshold of 70%. 
OpenScale will show an alert on the dashboard if the model accuracy measurement (area under the curve, in the case of a binary classifier) falls below this threshold.The second paramater supplied, min_records, specifies the minimum number of feedback records OpenScale needs before it calculates a new measurement. The quality monitor runs hourly, but the accuracy reading in the dashboard will not change until an additional 50 feedback records have been added, via the user interface, the Python client, or the supplied feedback endpoint. ###Code import time time.sleep(10) target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "min_feedback_data_size": 50 } thresholds = [ { "metric_id": "area_under_roc", "type": "lower_limit", "value": .80 } ] quality_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID, target=target, parameters=parameters, thresholds=thresholds ).result quality_monitor_instance_id = quality_monitor_details.metadata.id quality_monitor_instance_id ###Output _____no_output_____ ###Markdown Feedback logging The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface. ###Code !rm additional_feedback_data_v2.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/additional_feedback_data_v2.json ###Output _____no_output_____ ###Markdown Get feedback logging dataset ID ###Code feedback_dataset_id = None feedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK, target_target_id=subscription_id, target_target_type=TargetTypes.SUBSCRIPTION).result print(feedback_dataset) feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id if feedback_dataset_id is None: print("Feedback data set not found. Please check quality monitor status.") with open('additional_feedback_data_v2.json') as feedback_file: additional_feedback_data = json.load(feedback_file) wos_client.data_sets.store_records(feedback_dataset_id, request_body=additional_feedback_data, background_mode=False) wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id) run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id) ###Output _____no_output_____ ###Markdown Fairness, drift monitoring and explanations The code below configures fairness monitoring for our model. It turns on monitoring for two features, Sex and Age. In each case, we must specify: * Which model feature to monitor * One or more **majority** groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes * One or more **minority** groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes * The threshold at which we would like OpenScale to display an alert if the fairness measurement falls below (in this case, 95%)Additionally, we must specify which outcomes from the model are favourable outcomes, and which are unfavourable. 
We must also provide the number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 200 records have been added. Finally, to calculate fairness, OpenScale must perform some calculations on the training data, so we provide the dataframe containing the data. ###Code wos_client.monitor_instances.show() target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "features": [ {"feature": "Sex", "majority": ['male'], "minority": ['female'], "threshold": 0.95 }, {"feature": "Age", "majority": [[26, 75]], "minority": [[18, 25]], "threshold": 0.95 } ], "favourable_class": ["No Risk"], "unfavourable_class": ["Risk"], "min_records": 100 } fairness_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID, target=target, parameters=parameters).result fairness_monitor_instance_id =fairness_monitor_details.metadata.id fairness_monitor_instance_id ###Output _____no_output_____ ###Markdown Drift configuration ###Code monitor_instances = wos_client.monitor_instances.list().result.monitor_instances for monitor_instance in monitor_instances: monitor_def_id=monitor_instance.entity.monitor_definition_id if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id: wos_client.monitor_instances.delete(monitor_instance.metadata.id) print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id) target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "min_samples": 100, "drift_threshold": 0.1, "train_drift_model": True, "enable_model_drift": False, "enable_data_drift": True } drift_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID, target=target, parameters=parameters ).result drift_monitor_instance_id = drift_monitor_details.metadata.id drift_monitor_instance_id ###Output _____no_output_____ ###Markdown Score the model again now that monitoring is configured This next section randomly selects 200 records from the data feed and sends those records to the model for predictions. This is enough to exceed the minimum threshold for records set in the previous section, which allows OpenScale to begin calculating fairness. 
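As an optional checkpoint before sending that traffic, you can confirm the drift monitor instance was created with the intended parameters; a minimal sketch using the instance id from the cell above. ###Code # Optional: fetch the drift monitor instance and print its configured parameters.
drift_instance = wos_client.monitor_instances.get(drift_monitor_instance_id).result
print(drift_instance.entity.parameters) ###Output _____no_output_____ ###Markdown The next cells download the scoring feed used to generate that traffic.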
###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_feed.json !ls -lh german_credit_feed.json ###Output _____no_output_____ ###Markdown Score 200 randomly chosen records ###Code import random with open('german_credit_feed.json', 'r') as scoring_file: scoring_data = json.load(scoring_file) fields = scoring_data['fields'] values = [] for _ in range(200): values.append(random.choice(scoring_data['values'])) payload_scoring = {"input_data": [{"fields": fields, "values": values}]} scoring_response = wml_client.deployments.score(ai_func_deployment_uid, payload_scoring) time.sleep(5) if pl_records_count == 8: print("Payload logging did not happen, performing explicit payload logging.") wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord( scoring_id=str(uuid.uuid4()), request=payload_scoring, response=scoring_response, response_time=460 )]) time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) print('Number of records in payload table: ', wos_client.data_sets.get_records_count(data_set_id=payload_data_set_id)) ###Output _____no_output_____ ###Markdown Run fairness monitor Kick off a fairness monitor run on current data. The monitor runs hourly, but can be manually initiated using the Python client, the REST API, or the graphical user interface. ###Code time.sleep(5) run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False) time.sleep(10) wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id) ###Output _____no_output_____ ###Markdown Run drift monitor Kick off a drift monitor run on current data. The monitor runs every hour, but can be manually initiated using the Python client, the REST API. ###Code drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False) time.sleep(5) wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id) ###Output _____no_output_____ ###Markdown Configure Explainability Finally, we provide OpenScale with the training data to enable and configure the explainability features. 
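Before enabling explainability, a short checkpoint can list which monitors are already attached to the subscription; this sketch reuses the listing pattern from the drift configuration cell above. ###Code # Checkpoint: list the monitors currently attached to this subscription.
for mi in wos_client.monitor_instances.list().result.monitor_instances:
    if mi.entity.target.target_id == subscription_id:
        print(mi.entity.monitor_definition_id, mi.metadata.id) ###Output _____no_output_____ ###Markdown The next cell enables the explainability monitor for the subscription.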
###Code target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "enabled": True } explainability_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID, target=target, parameters=parameters ).result explainability_monitor_id = explainability_details.metadata.id ###Output _____no_output_____ ###Markdown Run explanation for sample record ###Code pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]] print("Running explanations on scoring IDs: {}".format(scoring_ids)) explanation_types = ["lime", "contrastive"] result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result print(result) ###Output _____no_output_____ ###Markdown Custom monitors and metrics Register custom monitor ###Code def get_definition(monitor_name): monitor_definitions = wos_client.monitor_definitions.list().result.monitor_definitions for definition in monitor_definitions: if monitor_name == definition.entity.name: return definition return None monitor_name = 'my model performance' metrics = [MonitorMetricRequest(name='sensitivity', thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.8)]), MonitorMetricRequest(name='specificity', thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.75)])] tags = [MonitorTagRequest(name='region', description='customer geographical region')] existing_definition = get_definition(monitor_name) if existing_definition is None: custom_monitor_details = wos_client.monitor_definitions.add(name=monitor_name, metrics=metrics, tags=tags, background_mode=False).result else: custom_monitor_details = existing_definition ###Output _____no_output_____ ###Markdown Show available monitors types ###Code wos_client.monitor_definitions.show() ###Output _____no_output_____ ###Markdown Get monitors uids and details ###Code custom_monitor_id = custom_monitor_details.metadata.id print(custom_monitor_id) custom_monitor_details = wos_client.monitor_definitions.get(monitor_definition_id=custom_monitor_id).result print('Monitor definition details:', custom_monitor_details) ###Output _____no_output_____ ###Markdown Enable custom monitor for subscription ###Code target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) thresholds = [MetricThresholdOverride(metric_id='sensitivity', type = MetricThresholdTypes.LOWER_LIMIT, value=0.9)] custom_monitor_instance_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=custom_monitor_id, target=target, thresholds=thresholds ).result ###Output _____no_output_____ ###Markdown Get monitor instance id and configuration details ###Code custom_monitor_instance_id = custom_monitor_instance_details.metadata.id custom_monitor_instance_details = wos_client.monitor_instances.get(custom_monitor_instance_id).result print(custom_monitor_instance_details) ###Output _____no_output_____ ###Markdown Storing custom metrics ###Code from datetime import datetime, timezone, timedelta from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MonitorMeasurementRequest custom_monitoring_run_id = "11122223333111abc" measurement_request = 
[MonitorMeasurementRequest(timestamp=datetime.now(timezone.utc), metrics=[{"specificity": 0.78, "sensitivity": 0.67, "region": "us-south"}], run_id=custom_monitoring_run_id)] print(measurement_request[0]) published_measurement_response = wos_client.monitor_instances.measurements.add( monitor_instance_id=custom_monitor_instance_id, monitor_measurement_request=measurement_request).result published_measurement_id = published_measurement_response[0]["measurement_id"] print(published_measurement_response) ###Output _____no_output_____ ###Markdown List and get custom metrics ###Code time.sleep(5) published_measurement = wos_client.monitor_instances.measurements.get(monitor_instance_id=custom_monitor_instance_id, measurement_id=published_measurement_id).result print(published_measurement) ###Output _____no_output_____ ###Markdown Historical data ###Code historyDays = 7 ###Output _____no_output_____ ###Markdown Insert historical payloads The next section of the notebook downloads and writes historical data to the payload and measurement tables to simulate a production model that has been monitored and receiving regular traffic for the last seven days. This historical data can be viewed in the Watson OpenScale user interface. The code uses the Python and REST APIs to write this data. ###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_fairness_v2.json !ls -lh history_fairness_v2.json from datetime import datetime, timedelta, timezone with open('history_fairness_v2.json', 'r') as history_file: payloads = json.load(history_file) for day in range(historyDays): print('Loading day', day + 1) daily_measurement_requests = [] for hour in range(24): score_time = datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1))) index = (day * 24 + hour) % len(payloads) # wrap around and reuse values if needed measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [payloads[index][0], payloads[index][1]]) daily_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical debias metrics ###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_debias_v2.json !ls -lh history_debias_v2.json with open('history_debias_v2.json', 'r') as history_file: payloads = json.load(history_file) for day in range(historyDays): print('Loading day', day + 1) daily_measurement_requests = [] for hour in range(24): score_time = datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1))) index = (day * 24 + hour) % len(payloads) # wrap around and reuse values if needed measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [payloads[index][0], payloads[index][1]]) daily_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical quality metrics ###Code measurements = [0.76, 0.78, 0.68, 0.72, 0.73, 0.77, 0.80] for day in range(historyDays): quality_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): 
score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1))) score_time = score_time.isoformat() + "Z" metric = {"area_under_roc": measurements[day]} measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [metric]) quality_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=quality_monitor_instance_id, monitor_measurement_request=quality_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical confusion matrixes ###Code !rm history_quality_metrics.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_quality_metrics.json !ls -lh history_quality_metrics.json from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Source with open('history_quality_metrics.json') as json_file: records = json.load(json_file) for day in range(historyDays): index = 0 cm_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1))) score_time = score_time.isoformat() + "Z" metric = records[index]['metrics'] source = records[index]['sources'] measurement_request = {"timestamp": score_time, "metrics": [metric], "sources": [source]} cm_measurement_requests.append(measurement_request) index+=1 response = wos_client.monitor_instances.measurements.add(monitor_instance_id=quality_monitor_instance_id, monitor_measurement_request=cm_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical performance metrics ###Code target = Target( target_type=TargetTypes.INSTANCE, target_id=payload_data_set_id ) performance_monitor_instance_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.PERFORMANCE.ID, target=target ).result performance_monitor_instance_id = performance_monitor_instance_details.metadata.id for day in range(historyDays): performance_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1))) score_time = score_time.isoformat() + "Z" score_count = random.randint(60, 600) metric = {"record_count": score_count, "data_set_type": "scoring_payload"} measurement_request = {"timestamp": score_time, "metrics": [metric]} performance_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=performance_monitor_instance_id, monitor_measurement_request=performance_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical drift measurements ###Code !rm history_drift_measurement_*.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_0.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_1.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_2.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_3.json 
!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_4.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_5.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_6.json !ls -lh history_drift_measurement_*.json for day in range(historyDays): drift_measurements = [] with open("history_drift_measurement_{}.json".format(day), 'r') as history_file: drift_daily_measurements = json.load(history_file) print('Loading day', day + 1) #Historical data contains 8 records per day - each represents 3 hour drift window. for nb_window, records in enumerate(drift_daily_measurements): for record in records: window_start = datetime.utcnow() + timedelta(hours=(-(24 * day + (nb_window+1)*3 + 1))) # first_payload_record_timestamp_in_window (oldest) window_end = datetime.utcnow() + timedelta(hours=(-(24 * day + nb_window*3 + 1)))# last_payload_record_timestamp_in_window (most recent) #modify start and end time for each record record['sources'][0]['data']['start'] = window_start.isoformat() + "Z" record['sources'][0]['data']['end'] = window_end.isoformat() + "Z" metric = record['metrics'][0] source = record['sources'][0] measurement_request = {"timestamp": window_start.isoformat() + "Z", "metrics": [metric], "sources": [source]} drift_measurements.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=drift_monitor_instance_id, monitor_measurement_request=drift_measurements).result print("Daily loading finished.") ###Output _____no_output_____ ###Markdown Additional data to help debugging ###Code print('Datamart:', data_mart_id) print('Model:', model_uid) print('Deployment:', ai_func_deployment_uid) ###Output _____no_output_____ ###Markdown Identify transactions for Explainability Transaction IDs identified by the cells below can be copied and pasted into the Explainability tab of the OpenScale dashboard. ###Code wos_client.data_sets.show_records(payload_data_set_id, limit=5) ###Output _____no_output_____ ###Markdown Working with Watson Machine Learning This notebook should be run using with **Python 3.7.x** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.7.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following services: * Watson OpenScale * Watson Machine Learning * DB2 The notebook will train, create and deploy a German Credit Risk model, configure OpenScale to monitor that deployment, and inject seven days' worth of historical records and measurements for viewing in the OpenScale Insights dashboard. 
Contents- [Setup](setup)- [Model building and deployment](model)- [OpenScale configuration](openscale)- [Quality monitor and feedback logging](quality)- [Fairness, drift monitoring and explanations](fairness)- [Custom monitors and metrics](custom)- [Historical data](historical) Setup Package installation ###Code import warnings warnings.filterwarnings('ignore') !pip install --upgrade pyspark==2.4 --no-cache | tail -n 1 !pip install --upgrade pandas==1.2.3 --no-cache | tail -n 1 !pip install --upgrade requests==2.23 --no-cache | tail -n 1 !pip install numpy==1.20.1 --no-cache | tail -n 1 !pip install SciPy --no-cache | tail -n 1 !pip install lime --no-cache | tail -n 1 !pip install --upgrade ibm-watson-machine-learning --user | tail -n 1 !pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1 ###Output _____no_output_____ ###Markdown Action: restart the kernel! Configure credentials - WOS_CREDENTIALS (CP4D)- WML_CREDENTIALS (CP4D)- DATABASE_CREDENTIALS (DB2 on CP4D or Cloud Object Storage (COS))- SCHEMA_NAME ###Code WOS_CREDENTIALS = { "url": "<URL>", "username": "<USER>", "password": "<PASSWORD>" } WML_CREDENTIALS = { "url": WOS_CREDENTIALS['url'], "username": WOS_CREDENTIALS['username'], "password": WOS_CREDENTIALS['password'], "instance_id": "wml_local", "version" : "3.5" #If your env is CP4D 4.0 then specify "4.0" instead of "3.5" } #IBM DB2 database connection format example. This is required if you don't have any existing datamarts DATABASE_CREDENTIALS = { "hostname":"***", "username":"***", "password":"***", "database":"***", "port":"***", "ssl":"***", "sslmode":"***", "certificate_base64":"***"} ###Output _____no_output_____ ###Markdown Action: put created schema name below. ###Code #This is required if you don't have any existing datamarts SCHEMA_NAME = '<SCHEMA_NAME>' ###Output _____no_output_____ ###Markdown Run the notebookAt this point, the notebook is ready to run. You can either run the cells one at a time, or click the **Kernel** option above and select **Restart and Run All** to run all the cells. Model building and deployment In this section you will learn how to train Spark MLLib model and next deploy it as web-service using Watson Machine Learning service. 
Load the training data from github ###Code !rm german_credit_data_biased_training.csv !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_data_biased_training.csv from pyspark.sql import SparkSession import pandas as pd import json spark = SparkSession.builder.getOrCreate() pd_data = pd.read_csv("german_credit_data_biased_training.csv", sep=",", header=0) df_data = spark.read.csv(path="german_credit_data_biased_training.csv", sep=",", header=True, inferSchema=True) df_data.head() ###Output _____no_output_____ ###Markdown Explore data ###Code df_data.printSchema() print("Number of records: " + str(df_data.count())) display(df_data) ###Output _____no_output_____ ###Markdown Create a model ###Code spark_df = df_data (train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24) MODEL_NAME = "Spark German Risk Model - AI Function" DEPLOYMENT_NAME = "Spark German Risk Deployment - AI Function" print("Number of records for training: " + str(train_data.count())) print("Number of records for evaluation: " + str(test_data.count())) spark_df.printSchema() ###Output _____no_output_____ ###Markdown The code below creates a Random Forest Classifier with Spark, setting up string indexers for the categorical features and the label column. Finally, this notebook creates a pipeline including the indexers and the model, and does an initial Area Under ROC evaluation of the model. ###Code from pyspark.ml.feature import OneHotEncoder, StringIndexer, IndexToString, VectorAssembler from pyspark.ml.evaluation import BinaryClassificationEvaluator from pyspark.ml import Pipeline, Model from pyspark.ml.feature import SQLTransformer features = [x for x in spark_df.columns if x != 'Risk'] categorical_features = ['CheckingStatus', 'CreditHistory', 'LoanPurpose', 'ExistingSavings', 'EmploymentDuration', 'Sex', 'OthersOnLoan', 'OwnsProperty', 'InstallmentPlans', 'Housing', 'Job', 'Telephone', 'ForeignWorker'] categorical_num_features = [x + '_IX' for x in categorical_features] si_list = [StringIndexer(inputCol=x, outputCol=y) for x, y in zip(categorical_features, categorical_num_features)] va_features = VectorAssembler(inputCols=categorical_num_features + [x for x in features if x not in categorical_features], outputCol="features") si_label = StringIndexer(inputCol="Risk", outputCol="label").fit(spark_df) label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_label.labels) from pyspark.ml.classification import RandomForestClassifier classifier = RandomForestClassifier(featuresCol="features") pipeline = Pipeline(stages= si_list + [si_label, va_features, classifier, label_converter]) model = pipeline.fit(train_data) ###Output _____no_output_____ ###Markdown **Note**: If you want filter features from model output please replace `*` with feature names to be retained in `SQLTransformer` statement. 
###Code predictions = model.transform(test_data) evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderROC') area_under_curve = evaluatorDT.evaluate(predictions) evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderPR') area_under_PR = evaluatorDT.evaluate(predictions) #default evaluation is areaUnderROC print("areaUnderROC = %g" % area_under_curve, "areaUnderPR = %g" % area_under_PR) # extra code: evaluate more metrics by exporting them into pandas and numpy from sklearn.metrics import classification_report y_pred = predictions.toPandas()['prediction'] y_pred = ['Risk' if pred == 1.0 else 'No Risk' for pred in y_pred] y_test = test_data.toPandas()['Risk'] print(classification_report(y_test, y_pred, target_names=['Risk', 'No Risk'])) ###Output _____no_output_____ ###Markdown Save training data to Cloud Object Storage Cloud object storage detailsIn next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet please visit getting started with COS tutorial. You can find COS_API_KEY_ID and COS_RESOURCE_CRN variables in Service Credentials in menu of your COS instance. Used COS Service Credentials must be created with Role parameter set as Writer. Later training data file will be loaded to the bucket of your instance and used as training refecence in subsription.COS_ENDPOINT variable can be found in Endpoint field of the menu. ###Code IAM_URL="https://iam.ng.bluemix.net/oidc/token" COS_API_KEY_ID = "<COS_API_KEY>" COS_RESOURCE_CRN = "<RESOURCE_INSTANCE_ID>" COS_ENDPOINT = "<COS_ENDPOINT>" # Current list avaiable at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints BUCKET_NAME = "<BUCKET_NAME>" #example: "credit-risk-training-data" training_data_file_name="german_credit_data_biased_training.csv" import ibm_boto3 from ibm_botocore.client import Config, ClientError cos_client = ibm_boto3.resource("s3", ibm_api_key_id=COS_API_KEY_ID, ibm_service_instance_id=COS_RESOURCE_CRN, ibm_auth_endpoint="https://iam.bluemix.net/oidc/token", config=Config(signature_version="oauth"), endpoint_url=COS_ENDPOINT ) with open(training_data_file_name, "rb") as file_data: cos_client.Object(BUCKET_NAME, training_data_file_name).upload_fileobj( Fileobj=file_data ) ###Output _____no_output_____ ###Markdown Publish the model In this section, the notebook uses Watson Machine Learning to save the model (including the pipeline) to the WML instance. Previous versions of the model are removed so that the notebook can be run again, resetting all data for another demo. 
###Code import json from ibm_watson_machine_learning import APIClient wml_client = APIClient(WML_CREDENTIALS) wml_client.version space_name = "<SPACE_NAME>" # create the space and set it as default space_meta_data = { wml_client.spaces.ConfigurationMetaNames.NAME : space_name, wml_client.spaces.ConfigurationMetaNames.DESCRIPTION : 'tutorial_space' } spaces = wml_client.spaces.get_details()['resources'] space_id = None for space in spaces: if space['entity']['name'] == space_name: space_id = space["metadata"]["id"] if space_id is None: space_id = wml_client.spaces.store(meta_props=space_meta_data)["metadata"]["id"] print(space_id) wml_client.set.default_space(space_id) ###Output _____no_output_____ ###Markdown Remove existing model and deployment ###Code deployments_list = wml_client.deployments.get_details() for deployment in deployments_list["resources"]: model_id = deployment["entity"]["asset"]["id"] deployment_id = deployment["metadata"]["id"] if deployment["metadata"]["name"] == DEPLOYMENT_NAME: print("Deleting deployment id", deployment_id) wml_client.deployments.delete(deployment_id) print("Deleting model id", model_id) wml_client.repository.delete(model_id) wml_client.repository.list_models() ###Output _____no_output_____ ###Markdown Add training data reference either from DB2 on CP4D or Cloud Object Storage ###Code # COS training data reference example format training_data_references = [ { "id": "Credit Risk", "type": "s3", "connection": { "access_key_id": COS_API_KEY_ID, "endpoint_url": COS_ENDPOINT, "resource_instance_id":COS_RESOURCE_CRN }, "location": { "bucket": BUCKET_NAME, "path": training_data_file_name, } } ] software_spec_uid = wml_client.software_specifications.get_id_by_name("spark-mllib_2.4") print("Software Specification ID: {}".format(software_spec_uid)) model_props = { wml_client._models.ConfigurationMetaNames.NAME:"{}".format(MODEL_NAME), wml_client._models.ConfigurationMetaNames.TYPE: "mllib_2.4", wml_client._models.ConfigurationMetaNames.SOFTWARE_SPEC_UID: software_spec_uid, wml_client._models.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: training_data_references, wml_client._models.ConfigurationMetaNames.LABEL_FIELD: "Risk", } print("Storing model ...") published_model_details = wml_client.repository.store_model( model=model, meta_props=model_props, training_data=train_data, pipeline=pipeline) model_uid = wml_client.repository.get_model_uid(published_model_details) print("Done") print("Model ID: {}".format(model_uid)) wml_client.repository.list_models() ###Output _____no_output_____ ###Markdown Deploy the model The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. The deployed model will have a scoring URL you can use to send data to the model for predictions. 
###Code deployment_details = wml_client.deployments.create( model_uid, meta_props={ wml_client.deployments.ConfigurationMetaNames.NAME: "{}".format(DEPLOYMENT_NAME), wml_client.deployments.ConfigurationMetaNames.ONLINE: {} } ) scoring_url = wml_client.deployments.get_scoring_href(deployment_details) deployment_uid=wml_client.deployments.get_uid(deployment_details) print("Scoring URL:" + scoring_url) print("Model id: {}".format(model_uid)) print("Deployment id: {}".format(deployment_uid)) ###Output _____no_output_____ ###Markdown Define AI Function ###Code ai_params = {"wml_credentials": WML_CREDENTIALS, "deployment_uid": deployment_uid, "space_id": space_id } #AI function definition def score_generator(params=ai_params): import json from ibm_watson_machine_learning import APIClient wml_credentials = params["wml_credentials"] deployment_uid = params["deployment_uid"] space_id = params["space_id"] client = APIClient(wml_credentials) client.set.default_space(space_id) def score(payload): scores_area = client.deployments.score(deployment_uid, payload) return scores_area return score fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"] values = [ ["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"] ] sample_payload = { "input_data": [ {"fields": fields,"values": values}]} score = score_generator() scores_ai = score(sample_payload) wml_client.set.default_space(space_id) print(scores_ai) #Store the function func_name = 'Credit Risk python Fn Model' meta_data = { wml_client.repository.FunctionMetaNames.NAME: func_name, #Note if there is specification related exception then use "default_py3.7_opence" instead of default_py3.8 wml_client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: wml_client.software_specifications.get_id_by_name("default_py3.8") } function_details = wml_client.repository.store_function(meta_props=meta_data, function=score_generator) function_details ai_function_uid = function_details['metadata']['id'] #Generate the deployment function_deployment_details = wml_client.deployments.create(artifact_uid=ai_function_uid, meta_props={wml_client.deployments.ConfigurationMetaNames.NAME: 'dep_' + func_name,wml_client.deployments.ConfigurationMetaNames.ONLINE: {}}) ai_func_deployment_uid = wml_client.deployments.get_uid(function_deployment_details) print("AI Function Deployment UID:" + ai_func_deployment_uid) scoring_url = function_deployment_details["entity"]["status"]["online_url"]["url"] print(scoring_url) ###Output _____no_output_____ ###Markdown Configure OpenScale The notebook will now import the necessary libraries and set up a Python OpenScale client. 
###Code from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator from ibm_watson_openscale import APIClient from ibm_watson_openscale import * from ibm_watson_openscale.supporting_classes.enums import * from ibm_watson_openscale.supporting_classes import * authenticator = CloudPakForDataAuthenticator( url=WOS_CREDENTIALS['url'], username=WOS_CREDENTIALS['username'], password=WOS_CREDENTIALS['password'], disable_ssl_verification=True ) instance_id='<SERVICE_INSTANCE_ID>' #Datamart id wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],authenticator=authenticator,service_instance_id = instance_id) wos_client.version ###Output _____no_output_____ ###Markdown Create datamart Set up datamart Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were not supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there unless an existing datamart is already present. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten. Prior instances of the German Credit model will be removed from OpenScale monitoring. ###Code wos_client.data_marts.show() data_marts = wos_client.data_marts.list().result.data_marts if len(data_marts) == 0: if DATABASE_CREDENTIALS is not None: if SCHEMA_NAME is None: print("Please specify the SCHEMA_NAME and rerun the cell") print('Setting up external datamart') added_data_mart_result = wos_client.data_marts.add( background_mode=False, name="WOS Data Mart", description="Data Mart created by WOS tutorial notebook", database_configuration=DatabaseConfigurationRequest( database_type=DatabaseType.DB2, credentials=PrimaryStorageCredentialsLong( hostname=DATABASE_CREDENTIALS['hostname'], username=DATABASE_CREDENTIALS['username'], password=DATABASE_CREDENTIALS['password'], db=DATABASE_CREDENTIALS['database'], port=DATABASE_CREDENTIALS['port'], ssl=DATABASE_CREDENTIALS['ssl'], sslmode=DATABASE_CREDENTIALS['sslmode'], certificate_base64=DATABASE_CREDENTIALS['certificate_base64'] ), location=LocationSchemaName( schema_name= SCHEMA_NAME ) ) ).result else: print('Setting up internal datamart') added_data_mart_result = wos_client.data_marts.add( background_mode=False, name="WOS Data Mart", description="Data Mart created by WOS tutorial notebook", internal_database = True).result data_mart_id = added_data_mart_result.metadata.id else: data_mart_id=data_marts[0].metadata.id print('Using existing datamart {}'.format(data_mart_id)) ###Output _____no_output_____ ###Markdown Remove the existing service provider connected with the WML instance in use. Multiple service providers for the same engine instance are available in Watson OpenScale. To avoid duplicate service providers for the WML instance used in this tutorial notebook, the following code deletes any existing service provider(s) and then adds a new one. ###Code SERVICE_PROVIDER_NAME = "WML AI function - WOS notebook" SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WML AI function - WOS notebook."
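# The loop below removes any service provider already registered under SERVICE_PROVIDER_NAME,
# so the notebook can be re-run without accumulating duplicate bindings; a fresh provider for
# this WML deployment space is added in the next cell.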
service_providers = wos_client.service_providers.list().result.service_providers for service_provider in service_providers: service_instance_name = service_provider.entity.name if service_instance_name == SERVICE_PROVIDER_NAME: service_provider_id = service_provider.metadata.id wos_client.service_providers.delete(service_provider_id) print("Deleted existing service_provider for WML instance: {}".format(service_provider_id)) ###Output _____no_output_____ ###Markdown Add service providerWatson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.**Note:** You can bind more than one engine instance if needed by calling `wos_client.service_providers.add` method. Next, you can refer to particular service provider using `service_provider_id`. ###Code added_service_provider_result = wos_client.service_providers.add( name=SERVICE_PROVIDER_NAME, description=SERVICE_PROVIDER_DESCRIPTION, service_type=ServiceTypes.WATSON_MACHINE_LEARNING, deployment_space_id = space_id, operational_space_id = "production", credentials=WMLCredentialsCP4D( url=None, username=None, password=None, instance_id=None ), background_mode=False ).result service_provider_id = added_service_provider_result.metadata.id wos_client.service_providers.show() print(deployment_uid) asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id,deployment_id=ai_func_deployment_uid, deployment_space_id = space_id).result['resources'][0] asset_deployment_details model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=ai_func_deployment_uid,deployment_space_id=space_id) model_asset_details_from_deployment ###Output _____no_output_____ ###Markdown Subscriptions Remove existing credit risk subscriptions This code removes previous subscriptions to the German Credit model to refresh the monitors with the new model and new data. ###Code wos_client.subscriptions.show() subscriptions = wos_client.subscriptions.list().result.subscriptions for subscription in subscriptions: sub_model_id = subscription.entity.asset.asset_id if sub_model_id == ai_function_uid: wos_client.subscriptions.delete(subscription.metadata.id) print('Deleted existing subscription for model', sub_model_id) ###Output _____no_output_____ ###Markdown This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself. 
###Code subscription_details = wos_client.subscriptions.add( data_mart_id=data_mart_id, service_provider_id=service_provider_id, asset=Asset( asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"], name=model_asset_details_from_deployment["entity"]["asset"]["name"], url=model_asset_details_from_deployment["entity"]["asset"]["url"], asset_type=AssetTypes.MODEL, input_data_type=InputDataType.STRUCTURED, problem_type=ProblemType.BINARY_CLASSIFICATION ), deployment=AssetDeploymentRequest( deployment_id=asset_deployment_details['metadata']['guid'], name=asset_deployment_details['entity']['name'], deployment_type= DeploymentTypes.ONLINE, url=asset_deployment_details['entity']['scoring_endpoint']['url'] ), asset_properties=AssetPropertiesRequest( label_column='Risk', probability_fields=['probability'], prediction_field='predictedLabel', feature_fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"], categorical_fields = ["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"], training_data_reference=TrainingDataReference(type='cos', location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME, file_name = training_data_file_name), connection=COSTrainingDataReferenceConnection.from_dict({ "resource_instance_id": COS_RESOURCE_CRN, "url": COS_ENDPOINT, "api_key": COS_API_KEY_ID, "iam_url": IAM_URL})), training_data_schema=None ), background_mode=False ).result subscription_id = subscription_details.metadata.id subscription_id import time time.sleep(5) payload_data_set_id = None payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING, target_target_id=subscription_id, target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id if payload_data_set_id is None: print("Payload data set not found. Please check subscription status.") else: print("Payload data set id: ", payload_data_set_id) wos_client.data_sets.show() ###Output _____no_output_____ ###Markdown Score the model so we can configure monitors Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model. 
###Code fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"] values = [ ["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"], ["no_checking",24,"prior_payments_delayed","furniture",4567,"500_to_1000","1_to_4",4,"male","none",4,"savings_insurance",36,"none","free",2,"management_self-employed",1,"none","yes"], ["0_to_200",26,"all_credits_paid_back","car_new",863,"less_100","less_1",2,"female","co-applicant",2,"real_estate",38,"none","own",1,"skilled",1,"none","yes"], ["0_to_200",14,"no_credits","car_new",2368,"less_100","1_to_4",3,"female","none",3,"real_estate",29,"none","own",1,"skilled",1,"none","yes"], ["0_to_200",4,"no_credits","car_new",250,"less_100","unemployed",2,"female","none",3,"real_estate",23,"none","rent",1,"management_self-employed",1,"none","yes"], ["no_checking",17,"credits_paid_to_date","car_new",832,"100_to_500","1_to_4",2,"male","none",2,"real_estate",42,"none","own",1,"skilled",1,"none","yes"], ["no_checking",33,"outstanding_credit","appliances",5696,"unknown","greater_7",4,"male","co-applicant",4,"unknown",54,"none","free",2,"skilled",1,"yes","yes"], ["0_to_200",13,"prior_payments_delayed","retraining",1375,"100_to_500","4_to_7",3,"male","none",3,"real_estate",37,"none","own",2,"management_self-employed",1,"none","yes"] ] payload_scoring = { "input_data": [ {"fields": fields,"values": values}]} scoring_response = wml_client.deployments.score(ai_func_deployment_uid, payload_scoring) print('Single record scoring result:', '\n fields:', scoring_response['predictions'][0]['fields'], '\n values: ', scoring_response['predictions'][0]['values'][0]) ###Output _____no_output_____ ###Markdown Check if WML payload logging worked else manually store payload records ###Code import uuid from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) if pl_records_count == 0: print("Payload logging did not happen, performing explicit payload logging.") wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord( scoring_id=str(uuid.uuid4()), request=payload_scoring, response={"fields": scoring_response['predictions'][0]['fields'], "values":scoring_response['predictions'][0]['values']}, response_time=460 )]) time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) ###Output _____no_output_____ ###Markdown Quality monitoring and feedback logging Enable quality monitoring The code below waits ten seconds to allow the payload logging table to be set up before it begins enabling monitors. First, it turns on the quality (accuracy) monitor and sets an alert threshold of 70%. 
OpenScale will show an alert on the dashboard if the model accuracy measurement (area under the curve, in the case of a binary classifier) falls below this threshold. The second parameter supplied, min_feedback_data_size, specifies the minimum number of feedback records OpenScale needs before it calculates a new measurement. The quality monitor runs hourly, but the accuracy reading in the dashboard will not change until an additional 50 feedback records have been added, via the user interface, the Python client, or the supplied feedback endpoint. ###Code import time time.sleep(10) target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "min_feedback_data_size": 50 } thresholds = [ { "metric_id": "area_under_roc", "type": "lower_limit", "value": .80 } ] quality_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID, target=target, parameters=parameters, thresholds=thresholds ).result quality_monitor_instance_id = quality_monitor_details.metadata.id quality_monitor_instance_id ###Output _____no_output_____ ###Markdown Feedback logging The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface. ###Code !rm additional_feedback_data_v2.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/additional_feedback_data_v2.json ###Output _____no_output_____ ###Markdown Get feedback logging dataset ID ###Code feedback_dataset_id = None feedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK, target_target_id=subscription_id, target_target_type=TargetTypes.SUBSCRIPTION).result print(feedback_dataset) feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id if feedback_dataset_id is None: print("Feedback data set not found. Please check quality monitor status.") with open('additional_feedback_data_v2.json') as feedback_file: additional_feedback_data = json.load(feedback_file) wos_client.data_sets.store_records(feedback_dataset_id, request_body=additional_feedback_data, background_mode=False) wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id) run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id) ###Output _____no_output_____ ###Markdown Fairness, drift monitoring and explanations The code below configures fairness monitoring for our model. It turns on monitoring for two features, Sex and Age. In each case, we must specify: * Which model feature to monitor * One or more **majority** groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes * One or more **minority** groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes * The threshold at which we would like OpenScale to display an alert if the fairness measurement falls below (in this case, 95%). Additionally, we must specify which outcomes from the model are favourable outcomes, and which are unfavourable.
We must also provide the number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 100 records have been added (the min_records value configured below). Finally, to calculate fairness, OpenScale must perform some calculations on the training data; here that data is available to OpenScale through the training data reference supplied with the subscription. ###Code wos_client.monitor_instances.show() target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "features": [ {"feature": "Sex", "majority": ['male'], "minority": ['female'], "threshold": 0.95 }, {"feature": "Age", "majority": [[26, 75]], "minority": [[18, 25]], "threshold": 0.95 } ], "favourable_class": ["No Risk"], "unfavourable_class": ["Risk"], "min_records": 100 } fairness_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID, target=target, parameters=parameters).result fairness_monitor_instance_id = fairness_monitor_details.metadata.id fairness_monitor_instance_id ###Output _____no_output_____ ###Markdown Drift configuration ###Code monitor_instances = wos_client.monitor_instances.list().result.monitor_instances for monitor_instance in monitor_instances: monitor_def_id=monitor_instance.entity.monitor_definition_id if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id: wos_client.monitor_instances.delete(monitor_instance.metadata.id) print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id) target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "min_samples": 100, "drift_threshold": 0.1, "train_drift_model": True, "enable_model_drift": False, "enable_data_drift": True } drift_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID, target=target, parameters=parameters ).result drift_monitor_instance_id = drift_monitor_details.metadata.id drift_monitor_instance_id ###Output _____no_output_____ ###Markdown Score the model again now that monitoring is configured This next section randomly selects 200 records from the data feed and sends those records to the model for predictions. This is enough to exceed the minimum threshold for records set in the previous section, which allows OpenScale to begin calculating fairness.
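For intuition about what the fairness monitor reports, the optional cell below sketches a disparate-impact style ratio by hand: the rate of favourable ("No Risk") outcomes for the monitored (minority) group divided by the rate for the reference (majority) group. It is an illustration only, computed on the training labels in the pd_data DataFrame loaded at the start of the notebook (assumed to still be in memory); OpenScale itself computes fairness on scored payload records and applies additional perturbation-based analysis, so its number will differ.
###Code
# Illustration only (sketch): hand-rolled disparate-impact ratio on the training data,
# using the monitored feature Sex and the favourable label "No Risk" configured above.
favourable = "No Risk"
rate_female = (pd_data[pd_data["Sex"] == "female"]["Risk"] == favourable).mean()
rate_male = (pd_data[pd_data["Sex"] == "male"]["Risk"] == favourable).mean()
print("Favourable rate (female): {:.3f}".format(rate_female))
print("Favourable rate (male):   {:.3f}".format(rate_male))
print("Disparate impact ratio (minority/majority): {:.3f}".format(rate_female / rate_male))
###Output _____no_output_____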
###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_feed.json !ls -lh german_credit_feed.json ###Output _____no_output_____ ###Markdown Score 200 randomly chosen records ###Code import random with open('german_credit_feed.json', 'r') as scoring_file: scoring_data = json.load(scoring_file) fields = scoring_data['fields'] values = [] for _ in range(200): values.append(random.choice(scoring_data['values'])) payload_scoring = {"input_data": [{"fields": fields, "values": values}]} scoring_response = wml_client.deployments.score(ai_func_deployment_uid, payload_scoring) time.sleep(5) if pl_records_count == 8: print("Payload logging did not happen, performing explicit payload logging.") wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord( scoring_id=str(uuid.uuid4()), request=payload_scoring, response=scoring_response, response_time=460 )]) time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) print('Number of records in payload table: ', wos_client.data_sets.get_records_count(data_set_id=payload_data_set_id)) ###Output _____no_output_____ ###Markdown Run fairness monitor Kick off a fairness monitor run on current data. The monitor runs hourly, but can be manually initiated using the Python client, the REST API, or the graphical user interface. ###Code time.sleep(5) run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False) time.sleep(10) wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id) ###Output _____no_output_____ ###Markdown Run drift monitor Kick off a drift monitor run on current data. The monitor runs every hour, but can be manually initiated using the Python client, the REST API. ###Code drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False) time.sleep(5) wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id) ###Output _____no_output_____ ###Markdown Configure Explainability Finally, we provide OpenScale with the training data to enable and configure the explainability features. 
###Code target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "enabled": True } explainability_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID, target=target, parameters=parameters ).result explainability_monitor_id = explainability_details.metadata.id ###Output _____no_output_____ ###Markdown Run explanation for sample record ###Code pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]] print("Running explanations on scoring IDs: {}".format(scoring_ids)) explanation_types = ["lime", "contrastive"] result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result print(result) ###Output _____no_output_____ ###Markdown Custom monitors and metrics Register custom monitor ###Code def get_definition(monitor_name): monitor_definitions = wos_client.monitor_definitions.list().result.monitor_definitions for definition in monitor_definitions: if monitor_name == definition.entity.name: return definition return None monitor_name = 'my model performance' metrics = [MonitorMetricRequest(name='sensitivity', thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.8)]), MonitorMetricRequest(name='specificity', thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.75)])] tags = [MonitorTagRequest(name='region', description='customer geographical region')] existing_definition = get_definition(monitor_name) if existing_definition is None: custom_monitor_details = wos_client.monitor_definitions.add(name=monitor_name, metrics=metrics, tags=tags, background_mode=False).result else: custom_monitor_details = existing_definition ###Output _____no_output_____ ###Markdown Show available monitors types ###Code wos_client.monitor_definitions.show() ###Output _____no_output_____ ###Markdown Get monitors uids and details ###Code custom_monitor_id = custom_monitor_details.metadata.id print(custom_monitor_id) custom_monitor_details = wos_client.monitor_definitions.get(monitor_definition_id=custom_monitor_id).result print('Monitor definition details:', custom_monitor_details) ###Output _____no_output_____ ###Markdown Enable custom monitor for subscription ###Code target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) thresholds = [MetricThresholdOverride(metric_id='sensitivity', type = MetricThresholdTypes.LOWER_LIMIT, value=0.9)] custom_monitor_instance_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=custom_monitor_id, target=target ).result ###Output _____no_output_____ ###Markdown Get monitor instance id and configuration details ###Code custom_monitor_instance_id = custom_monitor_instance_details.metadata.id custom_monitor_instance_details = wos_client.monitor_instances.get(custom_monitor_instance_id).result print(custom_monitor_instance_details) ###Output _____no_output_____ ###Markdown Storing custom metrics ###Code from datetime import datetime, timezone, timedelta from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MonitorMeasurementRequest custom_monitoring_run_id = "11122223333111abc" measurement_request = [MonitorMeasurementRequest(timestamp=datetime.now(timezone.utc), 
metrics=[{"specificity": 0.78, "sensitivity": 0.67, "region": "us-south"}], run_id=custom_monitoring_run_id)] print(measurement_request[0]) published_measurement_response = wos_client.monitor_instances.measurements.add( monitor_instance_id=custom_monitor_instance_id, monitor_measurement_request=measurement_request).result published_measurement_id = published_measurement_response[0]["measurement_id"] print(published_measurement_response) ###Output _____no_output_____ ###Markdown List and get custom metrics ###Code time.sleep(5) published_measurement = wos_client.monitor_instances.measurements.get(monitor_instance_id=custom_monitor_instance_id, measurement_id=published_measurement_id).result print(published_measurement) ###Output _____no_output_____ ###Markdown Historical data ###Code historyDays = 7 ###Output _____no_output_____ ###Markdown Insert historical payloads The next section of the notebook downloads and writes historical data to the payload and measurement tables to simulate a production model that has been monitored and receiving regular traffic for the last seven days. This historical data can be viewed in the Watson OpenScale user interface. The code uses the Python and REST APIs to write this data. ###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_fairness_v2.json !ls -lh history_fairness_v2.json from datetime import datetime, timedelta, timezone with open('history_fairness_v2.json', 'r') as history_file: payloads = json.load(history_file) for day in range(historyDays): print('Loading day', day + 1) daily_measurement_requests = [] for hour in range(24): score_time = datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1))) index = (day * 24 + hour) % len(payloads) # wrap around and reuse values if needed measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [payloads[index][0], payloads[index][1]]) daily_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical debias metrics ###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_debias_v2.json !ls -lh history_debias_v2.json with open('history_debias_v2.json', 'r') as history_file: payloads = json.load(history_file) for day in range(historyDays): print('Loading day', day + 1) daily_measurement_requests = [] for hour in range(24): score_time = datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1))) index = (day * 24 + hour) % len(payloads) # wrap around and reuse values if needed measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [payloads[index][0], payloads[index][1]]) daily_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical quality metrics ###Code measurements = [0.76, 0.78, 0.68, 0.72, 0.73, 0.77, 0.80] for day in range(historyDays): quality_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 
1))) score_time = score_time.isoformat() + "Z" metric = {"area_under_roc": measurements[day]} measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [metric]) quality_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=quality_monitor_instance_id, monitor_measurement_request=quality_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical confusion matrixes ###Code !rm history_quality_metrics.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_quality_metrics.json !ls -lh history_quality_metrics.json from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Source with open('history_quality_metrics.json') as json_file: records = json.load(json_file) for day in range(historyDays): index = 0 cm_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1))) score_time = score_time.isoformat() + "Z" metric = records[index]['metrics'] source = records[index]['sources'] measurement_request = {"timestamp": score_time, "metrics": [metric], "sources": [source]} cm_measurement_requests.append(measurement_request) index+=1 response = wos_client.monitor_instances.measurements.add(monitor_instance_id=quality_monitor_instance_id, monitor_measurement_request=cm_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical performance metrics ###Code target = Target( target_type=TargetTypes.INSTANCE, target_id=payload_data_set_id ) performance_monitor_instance_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.PERFORMANCE.ID, target=target ).result performance_monitor_instance_id = performance_monitor_instance_details.metadata.id for day in range(historyDays): performance_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1))) score_time = score_time.isoformat() + "Z" score_count = random.randint(60, 600) metric = {"record_count": score_count, "data_set_type": "scoring_payload"} measurement_request = {"timestamp": score_time, "metrics": [metric]} performance_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=performance_monitor_instance_id, monitor_measurement_request=performance_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical drift measurements ###Code !rm history_drift_measurement_*.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_0.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_1.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_2.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_3.json !wget 
https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_4.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_5.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_6.json !ls -lh history_drift_measurement_*.json for day in range(historyDays): drift_measurements = [] with open("history_drift_measurement_{}.json".format(day), 'r') as history_file: drift_daily_measurements = json.load(history_file) print('Loading day', day + 1) #Historical data contains 8 records per day - each represents 3 hour drift window. for nb_window, records in enumerate(drift_daily_measurements): for record in records: window_start = datetime.utcnow() + timedelta(hours=(-(24 * day + (nb_window+1)*3 + 1))) # first_payload_record_timestamp_in_window (oldest) window_end = datetime.utcnow() + timedelta(hours=(-(24 * day + nb_window*3 + 1)))# last_payload_record_timestamp_in_window (most recent) #modify start and end time for each record record['sources'][0]['data']['start'] = window_start.isoformat() + "Z" record['sources'][0]['data']['end'] = window_end.isoformat() + "Z" metric = record['metrics'][0] source = record['sources'][0] measurement_request = {"timestamp": window_start.isoformat() + "Z", "metrics": [metric], "sources": [source]} drift_measurements.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=drift_monitor_instance_id, monitor_measurement_request=drift_measurements).result print("Daily loading finished.") ###Output _____no_output_____ ###Markdown Additional data to help debugging ###Code print('Datamart:', data_mart_id) print('Model:', model_uid) print('Deployment:', ai_func_deployment_uid) ###Output _____no_output_____ ###Markdown Identify transactions for Explainability Transaction IDs identified by the cells below can be copied and pasted into the Explainability tab of the OpenScale dashboard. ###Code wos_client.data_sets.show_records(payload_data_set_id, limit=5) ###Output _____no_output_____ ###Markdown Working with Watson Machine Learning This notebook should be run using with **Python 3.7.x** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.7.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following services: * Watson OpenScale * Watson Machine Learning * DB2 The notebook will train, create and deploy a German Credit Risk model, configure OpenScale to monitor that deployment, and inject seven days' worth of historical records and measurements for viewing in the OpenScale Insights dashboard. 
Contents- [Setup](setup)- [Model building and deployment](model)- [OpenScale configuration](openscale)- [Quality monitor and feedback logging](quality)- [Fairness, drift monitoring and explanations](fairness)- [Custom monitors and metrics](custom)- [Historical data](historical) Setup Package installation ###Code import warnings warnings.filterwarnings('ignore') !pip install --upgrade pyspark==2.4 --no-cache | tail -n 1 !pip install --upgrade pandas==1.2.3 --no-cache | tail -n 1 !pip install --upgrade requests==2.23 --no-cache | tail -n 1 !pip install numpy==1.20.1 --no-cache | tail -n 1 !pip install SciPy --no-cache | tail -n 1 !pip install lime --no-cache | tail -n 1 !pip install --upgrade ibm-watson-machine-learning --user | tail -n 1 !pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1 ###Output _____no_output_____ ###Markdown Action: restart the kernel! Configure credentials - WOS_CREDENTIALS (CP4D)- WML_CREDENTIALS (CP4D)- DATABASE_CREDENTIALS (DB2 on CP4D or Cloud Object Storage (COS))- SCHEMA_NAME ###Code WOS_CREDENTIALS = { "url": "<URL>", "username": "<USER>", "password": "<PASSWORD>" } WML_CREDENTIALS = { "url": WOS_CREDENTIALS['url'], "username": WOS_CREDENTIALS['username'], "password": WOS_CREDENTIALS['password'], "instance_id": "wml_local", "version" : "3.5" #If your env is CP4D 4.0 then specify "4.0" instead of "3.5" } #IBM DB2 database connection format example. This is required if you don't have any existing datamarts DATABASE_CREDENTIALS = { "hostname":"***", "username":"***", "password":"***", "database":"***", "port":"***", "ssl":"***", "sslmode":"***", "certificate_base64":"***"} ###Output _____no_output_____ ###Markdown Action: put created schema name below. ###Code #This is required if you don't have any existing datamarts SCHEMA_NAME = '<SCHEMA_NAME>' ###Output _____no_output_____ ###Markdown Run the notebookAt this point, the notebook is ready to run. You can either run the cells one at a time, or click the **Kernel** option above and select **Restart and Run All** to run all the cells. Model building and deployment In this section you will learn how to train Spark MLLib model and next deploy it as web-service using Watson Machine Learning service. 
Load the training data from github ###Code !rm german_credit_data_biased_training.csv !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_data_biased_training.csv from pyspark.sql import SparkSession import pandas as pd import json spark = SparkSession.builder.getOrCreate() pd_data = pd.read_csv("german_credit_data_biased_training.csv", sep=",", header=0) df_data = spark.read.csv(path="german_credit_data_biased_training.csv", sep=",", header=True, inferSchema=True) df_data.head() ###Output _____no_output_____ ###Markdown Explore data ###Code df_data.printSchema() print("Number of records: " + str(df_data.count())) display(df_data) ###Output _____no_output_____ ###Markdown Create a model ###Code spark_df = df_data (train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24) MODEL_NAME = "Spark German Risk Model - AI Function" DEPLOYMENT_NAME = "Spark German Risk Deployment - AI Function" print("Number of records for training: " + str(train_data.count())) print("Number of records for evaluation: " + str(test_data.count())) spark_df.printSchema() ###Output _____no_output_____ ###Markdown The code below creates a Random Forest Classifier with Spark, setting up string indexers for the categorical features and the label column. Finally, this notebook creates a pipeline including the indexers and the model, and does an initial Area Under ROC evaluation of the model. ###Code from pyspark.ml.feature import OneHotEncoder, StringIndexer, IndexToString, VectorAssembler from pyspark.ml.evaluation import BinaryClassificationEvaluator from pyspark.ml import Pipeline, Model from pyspark.ml.feature import SQLTransformer features = [x for x in spark_df.columns if x != 'Risk'] categorical_features = ['CheckingStatus', 'CreditHistory', 'LoanPurpose', 'ExistingSavings', 'EmploymentDuration', 'Sex', 'OthersOnLoan', 'OwnsProperty', 'InstallmentPlans', 'Housing', 'Job', 'Telephone', 'ForeignWorker'] categorical_num_features = [x + '_IX' for x in categorical_features] si_list = [StringIndexer(inputCol=x, outputCol=y) for x, y in zip(categorical_features, categorical_num_features)] va_features = VectorAssembler(inputCols=categorical_num_features + [x for x in features if x not in categorical_features], outputCol="features") si_label = StringIndexer(inputCol="Risk", outputCol="label").fit(spark_df) label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_label.labels) from pyspark.ml.classification import RandomForestClassifier classifier = RandomForestClassifier(featuresCol="features") pipeline = Pipeline(stages= si_list + [si_label, va_features, classifier, label_converter]) model = pipeline.fit(train_data) ###Output _____no_output_____ ###Markdown **Note**: If you want filter features from model output please replace `*` with feature names to be retained in `SQLTransformer` statement. 
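For example, a SQLTransformer stage that keeps only selected output columns could look like the sketch below. It is not part of the pipeline defined above (which imports SQLTransformer but does not use it); the column names shown are just an illustration of what you might retain.
###Code
# Sketch: an optional output-filtering stage. __THIS__ refers to the DataFrame produced by
# the preceding pipeline stage; list only the columns you want to keep in the model output.
output_filter = SQLTransformer(
    statement="SELECT Risk, prediction, predictedLabel, probability FROM __THIS__")
# e.g. pipeline = Pipeline(stages=si_list + [si_label, va_features, classifier,
#                                            label_converter, output_filter])
###Output _____no_output_____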
###Code predictions = model.transform(test_data) evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderROC') area_under_curve = evaluatorDT.evaluate(predictions) evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderPR') area_under_PR = evaluatorDT.evaluate(predictions) #default evaluation is areaUnderROC print("areaUnderROC = %g" % area_under_curve, "areaUnderPR = %g" % area_under_PR) # extra code: evaluate more metrics by exporting them into pandas and numpy from sklearn.metrics import classification_report y_pred = predictions.toPandas()['prediction'] y_pred = ['Risk' if pred == 1.0 else 'No Risk' for pred in y_pred] y_test = test_data.toPandas()['Risk'] print(classification_report(y_test, y_pred, target_names=['Risk', 'No Risk'])) ###Output _____no_output_____ ###Markdown Save training data to Cloud Object Storage Cloud object storage detailsIn next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet please visit getting started with COS tutorial. You can find COS_API_KEY_ID and COS_RESOURCE_CRN variables in Service Credentials in menu of your COS instance. Used COS Service Credentials must be created with Role parameter set as Writer. Later training data file will be loaded to the bucket of your instance and used as training refecence in subsription.COS_ENDPOINT variable can be found in Endpoint field of the menu. ###Code IAM_URL="https://iam.ng.bluemix.net/oidc/token" COS_API_KEY_ID = "<COS_API_KEY>" COS_RESOURCE_CRN = "<RESOURCE_INSTANCE_ID>" COS_ENDPOINT = "<COS_ENDPOINT>" # Current list avaiable at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints BUCKET_NAME = "<BUCKET_NAME>" #example: "credit-risk-training-data" training_data_file_name="german_credit_data_biased_training.csv" import ibm_boto3 from ibm_botocore.client import Config, ClientError cos_client = ibm_boto3.resource("s3", ibm_api_key_id=COS_API_KEY_ID, ibm_service_instance_id=COS_RESOURCE_CRN, ibm_auth_endpoint="https://iam.bluemix.net/oidc/token", config=Config(signature_version="oauth"), endpoint_url=COS_ENDPOINT ) with open(training_data_file_name, "rb") as file_data: cos_client.Object(BUCKET_NAME, training_data_file_name).upload_fileobj( Fileobj=file_data ) ###Output _____no_output_____ ###Markdown Publish the model In this section, the notebook uses Watson Machine Learning to save the model (including the pipeline) to the WML instance. Previous versions of the model are removed so that the notebook can be run again, resetting all data for another demo. 
###Code import json from ibm_watson_machine_learning import APIClient wml_client = APIClient(WML_CREDENTIALS) wml_client.version space_name = "<SPACE_NAME>" # create the space and set it as default space_meta_data = { wml_client.spaces.ConfigurationMetaNames.NAME : space_name, wml_client.spaces.ConfigurationMetaNames.DESCRIPTION : 'tutorial_space' } spaces = wml_client.spaces.get_details()['resources'] space_id = None for space in spaces: if space['entity']['name'] == space_name: space_id = space["metadata"]["id"] if space_id is None: space_id = wml_client.spaces.store(meta_props=space_meta_data)["metadata"]["id"] print(space_id) wml_client.set.default_space(space_id) ###Output _____no_output_____ ###Markdown Remove existing model and deployment ###Code deployments_list = wml_client.deployments.get_details() for deployment in deployments_list["resources"]: model_id = deployment["entity"]["asset"]["id"] deployment_id = deployment["metadata"]["id"] if deployment["metadata"]["name"] == DEPLOYMENT_NAME: print("Deleting deployment id", deployment_id) wml_client.deployments.delete(deployment_id) print("Deleting model id", model_id) wml_client.repository.delete(model_id) wml_client.repository.list_models() ###Output _____no_output_____ ###Markdown Add training data reference either from DB2 on CP4D or Cloud Object Storage ###Code # COS training data reference example format training_data_references = [ { "id": "Credit Risk", "type": "s3", "connection": { "access_key_id": COS_API_KEY_ID, "endpoint_url": COS_ENDPOINT, "resource_instance_id":COS_RESOURCE_CRN }, "location": { "bucket": BUCKET_NAME, "path": training_data_file_name, } } ] software_spec_uid = wml_client.software_specifications.get_id_by_name("spark-mllib_2.4") print("Software Specification ID: {}".format(software_spec_uid)) model_props = { wml_client._models.ConfigurationMetaNames.NAME:"{}".format(MODEL_NAME), wml_client._models.ConfigurationMetaNames.TYPE: "mllib_2.4", wml_client._models.ConfigurationMetaNames.SOFTWARE_SPEC_UID: software_spec_uid, #wml_client._models.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: training_data_references, wml_client._models.ConfigurationMetaNames.LABEL_FIELD: "Risk", } print("Storing model ...") published_model_details = wml_client.repository.store_model( model=model, meta_props=model_props, training_data=train_data, pipeline=pipeline) model_uid = wml_client.repository.get_model_uid(published_model_details) print("Done") print("Model ID: {}".format(model_uid)) wml_client.repository.list_models() ###Output _____no_output_____ ###Markdown Deploy the model The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. The deployed model will have a scoring URL you can use to send data to the model for predictions. 
###Code deployment_details = wml_client.deployments.create( model_uid, meta_props={ wml_client.deployments.ConfigurationMetaNames.NAME: "{}".format(DEPLOYMENT_NAME), wml_client.deployments.ConfigurationMetaNames.ONLINE: {} } ) scoring_url = wml_client.deployments.get_scoring_href(deployment_details) deployment_uid=wml_client.deployments.get_uid(deployment_details) print("Scoring URL:" + scoring_url) print("Model id: {}".format(model_uid)) print("Deployment id: {}".format(deployment_uid)) ###Output _____no_output_____ ###Markdown Define AI Function ###Code ai_params = {"wml_credentials": WML_CREDENTIALS, "deployment_uid": deployment_uid, "space_id": space_id } #AI function definition def score_generator(params=ai_params): import json from ibm_watson_machine_learning import APIClient wml_credentials = params["wml_credentials"] deployment_uid = params["deployment_uid"] space_id = params["space_id"] client = APIClient(wml_credentials) client.set.default_space(space_id) def score(payload): scores_area = client.deployments.score(deployment_uid, payload) return scores_area return score fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"] values = [ ["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"] ] sample_payload = { "input_data": [ {"fields": fields,"values": values}]} score = score_generator() scores_ai = score(sample_payload) wml_client.set.default_space(space_id) print(scores_ai) #Store the function func_name = 'Credit Risk python Fn Model' meta_data = { wml_client.repository.FunctionMetaNames.NAME: func_name, #Note if there is specification related exception then use "default_py3.7_opence" instead of default_py3.8 wml_client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: wml_client.software_specifications.get_id_by_name("default_py3.8") } function_details = wml_client.repository.store_function(meta_props=meta_data, function=score_generator) function_details ai_function_uid = function_details['metadata']['id'] #Generate the deployment function_deployment_details = wml_client.deployments.create(artifact_uid=ai_function_uid, meta_props={wml_client.deployments.ConfigurationMetaNames.NAME: 'dep_' + func_name,wml_client.deployments.ConfigurationMetaNames.ONLINE: {}}) ai_func_deployment_uid = wml_client.deployments.get_uid(function_deployment_details) print("AI Function Deployment UID:" + ai_func_deployment_uid) scoring_url = function_deployment_details["entity"]["status"]["online_url"]["url"] print(scoring_url) ###Output _____no_output_____ ###Markdown Configure OpenScale The notebook will now import the necessary libraries and set up a Python OpenScale client. 
###Code from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator from ibm_watson_openscale import APIClient from ibm_watson_openscale import * from ibm_watson_openscale.supporting_classes.enums import * from ibm_watson_openscale.supporting_classes import * authenticator = CloudPakForDataAuthenticator( url=WOS_CREDENTIALS['url'], username=WOS_CREDENTIALS['username'], password=WOS_CREDENTIALS['password'], disable_ssl_verification=True ) instance_id='<SERVICE_INSTANCE_ID' #Datamart id wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],authenticator=authenticator,service_instance_id = instance_id) wos_client.version ###Output _____no_output_____ ###Markdown Create datamart Set up datamart Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were not supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there unless there is an existing datamart and the KEEP_MY_INTERNAL_POSTGRES variable is set to True. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten.Prior instances of the German Credit model will be removed from OpenScale monitoring. ###Code wos_client.data_marts.show() data_marts = wos_client.data_marts.list().result.data_marts if len(data_marts) == 0: if DB_CREDENTIALS is not None: if SCHEMA_NAME is None: print("Please specify the SCHEMA_NAME and rerun the cell") print('Setting up external datamart') added_data_mart_result = wos_client.data_marts.add( background_mode=False, name="WOS Data Mart", description="Data Mart created by WOS tutorial notebook", database_configuration=DatabaseConfigurationRequest( database_type=DatabaseType.DB2, credentials=PrimaryStorageCredentialsLong( hostname=DATABASE_CREDENTIALS['hostname'], username=DATABASE_CREDENTIALS['username'], password=DATABASE_CREDENTIALS['password'], db=DATABASE_CREDENTIALS['database'], port=DATABASE_CREDENTIALS['port'], ssl=DATABASE_CREDENTIALS['ssl'], sslmode=DATABASE_CREDENTIALS['sslmode'], certificate_base64=DATABASE_CREDENTIALS['certificate_base64'] ), location=LocationSchemaName( schema_name= SCHEMA_NAME ) ) ).result else: print('Setting up internal datamart') added_data_mart_result = wos_client.data_marts.add( background_mode=False, name="WOS Data Mart", description="Data Mart created by WOS tutorial notebook", internal_database = True).result data_mart_id = added_data_mart_result.metadata.id else: data_mart_id=data_marts[0].metadata.id print('Using existing datamart {}'.format(data_mart_id)) ###Output _____no_output_____ ###Markdown Remove existing service provider connected with used WML instance.Multiple service providers for the same engine instance are avaiable in Watson OpenScale. To avoid multiple service providers of used WML instance in the tutorial notebook the following code deletes existing service provder(s) and then adds new one. ###Code SERVICE_PROVIDER_NAME = "WML AI function - WOS notebook" SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WML AI function - WOS notebook." 
service_providers = wos_client.service_providers.list().result.service_providers for service_provider in service_providers: service_instance_name = service_provider.entity.name if service_instance_name == SERVICE_PROVIDER_NAME: service_provider_id = service_provider.metadata.id wos_client.service_providers.delete(service_provider_id) print("Deleted existing service_provider for WML instance: {}".format(service_provider_id)) ###Output _____no_output_____ ###Markdown Add service providerWatson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.**Note:** You can bind more than one engine instance if needed by calling `wos_client.service_providers.add` method. Next, you can refer to particular service provider using `service_provider_id`. ###Code added_service_provider_result = wos_client.service_providers.add( name=SERVICE_PROVIDER_NAME, description=SERVICE_PROVIDER_DESCRIPTION, service_type=ServiceTypes.WATSON_MACHINE_LEARNING, deployment_space_id = space_id, operational_space_id = "production", credentials=WMLCredentialsCP4D( url=None, username=None, password=None, instance_id=None ), background_mode=False ).result service_provider_id = added_service_provider_result.metadata.id wos_client.service_providers.show() print(deployment_uid) asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id,deployment_id=ai_func_deployment_uid, deployment_space_id = space_id).result['resources'][0] asset_deployment_details model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=ai_func_deployment_uid,deployment_space_id=space_id) model_asset_details_from_deployment ###Output _____no_output_____ ###Markdown Subscriptions Remove existing credit risk subscriptions This code removes previous subscriptions to the German Credit model to refresh the monitors with the new model and new data. ###Code wos_client.subscriptions.show() subscriptions = wos_client.subscriptions.list().result.subscriptions for subscription in subscriptions: sub_model_id = subscription.entity.asset.asset_id if sub_model_id == ai_function_uid: wos_client.subscriptions.delete(subscription.metadata.id) print('Deleted existing subscription for model', sub_model_id) ###Output _____no_output_____ ###Markdown This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself. 
###Code subscription_details = wos_client.subscriptions.add( data_mart_id=data_mart_id, service_provider_id=service_provider_id, asset=Asset( asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"], name=model_asset_details_from_deployment["entity"]["asset"]["name"], url=model_asset_details_from_deployment["entity"]["asset"]["url"], asset_type=AssetTypes.MODEL, input_data_type=InputDataType.STRUCTURED, problem_type=ProblemType.BINARY_CLASSIFICATION ), deployment=AssetDeploymentRequest( deployment_id=asset_deployment_details['metadata']['guid'], name=asset_deployment_details['entity']['name'], deployment_type= DeploymentTypes.ONLINE, url=asset_deployment_details['entity']['scoring_endpoint']['url'] ), asset_properties=AssetPropertiesRequest( label_column='Risk', probability_fields=['probability'], prediction_field='predictedLabel', feature_fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"], categorical_fields = ["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"], training_data_reference=TrainingDataReference(type='cos', location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME, file_name = training_data_file_name), connection=COSTrainingDataReferenceConnection.from_dict({ "resource_instance_id": COS_RESOURCE_CRN, "url": COS_ENDPOINT, "api_key": COS_API_KEY_ID, "iam_url": IAM_URL})), training_data_schema=None ), background_mode=False ).result subscription_id = subscription_details.metadata.id subscription_id import time time.sleep(5) payload_data_set_id = None payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING, target_target_id=subscription_id, target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id if payload_data_set_id is None: print("Payload data set not found. Please check subscription status.") else: print("Payload data set id: ", payload_data_set_id) wos_client.data_sets.show() ###Output _____no_output_____ ###Markdown Score the model so we can configure monitors Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model. 
###Code fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"] values = [ ["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"], ["no_checking",24,"prior_payments_delayed","furniture",4567,"500_to_1000","1_to_4",4,"male","none",4,"savings_insurance",36,"none","free",2,"management_self-employed",1,"none","yes"], ["0_to_200",26,"all_credits_paid_back","car_new",863,"less_100","less_1",2,"female","co-applicant",2,"real_estate",38,"none","own",1,"skilled",1,"none","yes"], ["0_to_200",14,"no_credits","car_new",2368,"less_100","1_to_4",3,"female","none",3,"real_estate",29,"none","own",1,"skilled",1,"none","yes"], ["0_to_200",4,"no_credits","car_new",250,"less_100","unemployed",2,"female","none",3,"real_estate",23,"none","rent",1,"management_self-employed",1,"none","yes"], ["no_checking",17,"credits_paid_to_date","car_new",832,"100_to_500","1_to_4",2,"male","none",2,"real_estate",42,"none","own",1,"skilled",1,"none","yes"], ["no_checking",33,"outstanding_credit","appliances",5696,"unknown","greater_7",4,"male","co-applicant",4,"unknown",54,"none","free",2,"skilled",1,"yes","yes"], ["0_to_200",13,"prior_payments_delayed","retraining",1375,"100_to_500","4_to_7",3,"male","none",3,"real_estate",37,"none","own",2,"management_self-employed",1,"none","yes"] ] payload_scoring = { "input_data": [ {"fields": fields,"values": values}]} scoring_response = wml_client.deployments.score(ai_func_deployment_uid, payload_scoring) print('Single record scoring result:', '\n fields:', scoring_response['predictions'][0]['fields'], '\n values: ', scoring_response['predictions'][0]['values'][0]) ###Output _____no_output_____ ###Markdown Check if WML payload logging worked else manually store payload records ###Code import uuid from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) if pl_records_count == 0: print("Payload logging did not happen, performing explicit payload logging.") wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord( scoring_id=str(uuid.uuid4()), request=payload_scoring, response={"fields": scoring_response['predictions'][0]['fields'], "values":scoring_response['predictions'][0]['values']}, response_time=460 )]) time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) ###Output _____no_output_____ ###Markdown Quality monitoring and feedback logging Enable quality monitoring The code below waits ten seconds to allow the payload logging table to be set up before it begins enabling monitors. First, it turns on the quality (accuracy) monitor and sets an alert threshold of 70%. 
OpenScale will show an alert on the dashboard if the model accuracy measurement (area under the curve, in the case of a binary classifier) falls below this threshold.The second paramater supplied, min_records, specifies the minimum number of feedback records OpenScale needs before it calculates a new measurement. The quality monitor runs hourly, but the accuracy reading in the dashboard will not change until an additional 50 feedback records have been added, via the user interface, the Python client, or the supplied feedback endpoint. ###Code import time time.sleep(10) target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "min_feedback_data_size": 50 } thresholds = [ { "metric_id": "area_under_roc", "type": "lower_limit", "value": .80 } ] quality_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID, target=target, parameters=parameters, thresholds=thresholds ).result quality_monitor_instance_id = quality_monitor_details.metadata.id quality_monitor_instance_id ###Output _____no_output_____ ###Markdown Feedback logging The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface. ###Code !rm additional_feedback_data_v2.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/additional_feedback_data_v2.json ###Output _____no_output_____ ###Markdown Get feedback logging dataset ID ###Code feedback_dataset_id = None feedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK, target_target_id=subscription_id, target_target_type=TargetTypes.SUBSCRIPTION).result print(feedback_dataset) feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id if feedback_dataset_id is None: print("Feedback data set not found. Please check quality monitor status.") with open('additional_feedback_data_v2.json') as feedback_file: additional_feedback_data = json.load(feedback_file) wos_client.data_sets.store_records(feedback_dataset_id, request_body=additional_feedback_data, background_mode=False) wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id) run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id) ###Output _____no_output_____ ###Markdown Fairness, drift monitoring and explanations The code below configures fairness monitoring for our model. It turns on monitoring for two features, Sex and Age. In each case, we must specify: * Which model feature to monitor * One or more **majority** groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes * One or more **minority** groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes * The threshold at which we would like OpenScale to display an alert if the fairness measurement falls below (in this case, 95%)Additionally, we must specify which outcomes from the model are favourable outcomes, and which are unfavourable. 
We must also provide the number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 200 records have been added. Finally, to calculate fairness, OpenScale must perform some calculations on the training data, so we provide the dataframe containing the data. ###Code wos_client.monitor_instances.show() target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "features": [ {"feature": "Sex", "majority": ['male'], "minority": ['female'], "threshold": 0.95 }, {"feature": "Age", "majority": [[26, 75]], "minority": [[18, 25]], "threshold": 0.95 } ], "favourable_class": ["No Risk"], "unfavourable_class": ["Risk"], "min_records": 100 } fairness_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID, target=target, parameters=parameters).result fairness_monitor_instance_id =fairness_monitor_details.metadata.id fairness_monitor_instance_id ###Output _____no_output_____ ###Markdown Drift configuration ###Code monitor_instances = wos_client.monitor_instances.list().result.monitor_instances for monitor_instance in monitor_instances: monitor_def_id=monitor_instance.entity.monitor_definition_id if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id: wos_client.monitor_instances.delete(monitor_instance.metadata.id) print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id) target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "min_samples": 100, "drift_threshold": 0.1, "train_drift_model": True, "enable_model_drift": False, "enable_data_drift": True } drift_monitor_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID, target=target, parameters=parameters ).result drift_monitor_instance_id = drift_monitor_details.metadata.id drift_monitor_instance_id ###Output _____no_output_____ ###Markdown Score the model again now that monitoring is configured This next section randomly selects 200 records from the data feed and sends those records to the model for predictions. This is enough to exceed the minimum threshold for records set in the previous section, which allows OpenScale to begin calculating fairness. 
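###Markdown Before generating that traffic, it is worth being explicit about what the fairness number means: for each monitored feature, OpenScale compares the rate of favourable outcomes ("No Risk") received by the minority group against the rate received by the majority group, and the 0.95 threshold configured above applies to that ratio. The cell below is an illustrative sketch of this disparate-impact style calculation on a toy DataFrame; it is not OpenScale's internal implementation. ###Code
import pandas as pd

def favourable_rate_ratio(df, feature, minority, majority,
                          favourable="No Risk", label="predictedLabel"):
    # Share of favourable predictions in each group; an alert is raised when the
    # minority/majority ratio drops below the configured threshold (0.95 here).
    minority_rate = (df[df[feature].isin(minority)][label] == favourable).mean()
    majority_rate = (df[df[feature].isin(majority)][label] == favourable).mean()
    return minority_rate / majority_rate

toy = pd.DataFrame({"Sex": ["male", "male", "female", "female"],
                    "predictedLabel": ["No Risk", "No Risk", "No Risk", "Risk"]})
print(favourable_rate_ratio(toy, "Sex", minority=["female"], majority=["male"]))
###Output _____no_output_____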
###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/german_credit_feed.json !ls -lh german_credit_feed.json ###Output _____no_output_____ ###Markdown Score 200 randomly chosen records ###Code import random with open('german_credit_feed.json', 'r') as scoring_file: scoring_data = json.load(scoring_file) fields = scoring_data['fields'] values = [] for _ in range(200): values.append(random.choice(scoring_data['values'])) payload_scoring = {"input_data": [{"fields": fields, "values": values}]} scoring_response = wml_client.deployments.score(ai_func_deployment_uid, payload_scoring) time.sleep(5) if pl_records_count == 8: print("Payload logging did not happen, performing explicit payload logging.") wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord( scoring_id=str(uuid.uuid4()), request=payload_scoring, response=scoring_response, response_time=460 )]) time.sleep(5) pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id) print("Number of records in the payload logging table: {}".format(pl_records_count)) print('Number of records in payload table: ', wos_client.data_sets.get_records_count(data_set_id=payload_data_set_id)) ###Output _____no_output_____ ###Markdown Run fairness monitor Kick off a fairness monitor run on current data. The monitor runs hourly, but can be manually initiated using the Python client, the REST API, or the graphical user interface. ###Code time.sleep(5) run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False) time.sleep(10) wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id) ###Output _____no_output_____ ###Markdown Run drift monitor Kick off a drift monitor run on current data. The monitor runs every hour, but can be manually initiated using the Python client, the REST API. ###Code drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False) time.sleep(5) wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id) ###Output _____no_output_____ ###Markdown Configure Explainability Finally, we provide OpenScale with the training data to enable and configure the explainability features. 
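###Markdown The explanation tasks requested later in this notebook use the `lime` and `contrastive` explanation types. As a rough intuition for the LIME-style part (an illustrative sketch only, not OpenScale's implementation, with all names made up for illustration): perturb a single record, score the perturbed copies with the model, and fit a simple local surrogate whose coefficients indicate which features drive the prediction near that record. ###Code
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(predict_fn, record, n_samples=500, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the record with Gaussian noise (numeric features only in this sketch).
    perturbed = record + rng.normal(0.0, scale, size=(n_samples, record.shape[0]))
    preds = predict_fn(perturbed)
    # The surrogate's coefficients rank feature influence in the neighbourhood of `record`.
    surrogate = Ridge(alpha=1.0).fit(perturbed - record, preds)
    return surrogate.coef_

# Toy usage: a fake scoring function whose output depends mostly on the first feature.
toy_predict = lambda X: 0.8 * X[:, 0] + 0.1 * X[:, 1]
print(local_explanation(toy_predict, np.array([1.0, 2.0, 3.0])))
###Output _____no_output_____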
###Code target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "enabled": True } explainability_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID, target=target, parameters=parameters ).result explainability_monitor_id = explainability_details.metadata.id ###Output _____no_output_____ ###Markdown Run explanation for sample record ###Code pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]] print("Running explanations on scoring IDs: {}".format(scoring_ids)) explanation_types = ["lime", "contrastive"] result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result print(result) ###Output _____no_output_____ ###Markdown Custom monitors and metrics Register custom monitor ###Code def get_definition(monitor_name): monitor_definitions = wos_client.monitor_definitions.list().result.monitor_definitions for definition in monitor_definitions: if monitor_name == definition.entity.name: return definition return None monitor_name = 'my model performance' metrics = [MonitorMetricRequest(name='sensitivity', thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.8)]), MonitorMetricRequest(name='specificity', thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.75)])] tags = [MonitorTagRequest(name='region', description='customer geographical region')] existing_definition = get_definition(monitor_name) if existing_definition is None: custom_monitor_details = wos_client.monitor_definitions.add(name=monitor_name, metrics=metrics, tags=tags, background_mode=False).result else: custom_monitor_details = existing_definition ###Output _____no_output_____ ###Markdown Show available monitors types ###Code wos_client.monitor_definitions.show() ###Output _____no_output_____ ###Markdown Get monitors uids and details ###Code custom_monitor_id = custom_monitor_details.metadata.id print(custom_monitor_id) custom_monitor_details = wos_client.monitor_definitions.get(monitor_definition_id=custom_monitor_id).result print('Monitor definition details:', custom_monitor_details) ###Output _____no_output_____ ###Markdown Enable custom monitor for subscription ###Code target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) thresholds = [MetricThresholdOverride(metric_id='sensitivity', type = MetricThresholdTypes.LOWER_LIMIT, value=0.9)] custom_monitor_instance_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=custom_monitor_id, target=target ).result ###Output _____no_output_____ ###Markdown Get monitor instance id and configuration details ###Code custom_monitor_instance_id = custom_monitor_instance_details.metadata.id custom_monitor_instance_details = wos_client.monitor_instances.get(custom_monitor_instance_id).result print(custom_monitor_instance_details) ###Output _____no_output_____ ###Markdown Storing custom metrics ###Code from datetime import datetime, timezone, timedelta from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MonitorMeasurementRequest custom_monitoring_run_id = "11122223333111abc" measurement_request = [MonitorMeasurementRequest(timestamp=datetime.now(timezone.utc), 
metrics=[{"specificity": 0.78, "sensitivity": 0.67, "region": "us-south"}], run_id=custom_monitoring_run_id)] print(measurement_request[0]) published_measurement_response = wos_client.monitor_instances.measurements.add( monitor_instance_id=custom_monitor_instance_id, monitor_measurement_request=measurement_request).result published_measurement_id = published_measurement_response[0]["measurement_id"] print(published_measurement_response) ###Output _____no_output_____ ###Markdown List and get custom metrics ###Code time.sleep(5) published_measurement = wos_client.monitor_instances.measurements.get(monitor_instance_id=custom_monitor_instance_id, measurement_id=published_measurement_id).result print(published_measurement) ###Output _____no_output_____ ###Markdown Historical data ###Code historyDays = 7 ###Output _____no_output_____ ###Markdown Insert historical payloads The next section of the notebook downloads and writes historical data to the payload and measurement tables to simulate a production model that has been monitored and receiving regular traffic for the last seven days. This historical data can be viewed in the Watson OpenScale user interface. The code uses the Python and REST APIs to write this data. ###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_fairness_v2.json !ls -lh history_fairness_v2.json from datetime import datetime, timedelta, timezone with open('history_fairness_v2.json', 'r') as history_file: payloads = json.load(history_file) for day in range(historyDays): print('Loading day', day + 1) daily_measurement_requests = [] for hour in range(24): score_time = datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1))) index = (day * 24 + hour) % len(payloads) # wrap around and reuse values if needed measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [payloads[index][0], payloads[index][1]]) daily_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical debias metrics ###Code !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_debias_v2.json !ls -lh history_debias_v2.json with open('history_debias_v2.json', 'r') as history_file: payloads = json.load(history_file) for day in range(historyDays): print('Loading day', day + 1) daily_measurement_requests = [] for hour in range(24): score_time = datetime.now(timezone.utc) + timedelta(hours=(-(24*day + hour + 1))) index = (day * 24 + hour) % len(payloads) # wrap around and reuse values if needed measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [payloads[index][0], payloads[index][1]]) daily_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=fairness_monitor_instance_id, monitor_measurement_request=daily_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical quality metrics ###Code measurements = [0.76, 0.78, 0.68, 0.72, 0.73, 0.77, 0.80] for day in range(historyDays): quality_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 
1))) score_time = score_time.isoformat() + "Z" metric = {"area_under_roc": measurements[day]} measurement_request = MonitorMeasurementRequest(timestamp=score_time,metrics = [metric]) quality_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=quality_monitor_instance_id, monitor_measurement_request=quality_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical confusion matrixes ###Code !rm history_quality_metrics.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_quality_metrics.json !ls -lh history_quality_metrics.json from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Source with open('history_quality_metrics.json') as json_file: records = json.load(json_file) for day in range(historyDays): index = 0 cm_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1))) score_time = score_time.isoformat() + "Z" metric = records[index]['metrics'] source = records[index]['sources'] measurement_request = {"timestamp": score_time, "metrics": [metric], "sources": [source]} cm_measurement_requests.append(measurement_request) index+=1 response = wos_client.monitor_instances.measurements.add(monitor_instance_id=quality_monitor_instance_id, monitor_measurement_request=cm_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical performance metrics ###Code target = Target( target_type=TargetTypes.INSTANCE, target_id=payload_data_set_id ) performance_monitor_instance_details = wos_client.monitor_instances.create( data_mart_id=data_mart_id, background_mode=False, monitor_definition_id=wos_client.monitor_definitions.MONITORS.PERFORMANCE.ID, target=target ).result performance_monitor_instance_id = performance_monitor_instance_details.metadata.id for day in range(historyDays): performance_measurement_requests = [] print('Loading day', day + 1) for hour in range(24): score_time = datetime.utcnow() + timedelta(hours=(-(24*day + hour + 1))) score_time = score_time.isoformat() + "Z" score_count = random.randint(60, 600) metric = {"record_count": score_count, "data_set_type": "scoring_payload"} measurement_request = {"timestamp": score_time, "metrics": [metric]} performance_measurement_requests.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=performance_monitor_instance_id, monitor_measurement_request=performance_measurement_requests).result print('Finished') ###Output _____no_output_____ ###Markdown Insert historical drift measurements ###Code !rm history_drift_measurement_*.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_0.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_1.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_2.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_3.json !wget 
https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_4.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_5.json !wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/historical_data/credit_risk/history_drift_measurement_6.json !ls -lh history_drift_measurement_*.json for day in range(historyDays): drift_measurements = [] with open("history_drift_measurement_{}.json".format(day), 'r') as history_file: drift_daily_measurements = json.load(history_file) print('Loading day', day + 1) #Historical data contains 8 records per day - each represents 3 hour drift window. for nb_window, records in enumerate(drift_daily_measurements): for record in records: window_start = datetime.utcnow() + timedelta(hours=(-(24 * day + (nb_window+1)*3 + 1))) # first_payload_record_timestamp_in_window (oldest) window_end = datetime.utcnow() + timedelta(hours=(-(24 * day + nb_window*3 + 1)))# last_payload_record_timestamp_in_window (most recent) #modify start and end time for each record record['sources'][0]['data']['start'] = window_start.isoformat() + "Z" record['sources'][0]['data']['end'] = window_end.isoformat() + "Z" metric = record['metrics'][0] source = record['sources'][0] measurement_request = {"timestamp": window_start.isoformat() + "Z", "metrics": [metric], "sources": [source]} drift_measurements.append(measurement_request) response = wos_client.monitor_instances.measurements.add( monitor_instance_id=drift_monitor_instance_id, monitor_measurement_request=drift_measurements).result print("Daily loading finished.") ###Output _____no_output_____ ###Markdown Additional data to help debugging ###Code print('Datamart:', data_mart_id) print('Model:', model_uid) print('Deployment:', ai_func_deployment_uid) ###Output _____no_output_____ ###Markdown Identify transactions for Explainability Transaction IDs identified by the cells below can be copied and pasted into the Explainability tab of the OpenScale dashboard. ###Code wos_client.data_sets.show_records(payload_data_set_id, limit=5) ###Output _____no_output_____
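###Markdown If you prefer to collect the transaction IDs programmatically rather than reading them off the table above, the short sketch below gathers them using the same record structure already accessed earlier in this notebook. ###Code
# Sketch: gather a handful of scoring (transaction) IDs for the Explainability tab.
records = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id,
                                                   limit=5, offset=0).result["records"]
print([r["entity"]["values"]["scoring_id"] for r in records])
###Output _____no_output_____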
.ipynb_checkpoints/200716_Simple_RL_SC_Example-checkpoint.ipynb
###Markdown A simple supply chain model using Reinforcement Learning
Daniel Sepulveda-Estay 2020

What is reinforcement learning?
Reinforcement learning is an area of machine learning that models agents that take actions in an environment and attempt to maximize a reward accumulated over time. It is therefore concerned with optimizing the sequential decision making of these agents in a potentially complex environment. Along with supervised and unsupervised learning, reinforcement learning is one of the three basic machine learning paradigms.

The agent uses trial and error to discover a solution to the problem of maximizing the reward and minimizing the penalties it incurs, depending on the decisions it takes. The designer of the test environment and of the agent defines the reward policy, that is, the rules of the game, yet no hints or suggestions are given in the model about how to solve the problem. The agent learns to maximize the reward, first by performing random trials that gradually evolve into the use of more sophisticated tactics.

What are the parts of a reinforcement learning model?
A reinforcement learning model is formed by an Agent that interacts with and learns from an Environment. The interaction happens through Actions, and the learning happens through the States and Rewards that the Agent receives from the Environment. This relationship is shown in the next figure. ![Image of Reinforcement Learning Model](https://www.kdnuggets.com/images/reinforcement-learning-fig1-700.jpg)

Model Description
In order to explore the use of reinforcement learning in supply chains, I have implemented a model in Python which I develop here step by step.

Agent
This model consists of a single warehouse subject to a stochastic customer demand, for example uniform between values a and b, U[a,b]. The warehouse is therefore the decision-making agent in this model.

Reward
The objective of this warehouse is to maximize a reward, namely the revenue it obtains, composed of income minus expenses. The Reward experienced by this Warehouse is the total revenue, which consists of
a.- the sales price of the items sold
b.- MINUS the holding cost of the inventory
c.- MINUS the penalty for customer demand not met

Actions
The warehouse holds inventory (no maximum, and held in whole quantities) and can take two actions:
a.- sell inventory to a customer
b.- receive inventory from its supplier
The warehouse can only take one of these actions at a time.

State
The State of the model is the warehouse inventory level.

Model Development

Required Libraries
We first identify the libraries that will be needed. ###Code
import numpy as np    # Library required for matrix creation and manipulation
import math
import time           # Library required to calculate execution time
import pandas as pd   # Library required to manipulate DataFrames with the model results

np.set_printoptions(precision=2, suppress=True)
###Output _____no_output_____
###Markdown Model parameters
The model has cost and price parameters that can be defined at the beginning of the model and which will not substantially change when the model is executed. ###Code
cost_inventory = 2    # The cost of holding inventory
purchase_price = 20   # The price at which the inventory is bought
sales_price = 50      # The price at which the inventory is sold
###Output _____no_output_____
###Markdown The Warehouse has as many states as it has units. The number of units the warehouse will be able to contain is between 0 and max_state (Warehouse size).
Also, the initial state (initial warehouse stock) will be a random number between 0 and max_state. ###Code max_state = 800 initial_state = np.random.randint(max_state) ###Output _____no_output_____ ###Markdown R Matrix, the "reward structure matrix" is defined as a square matrix of size (max_state x max_state). It contains the rewards and therefore also defones the possible actions that can be taken. First we initialize the R Matrix, We then fill it with the reward values -The X and Y coordinates of the Matrix correspond to the initial and final values of the decision. For example, the R Matrix value at [20, 45] represents that the state is moving from 20 to 45, this means that 25 stock items have been added to the warehouse. Therefore the reward is negative and corresponding to the cost of purchasing the 25 new stock items PLUS the cost of maintaining the inventory at the beginning of the period. ###Code R = np.matrix(np.zeros([max_state,max_state])) for y in range(0, max_state): for x in range(0, max_state): R[x,y] = np.maximum((x-y)*sales_price,0)-np.maximum((y-x)*purchase_price,0)-x*cost_inventory print('R: \n', R) ###Output R: [[ 0. -20. -40. ... -15940. -15960. -15980.] [ 48. -2. -22. ... -15922. -15942. -15962.] [ 96. 46. -4. ... -15904. -15924. -15944.] ... [ 38256. 38206. 38156. ... -1594. -1614. -1634.] [ 38304. 38254. 38204. ... -1546. -1596. -1616.] [ 38352. 38302. 38252. ... -1498. -1548. -1598.]] ###Markdown The Q matrix sapture the total future reward for an agent from a given state after a certain action, and it has the same dimensions than the R Matrix. It is initialized at the time of creation with random numbers. ###Code Q = np.matrix(np.random.random([max_state,max_state])) ###Output _____no_output_____ ###Markdown The learning and discount processes are also given parameters ###Code learning_rate = 0.5 discount = 0.7 EPISODES =3000 STEPS = 200 PRINT_EVERY = EPISODES/50 Pepsilon_init = 0.8 # initial value for the decayed-epsilon-greedy method Pepsilon_end = 0.1 # final value for the decayed-epsilon-greedy method ###Output _____no_output_____ ###Markdown We now start defining the functions used. First a function to determine the available actions from which the next action can be chosen. ###Code def available_actions(state, customers): # The available actions are # a.- Meeting a customer requirement (going to s: state-order) # b.- Buying Inventory from Supplier (going to s: state+purchase) purchase = np.arange(state, max_state) # Calculate all possible future states due to purchases from the current state # print('Purchase: ',purchase) new_customers_state =[] new_customers_state = [np.maximum(state-x,0) for x in customers] # calculate the possible states from customers in the current state # print('new_customers_state: ', new_customers_state) return np.concatenate((purchase,new_customers_state)) ###Output _____no_output_____ ###Markdown Next, we define a function that chooses at random which action to be performed within the range of available actions. This function defines what will be the future action. There is a range of options which basically are:1. Always choose the action that has the maximum Q value for the current_state (exploit)1. Always choose a random action from the available actions for the current_state (explore)1. 
Alternate between exploit and explore with a certain probability (which may change over time)The general recommnedation is to start exploring most of the time and gradually decrease exploration to maximize exploitation.For this we use the decayed-epsilon-greedy method where epsilon is the probability that a random action is chosen. ###Code def sample_next_action(available_act, epsilon): # here we choose the type of next action to take. 1 for a random next action with probability epsilon # and 0 for a greedy next action with probability (1-epsilon) random_action = np.random.binomial(n=1, p=epsilon, size=1) if random_action == 1: # This is the option for full exploration - always random # print('random action') next_action = int(np.random.choice(available_act, 1)) else: # This is the option for full exploitation - always use what we know (Greedy method) # Choose the next actions from the available actions, and the one with the highest Q Value # print('greedy action') next_action = np.int(np.where(Q[current_state,] == np.max(Q[current_state,available_act]))[1]) # This section just caluclates the amount that is being sold or purchsed, if at all if next_action < current_state: Qsale = current_state-next_action Qpurchase = 0 else: Qpurchase = next_action - current_state Qsale = 0 return next_action, Qsale, Qpurchase def cost_inventory_backlog(current_state): if current_state<=0: return cost_backlog else: return cost_inventory ###Output _____no_output_____ ###Markdown The folowing function updates the Q table with a formula that requires the following parameters: * Q[current_state, action] = value to update. In the case of this model, the action is the future state value the Agent decided to take.* learning_rate = value between 0 and 1 indicating how much new information overrides old information.* R[current_state, action] = Reward obtained when transitioning from current_state to future_state = Action* max a for Q[future_state, a] = the max Q value of the possible Actions a in the future_state ![Image of QFormula](https://randomant.net/images/algorithm-behind-curtain-2/q_learning_algorithm.jpg) ###Code # this function updates the Q matrix according to the path selected and the Q learning algorithm def update(current_state, action): max_index = np.where(Q[action,] == np.max(Q[action,]))[1] # index for the maximum Q value in the future_state # print('Q[action,]: \n', Q[action,]) # print('Current State: ', current_state) # print('Action: ', action) # print('Max Index:', max_index) # just in case there are more than one maximums, in which case we choose one at random if max_index.shape[0] > 1: max_index = int(np.random.choice(max_index, size=1)) else: max_index = int(max_index) max_value = Q[action, max_index] # this is the maximum Q valuein the future state given the action that generates that maximum value # Q learning formula Q[current_state, action] = (1-learning_rate)*Q[current_state, action] + learning_rate*(R[current_state, action] + discount*max_value) ###Output _____no_output_____ ###Markdown Training the simulation ###Code # Training start_time = time.time() epsilon = Pepsilon_init epsilon_delta = (Pepsilon_init - Pepsilon_end)/EPISODES calculation_times = [] total_reward = [] total_demand =[] total_jump = [] jump_max = [] jump_min = [] jump_av =[] jump_sd = [] total_state= [] state_max = [] state_min = [] state_av =[] state_sd = [] current_state = 0 for episode in range(EPISODES): # Initialize values for Step total_reward_episode = 0 start_time_episode = time.time() state_episode = [] 
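# Per-episode trackers: states visited and jump sizes (action minus current state) observed in this episode.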
jump_episode = [] # determine if this eisode id generating a status output if episode%PRINT_EVERY == 0: print('Calc. Episode {} of {}, {:.1f}% progress'.format(episode, EPISODES, 100*(episode/EPISODES))) # Execute the steps in the Episode for step in range(STEPS): # Create a customer for this step customers = [] customers.append(np.random.randint(0,max_state)) # Calculate the actions (future states) that are available from current state available_act = available_actions(current_state, customers) # Choose an action from the available future states action = sample_next_action(available_act, epsilon) # Update the Q table update(current_state, action[0]) # record the states for the step # av_demand_episode = total_state.append(current_state) state_episode.append(current_state) total_demand.append(customers[0]) total_reward_episode += R[current_state, action[0]] total_jump.append(action[0] - current_state) jump_episode.append(action[0] - current_state) # update the state for the next step current_state = action[0] # record the states for the Episode total_reward.append(total_reward_episode) # Total reward for the episode calculation_times.append(time.time()-start_time_episode) jump_max.append(np.max(jump_episode)) jump_min.append(np.min(jump_episode)) jump_av.append(np.mean(jump_episode)) jump_sd.append(np.std(jump_episode)) state_max.append(np.max(state_episode)) state_min.append(np.min(state_episode)) state_av.append(np.mean(state_episode)) state_sd.append(np.std(state_episode)) # Update parameters for the next episode epsilon = Pepsilon_init - episode*epsilon_delta current_state = np.random.randint(0, int(Q.shape[0])) # print out the total calculation time print('total calculation time: {:.2f} seconds'.format(time.time()-start_time)) ###Output Calc. Episode 0 of 3000, 0.0% progress Calc. Episode 60 of 3000, 2.0% progress Calc. Episode 120 of 3000, 4.0% progress Calc. Episode 180 of 3000, 6.0% progress Calc. Episode 240 of 3000, 8.0% progress Calc. Episode 300 of 3000, 10.0% progress Calc. Episode 360 of 3000, 12.0% progress Calc. Episode 420 of 3000, 14.0% progress Calc. Episode 480 of 3000, 16.0% progress Calc. Episode 540 of 3000, 18.0% progress Calc. Episode 600 of 3000, 20.0% progress Calc. Episode 660 of 3000, 22.0% progress Calc. Episode 720 of 3000, 24.0% progress Calc. Episode 780 of 3000, 26.0% progress Calc. Episode 840 of 3000, 28.0% progress Calc. Episode 900 of 3000, 30.0% progress Calc. Episode 960 of 3000, 32.0% progress Calc. Episode 1020 of 3000, 34.0% progress Calc. Episode 1080 of 3000, 36.0% progress Calc. Episode 1140 of 3000, 38.0% progress Calc. Episode 1200 of 3000, 40.0% progress Calc. Episode 1260 of 3000, 42.0% progress Calc. Episode 1320 of 3000, 44.0% progress Calc. Episode 1380 of 3000, 46.0% progress Calc. Episode 1440 of 3000, 48.0% progress Calc. Episode 1500 of 3000, 50.0% progress Calc. Episode 1560 of 3000, 52.0% progress Calc. Episode 1620 of 3000, 54.0% progress Calc. Episode 1680 of 3000, 56.0% progress Calc. Episode 1740 of 3000, 58.0% progress Calc. Episode 1800 of 3000, 60.0% progress Calc. Episode 1860 of 3000, 62.0% progress Calc. Episode 1920 of 3000, 64.0% progress Calc. Episode 1980 of 3000, 66.0% progress Calc. Episode 2040 of 3000, 68.0% progress Calc. Episode 2100 of 3000, 70.0% progress Calc. Episode 2160 of 3000, 72.0% progress Calc. Episode 2220 of 3000, 74.0% progress Calc. Episode 2280 of 3000, 76.0% progress Calc. Episode 2340 of 3000, 78.0% progress Calc. Episode 2400 of 3000, 80.0% progress Calc. 
Episode 2460 of 3000, 82.0% progress Calc. Episode 2520 of 3000, 84.0% progress Calc. Episode 2580 of 3000, 86.0% progress Calc. Episode 2640 of 3000, 88.0% progress Calc. Episode 2700 of 3000, 90.0% progress Calc. Episode 2760 of 3000, 92.0% progress Calc. Episode 2820 of 3000, 94.0% progress Calc. Episode 2880 of 3000, 96.0% progress Calc. Episode 2940 of 3000, 98.0% progress total calculation time: 88.39 seconds ###Markdown Plot the results ###Code import matplotlib.pyplot as plt from matplotlib.ticker import StrMethodFormatter %matplotlib inline fig, axs = plt.subplots(nrows=2, ncols=3, figsize=(20, 12)) MA_total_reward = pd.DataFrame(total_reward) Rolling_total_reward = MA_total_reward.rolling(window=5).mean() axs[0,0].plot(MA_total_reward, label='Episode') axs[0,0].plot(Rolling_total_reward, color='r', label ='MA(5)') axs[0,0].set_title('Total Rewards') axs[0,0].set_ylabel('Rewards [$]') axs[0,0].set_xlabel('Episodes') axs[0,0].grid(axis='y', alpha=0.75) axs[0,0].grid(axis='x', alpha=0.75) axs[1,0].plot(calculation_times) axs[1,0].set_title('Calc. Times') axs[1,0].set_xlabel('Episodes') axs[1,0].set_ylabel('Calculation times [s]') axs[1,0].grid(axis='y', alpha=0.75) axs[1,0].grid(axis='x', alpha=0.75) axs[0,1].hist(total_state,color='#0504aa',alpha=0.7, rwidth=0.85) axs[0,1].set_title('States Histogram') axs[0,1].set_xlabel('State') axs[0,1].set_ylabel('Frequency') axs[0,1].set_xlim(xmin=0, xmax=max_state) axs[0,1].grid(axis='y', alpha=0.75) axs[1,1].plot(jump_max,color='b', label = 'max') axs[1,1].plot(jump_min,color='r', label = 'min') axs[1,1].plot(jump_av,color='g', label = 'av') axs[1,1].plot(jump_sd,color='y', label = 'sd') axs[1,1].set_title('Jumps') axs[1,1].legend() axs[1,1].set_xlabel('Episode') axs[1,1].set_ylabel('Jump Value') axs[1,1].grid(axis='y', alpha=0.75) axs[0,2].hist(total_jump,color='#0504aa',alpha=0.7, rwidth=0.85) axs[0,2].set_title('Jump Histogram') axs[0,2].set_xlabel('New_State-Old_State') axs[0,2].set_ylabel('Frequency') axs[0,2].set_xlim(xmin=-max_state, xmax=max_state) axs[0,2].grid(axis='y', alpha=0.75) axs[1,2].plot(state_max,color='b', label = 'max') axs[1,2].plot(state_min,color='r', label = 'min') axs[1,2].plot(state_av,color='g', label = 'av') axs[1,2].plot(state_sd,color='y', label = 'sd') axs[1,2].set_title('States') axs[1,2].legend() axs[1,2].set_xlabel('Episode') axs[1,2].set_ylabel('State Value') axs[1,2].grid(axis='y', alpha=0.75) plt.tight_layout() plt.show() MA_total_reward = pd.DataFrame(total_reward) Rolling_total_reward = MA_total_reward.rolling(window=50).mean() plt.figure(figsize=(10,8)) plt.plot(MA_total_reward, label='Episode') plt.plot(Rolling_total_reward, color='r', label ='MA(50)') plt.title('Total Rewards') plt.ylabel('Rewards [$]') plt.xlabel('Episodes') plt.legend() plt.show() ###Output _____no_output_____
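###Markdown Once training has finished, the learned Q matrix implies a greedy policy: for every inventory level, pick the action (next state) with the highest Q value. The cell below is a small sketch of how that policy could be read out of Q; it is not part of the original training loop. ###Code
# Sketch: extract the greedy action (next state) for each inventory level from the learned Q matrix.
greedy_policy = np.asarray(Q.argmax(axis=1)).flatten()
print("Greedy next state when inventory is 100 units:", greedy_policy[100])
###Output _____no_output_____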
07_Hyperparameter-Tuning-via-Scikit.ipynb
###Markdown Grid Search ###Code import pandas as pd import numpy as np from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV df = pd.read_csv("train.csv",sep=";") df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 45211 entries, 0 to 45210 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 45211 non-null int64 1 job 45211 non-null object 2 marital 45211 non-null object 3 education 45211 non-null object 4 default 45211 non-null object 5 balance 45211 non-null int64 6 housing 45211 non-null object 7 loan 45211 non-null object 8 contact 45211 non-null object 9 day 45211 non-null int64 10 month 45211 non-null object 11 duration 45211 non-null int64 12 campaign 45211 non-null int64 13 pdays 45211 non-null int64 14 previous 45211 non-null int64 15 poutcome 45211 non-null object 16 y 45211 non-null object dtypes: int64(7), object(10) memory usage: 5.9+ MB ###Markdown Convert the target variable to integer ###Code df['y'] = df['y'].map({'yes':1,'no':0}) ###Output _____no_output_____ ###Markdown Split full data into train and test data ###Code df_train, df_test = train_test_split(df, test_size=0.1, random_state=0) ###Output _____no_output_____ ###Markdown Get distribution of the target variable ###Code df_train['y'].value_counts(True) df_test['y'].value_counts(True) ###Output _____no_output_____ ###Markdown Get only numerical features from the train data ###Code X_train_numerical = df_train.select_dtypes(include=np.number).drop(columns=['y']) y_train = df_train['y'] X_train_numerical.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 40689 entries, 17974 to 2732 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 40689 non-null int64 1 balance 40689 non-null int64 2 day 40689 non-null int64 3 duration 40689 non-null int64 4 campaign 40689 non-null int64 5 pdays 40689 non-null int64 6 previous 40689 non-null int64 dtypes: int64(7) memory usage: 2.5 MB ###Markdown Get only numerical features from the test data ###Code X_test_numerical = df_test.select_dtypes(include=np.number).drop(columns=['y']) y_test = df_test['y'] X_test_numerical.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 4522 entries, 14001 to 25978 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 4522 non-null int64 1 balance 4522 non-null int64 2 day 4522 non-null int64 3 duration 4522 non-null int64 4 campaign 4522 non-null int64 5 pdays 4522 non-null int64 6 previous 4522 non-null int64 dtypes: int64(7) memory usage: 282.6 KB ###Markdown Calculate F1-Score on Test Data without Hyperparameter Tuning ###Code # Fit the model on train data model = RandomForestClassifier(random_state=0) model.fit(X_train_numerical,y_train) # Evaluate the model on the test data y_pred = model.predict(X_test_numerical) print(f1_score(y_test, y_pred)) ###Output 0.43667068757539196 ###Markdown Defining the Hyperparameter Space for experiment using numerical features only ###Code hyperparameter_space = { "n_estimators": [25,50,100,150,200], "criterion": ["gini", "entropy"], "class_weight": ["balanced","balanced_subsample"], "min_samples_split": [0.01,0.1,0.25,0.5,0.75,1.0], } 
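# Note on the search space above: GridSearchCV enumerates every combination of these values,
# i.e. 5 (n_estimators) x 2 (criterion) x 2 (class_weight) x 6 (min_samples_split) = 120
# candidates; with cv=5 that means 120 * 5 = 600 model fits, which matches the
# "Fitting 5 folds for each of 120 candidates, totalling 600 fits" log shown later.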
###Output _____no_output_____ ###Markdown Perform Grid Search on numerical features only ###Code # Initiate the model model = RandomForestClassifier(random_state=0) # Initiate the Grid Search Class clf = GridSearchCV(model, hyperparameter_space, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the Grid Search CV clf.fit(X_train_numerical, y_train) ###Output Fitting 5 folds for each of 120 candidates, totalling 600 fits ###Markdown Get the best set of hyperparameters along with the average F1-Score from the 5-folds CV ###Code clf.best_params_,clf.best_score_ ###Output _____no_output_____ ###Markdown Calculate the F1-Score on Test Data after fitting the model on the full train data using the best set of hyperparameters ###Code clf.score(X_test_numerical,y_test) ###Output _____no_output_____ ###Markdown Start of the experiment with all features ###Code # Get list of numerical features numerical_feats = list(df_train.drop(columns='y').select_dtypes(include=np.number).columns) # Get list of categorical features categorical_feats = list(df_train.drop(columns='y').select_dtypes(exclude=np.number).columns) ###Output _____no_output_____ ###Markdown Initiate the preprocessors ###Code # Initiate the Normalization Pre-processing for Numerical Features numeric_preprocessor = StandardScaler() # Initiate the One-Hot-Encoding Pre-processing for Categorical Features categorical_preprocessor = OneHotEncoder(handle_unknown="ignore") ###Output _____no_output_____ ###Markdown Create the ColumnTransformer Class to delegate each preprocessor to the corresponding features ###Code preprocessor = ColumnTransformer( transformers=[ ("num", numeric_preprocessor, numerical_feats), ("cat", categorical_preprocessor, categorical_feats), ] ) ###Output _____no_output_____ ###Markdown Create a Pipeline of preprocessor and model ###Code pipe = Pipeline( steps=[("preprocessor", preprocessor), ("model", RandomForestClassifier(random_state=0))] ) ###Output _____no_output_____ ###Markdown Get all features from the train data ###Code X_train_full = df_train.drop(columns=['y']) y_train = df_train['y'] X_train_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 40689 entries, 17974 to 2732 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 40689 non-null int64 1 job 40689 non-null object 2 marital 40689 non-null object 3 education 40689 non-null object 4 default 40689 non-null object 5 balance 40689 non-null int64 6 housing 40689 non-null object 7 loan 40689 non-null object 8 contact 40689 non-null object 9 day 40689 non-null int64 10 month 40689 non-null object 11 duration 40689 non-null int64 12 campaign 40689 non-null int64 13 pdays 40689 non-null int64 14 previous 40689 non-null int64 15 poutcome 40689 non-null object dtypes: int64(7), object(9) memory usage: 5.3+ MB ###Markdown Get all features from the test data ###Code X_test_full = df_test.drop(columns=['y']) y_test = df_test['y'] X_test_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 4522 entries, 14001 to 25978 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 4522 non-null int64 1 job 4522 non-null object 2 marital 4522 non-null object 3 education 4522 non-null object 4 default 4522 non-null object 5 balance 4522 non-null int64 6 housing 4522 non-null object 7 loan 4522 non-null object 8 contact 4522 non-null object 9 day 4522 non-null int64 10 month 4522 non-null object 11 duration 4522 non-null int64 12 campaign 
4522 non-null int64 13 pdays 4522 non-null int64 14 previous 4522 non-null int64 15 poutcome 4522 non-null object dtypes: int64(7), object(9) memory usage: 600.6+ KB ###Markdown Calculate F1-Score on Test Data without Hyperparameter Tuning ###Code # Fit the pipeline on train data pipe.fit(X_train_full,y_train) # Evaluate on the test data y_pred = pipe.predict(X_test_full) print(f1_score(y_test, y_pred)) ###Output 0.5164319248826291 ###Markdown Re-define the Hyperparameter Space to match the expected input of the `Pipeline` object. ###Code hyperparameter_space = { "model__n_estimators": [25,50,100,150,200], "model__criterion": ["gini", "entropy"], "model__class_weight": ["balanced","balanced_subsample"], "model__min_samples_split": [0.01,0.1,0.25,0.5,0.75,1.0], } ###Output _____no_output_____ ###Markdown Perform Grid Search on all features ###Code # Initiate the Grid Search Class clf = GridSearchCV(pipe, hyperparameter_space, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the Grid Search CV clf.fit(X_train_full, y_train) ###Output Fitting 5 folds for each of 120 candidates, totalling 600 fits ###Markdown Get the best set of hyperparameters along with the average F1-Score from the 5-folds CV ###Code clf.best_params_,clf.best_score_ ###Output _____no_output_____ ###Markdown Get the f1-Score on Test Data after fitting the model on the full train data using the best set of hyperparameters ###Code clf.score(X_test_full,y_test) ###Output _____no_output_____ ###Markdown Random Search ###Code import pandas as pd import numpy as np from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.ensemble import RandomForestClassifier from scipy.stats import randint,truncnorm from sklearn.model_selection import RandomizedSearchCV df = pd.read_csv("train.csv",sep=";") df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 45211 entries, 0 to 45210 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 45211 non-null int64 1 job 45211 non-null object 2 marital 45211 non-null object 3 education 45211 non-null object 4 default 45211 non-null object 5 balance 45211 non-null int64 6 housing 45211 non-null object 7 loan 45211 non-null object 8 contact 45211 non-null object 9 day 45211 non-null int64 10 month 45211 non-null object 11 duration 45211 non-null int64 12 campaign 45211 non-null int64 13 pdays 45211 non-null int64 14 previous 45211 non-null int64 15 poutcome 45211 non-null object 16 y 45211 non-null object dtypes: int64(7), object(10) memory usage: 5.9+ MB ###Markdown Convert the target variable to integer ###Code df['y'] = df['y'].map({'yes':1,'no':0}) ###Output _____no_output_____ ###Markdown Split full data into train and test data ###Code df_train, df_test = train_test_split(df, test_size=0.1, random_state=0) ###Output _____no_output_____ ###Markdown Get list of numerical features ###Code numerical_feats = list(df_train.drop(columns='y').select_dtypes(include=np.number).columns) ###Output _____no_output_____ ###Markdown Get list of categorical features ###Code categorical_feats = list(df_train.drop(columns='y').select_dtypes(exclude=np.number).columns) ###Output _____no_output_____ ###Markdown Initiate the preprocessors ###Code # Initiate the Normalization Pre-processing for Numerical Features numeric_preprocessor = StandardScaler() # 
Initiate the One-Hot-Encoding Pre-processing for Categorical Features categorical_preprocessor = OneHotEncoder(handle_unknown="ignore") ###Output _____no_output_____ ###Markdown Create the ColumnTransformer Class to delegate each preprocessor to the corresponding features ###Code preprocessor = ColumnTransformer( transformers=[ ("num", numeric_preprocessor, numerical_feats), ("cat", categorical_preprocessor, categorical_feats), ] ) ###Output _____no_output_____ ###Markdown Create a Pipeline of preprocessor and model ###Code pipe = Pipeline( steps=[("preprocessor", preprocessor), ("model", RandomForestClassifier(random_state=0))] ) ###Output _____no_output_____ ###Markdown Get all features from the train data ###Code X_train_full = df_train.drop(columns=['y']) y_train = df_train['y'] X_train_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 40689 entries, 17974 to 2732 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 40689 non-null int64 1 job 40689 non-null object 2 marital 40689 non-null object 3 education 40689 non-null object 4 default 40689 non-null object 5 balance 40689 non-null int64 6 housing 40689 non-null object 7 loan 40689 non-null object 8 contact 40689 non-null object 9 day 40689 non-null int64 10 month 40689 non-null object 11 duration 40689 non-null int64 12 campaign 40689 non-null int64 13 pdays 40689 non-null int64 14 previous 40689 non-null int64 15 poutcome 40689 non-null object dtypes: int64(7), object(9) memory usage: 5.3+ MB ###Markdown Get all features from the test data ###Code X_test_full = df_test.drop(columns=['y']) y_test = df_test['y'] X_test_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 4522 entries, 14001 to 25978 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 4522 non-null int64 1 job 4522 non-null object 2 marital 4522 non-null object 3 education 4522 non-null object 4 default 4522 non-null object 5 balance 4522 non-null int64 6 housing 4522 non-null object 7 loan 4522 non-null object 8 contact 4522 non-null object 9 day 4522 non-null int64 10 month 4522 non-null object 11 duration 4522 non-null int64 12 campaign 4522 non-null int64 13 pdays 4522 non-null int64 14 previous 4522 non-null int64 15 poutcome 4522 non-null object dtypes: int64(7), object(9) memory usage: 600.6+ KB ###Markdown Calculate F1-Score on Test Data without Hyperparameter Tuning ###Code # Fit the pipeline on train data pipe.fit(X_train_full,y_train) # Evaluate on the test data y_pred = pipe.predict(X_test_full) print(f1_score(y_test, y_pred)) ###Output 0.5164319248826291 ###Markdown Define the hyperparameter space ###Code hyperparameter_space = { "model__n_estimators": randint(5, 200), "model__criterion": ["gini", "entropy"], "model__class_weight": ["balanced","balanced_subsample"], "model__min_samples_split": truncnorm(a=0,b=0.5,loc=0.005, scale=0.01), } ###Output _____no_output_____ ###Markdown Perform Random Search ###Code # Initiate the Random Search Class clf = RandomizedSearchCV(pipe, hyperparameter_space, n_iter = 200, random_state = 0, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the Random Search CV clf.fit(X_train_full, y_train) ###Output Fitting 5 folds for each of 200 candidates, totalling 1000 fits ###Markdown Get the best set of hyperparameters along with the average F1-Score from the 5-folds CV ###Code clf.best_params_,clf.best_score_ ###Output _____no_output_____ ###Markdown Get the f1-Score on Test 
Data after fitting the model on the full train data using the best set of hyperparameters ###Code clf.score(X_test_full,y_test) ###Output _____no_output_____ ###Markdown Coarse-to-Fine Search ###Code import pandas as pd import numpy as np from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.ensemble import RandomForestClassifier import scipy from scipy.stats import randint,truncnorm from sklearn.base import clone from sklearn.model_selection import ParameterSampler, cross_val_score class CoarseToFineSearchCV: def __init__(self, estimator, param_distributions, random_iters, top_n_percentile, continuous_hyperparams=[], worse_score = 0, n_iter=10, scoring=None, n_jobs=None, refit=True, cv=None, random_state=0, verbose=0 ): self.estimator = estimator self.param_distributions = param_distributions self.random_iters = random_iters self.top_n_percentile = top_n_percentile self.continuous_hyperparams = continuous_hyperparams self.worse_score = worse_score self.n_iter = n_iter self.scoring = scoring self.n_jobs = n_jobs self.refit = refit self.cv = cv self.random_state = random_state self.verbose = verbose self.best_params_ = {} self.best_score_ = None def fit(self,X,y): new_param_distributions = self.param_distributions.copy() best_params_dict = {'score':self.worse_score,'params':[]} for epoch in range(self.n_iter): if self.verbose >= 2: print("Hyperparameter space") print(new_param_distributions) # List of sampled hyperparameter combinations will be used for random search param_list = list(ParameterSampler(new_param_distributions, n_iter=self.random_iters, random_state=self.random_state)) # Searching the Best Parameters with Random Search rs_results_dict = {} for random_iter in range(min(self.random_iters,len(param_list))): # Get the set of parameter for this iteration strategy_params = param_list[random_iter] estimator = clone(self.estimator).set_params(**strategy_params) results = np.mean(cross_val_score(estimator,X, y, cv=self.cv, scoring=self.scoring, n_jobs=self.n_jobs ) ) rs_results_dict[tuple(strategy_params.values())] = {'score':results} if results >= best_params_dict['score']: best_params_dict['score'] = results best_params_dict['params'] = list(strategy_params.values()) # Save the results in dataframe and sort it based on score param_names = list(strategy_params.keys()) df_rs_results = pd.DataFrame(rs_results_dict).T.reset_index() df_rs_results.columns = param_names + ['score'] df_rs_results = df_rs_results.sort_values(['score'],ascending=False).head(self.n_iter-epoch) # If the best score from this epoch is worse than the best score, # then append the best hyperaparameters combination to this epoch dataframe if df_rs_results['score'].iloc[0] < best_params_dict['score'] and best_params_dict['params']: new_row_dict = {} new_row_dict['score'] = best_params_dict['score'] for idx, key in enumerate(param_names): new_row_dict[key] = best_params_dict['params'][idx] df_rs_results = pd.concat([df_rs_results,pd.DataFrame({0:new_row_dict}).T]).reset_index(drop=True) df_rs_results = df_rs_results.sort_values(['score'],ascending=False).head(self.n_iter-epoch) if self.verbose >= 1: display(df_rs_results) print(df_rs_results.head(1).T.to_dict()) # Get the worse and best hyperparameter combinations percentile_threshold = df_rs_results['score'].quantile(self.top_n_percentile/100) promising_subspace = 
df_rs_results[df_rs_results['score']>=percentile_threshold] df_rs_results_min = promising_subspace.min(axis=0) df_rs_results_max = promising_subspace.max(axis=0) # Generate new hyperparameter space based on current worse and best hyperparameter combinations for key in new_param_distributions: if isinstance(new_param_distributions[key],scipy.stats._distn_infrastructure.rv_frozen): # Currently only support truncnorm and randint distribution # You can add your own distribution here if key in self.continuous_hyperparams: new_param_distributions[key] = truncnorm(a=df_rs_results_min[key],b=df_rs_results_max[key]+1e-6, loc=(0.8*df_rs_results_min[key]+0.2*df_rs_results_max[key]), scale=(0.8*df_rs_results_min[key]+0.2*df_rs_results_max[key])*2) else: new_param_distributions[key] = randint(int(df_rs_results_min[key]), int(df_rs_results_max[key])+1) elif isinstance(new_param_distributions[key][0],str) or isinstance(new_param_distributions[key][0],bool): new_param_distributions[key] = tuple(promising_subspace[key].unique()) elif isinstance(new_param_distributions[key][0],int): new_param_distributions[key] = [i for i in range(int(df_rs_results_min[key]), int(df_rs_results_max[key])+1)] elif isinstance(new_param_distributions[key][0],float): new_param_distributions[key] = list(np.linspace(df_rs_results_min[key], df_rs_results_max[key], len(param_distributions[key]))) else: new_param_distributions[key] = self.param_distributions[key] if self.verbose >= 1: print("="*100) for i, key in enumerate(param_names): self.best_params_[key] = best_params_dict['params'][i] self.best_score_ = best_params_dict['score'] if self.refit: self.estimator = self.estimator.set_params(**self.best_params_) self.estimator.fit(X,y) def predict(self, X): if self.refit: return self.estimator.predict(X) else: print("Estimator is not refitted.") df = pd.read_csv("train.csv",sep=";") df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 45211 entries, 0 to 45210 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 45211 non-null int64 1 job 45211 non-null object 2 marital 45211 non-null object 3 education 45211 non-null object 4 default 45211 non-null object 5 balance 45211 non-null int64 6 housing 45211 non-null object 7 loan 45211 non-null object 8 contact 45211 non-null object 9 day 45211 non-null int64 10 month 45211 non-null object 11 duration 45211 non-null int64 12 campaign 45211 non-null int64 13 pdays 45211 non-null int64 14 previous 45211 non-null int64 15 poutcome 45211 non-null object 16 y 45211 non-null object dtypes: int64(7), object(10) memory usage: 5.9+ MB ###Markdown Convert the target variable to integer ###Code df['y'] = df['y'].map({'yes':1,'no':0}) ###Output _____no_output_____ ###Markdown Split full data into train and test data ###Code df_train, df_test = train_test_split(df, test_size=0.1, random_state=0) ###Output _____no_output_____ ###Markdown Get list of numerical features ###Code numerical_feats = list(df_train.drop(columns='y').select_dtypes(include=np.number).columns) ###Output _____no_output_____ ###Markdown Get list of categorical features ###Code categorical_feats = list(df_train.drop(columns='y').select_dtypes(exclude=np.number).columns) ###Output _____no_output_____ ###Markdown Initiate the preprocessors ###Code # Initiate the Normalization Pre-processing for Numerical Features numeric_preprocessor = StandardScaler() # Initiate the One-Hot-Encoding Pre-processing for Categorical Features categorical_preprocessor = 
OneHotEncoder(handle_unknown="ignore") ###Output _____no_output_____ ###Markdown Create the ColumnTransformer Class to delegate each preprocessor to the corresponding features ###Code preprocessor = ColumnTransformer( transformers=[ ("num", numeric_preprocessor, numerical_feats), ("cat", categorical_preprocessor, categorical_feats), ] ) ###Output _____no_output_____ ###Markdown Create a Pipeline of preprocessor and model ###Code pipe = Pipeline( steps=[("preprocessor", preprocessor), ("model", RandomForestClassifier(random_state=0))] ) ###Output _____no_output_____ ###Markdown Get all features from the train data ###Code X_train_full = df_train.drop(columns=['y']) y_train = df_train['y'] X_train_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 40689 entries, 17974 to 2732 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 40689 non-null int64 1 job 40689 non-null object 2 marital 40689 non-null object 3 education 40689 non-null object 4 default 40689 non-null object 5 balance 40689 non-null int64 6 housing 40689 non-null object 7 loan 40689 non-null object 8 contact 40689 non-null object 9 day 40689 non-null int64 10 month 40689 non-null object 11 duration 40689 non-null int64 12 campaign 40689 non-null int64 13 pdays 40689 non-null int64 14 previous 40689 non-null int64 15 poutcome 40689 non-null object dtypes: int64(7), object(9) memory usage: 5.3+ MB ###Markdown Get all features from the test data ###Code X_test_full = df_test.drop(columns=['y']) y_test = df_test['y'] X_test_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 4522 entries, 14001 to 25978 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 4522 non-null int64 1 job 4522 non-null object 2 marital 4522 non-null object 3 education 4522 non-null object 4 default 4522 non-null object 5 balance 4522 non-null int64 6 housing 4522 non-null object 7 loan 4522 non-null object 8 contact 4522 non-null object 9 day 4522 non-null int64 10 month 4522 non-null object 11 duration 4522 non-null int64 12 campaign 4522 non-null int64 13 pdays 4522 non-null int64 14 previous 4522 non-null int64 15 poutcome 4522 non-null object dtypes: int64(7), object(9) memory usage: 600.6+ KB ###Markdown Calculate F1-Score on Test Data without Hyperparameter Tuning ###Code # Fit the pipeline on train data pipe.fit(X_train_full,y_train) # Evaluate on the test data y_pred = pipe.predict(X_test_full) print(f1_score(y_test, y_pred)) ###Output 0.5164319248826291 ###Markdown Define the hyperparameter space ###Code hyperparameter_space = { "model__n_estimators": randint(5, 200), "model__criterion": ["gini", "entropy"], "model__class_weight": ["balanced","balanced_subsample"], "model__min_samples_split": truncnorm(a=0,b=0.5,loc=0.005, scale=0.01), } # Initiate the CFS Class clf = CoarseToFineSearchCV(pipe, hyperparameter_space, random_iters=25, top_n_percentile=50, n_iter=10, continuous_hyperparams=['model__min_samples_split'], random_state=0, scoring='f1', cv=5, n_jobs=-1, refit=True, verbose=2 ) # Run the CFS CV clf.fit(X_train_full, y_train) ###Output Hyperparameter space {'model__n_estimators': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7f31b80bd290>, 'model__criterion': ['gini', 'entropy'], 'model__class_weight': ['balanced', 'balanced_subsample'], 'model__min_samples_split': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7f31b8223050>} ###Markdown Get the best set of hyperparameters along with 
the average F1-Score from the 5-folds CV ###Code clf.best_params_,clf.best_score_ ###Output _____no_output_____ ###Markdown Get the f1-Score on Test Data after fitting the model on the full train data using the best set of hyperparameters ###Code # Evaluate on the test data y_pred = clf.predict(X_test_full) print(f1_score(y_test, y_pred)) ###Output 0.5612366230677764 ###Markdown Successive Halving ###Code import pandas as pd import numpy as np from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.ensemble import RandomForestClassifier import matplotlib.pyplot as plt from scipy.stats import randint,truncnorm from sklearn.experimental import enable_halving_search_cv from sklearn.model_selection import HalvingRandomSearchCV df = pd.read_csv("train.csv",sep=";") df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 45211 entries, 0 to 45210 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 45211 non-null int64 1 job 45211 non-null object 2 marital 45211 non-null object 3 education 45211 non-null object 4 default 45211 non-null object 5 balance 45211 non-null int64 6 housing 45211 non-null object 7 loan 45211 non-null object 8 contact 45211 non-null object 9 day 45211 non-null int64 10 month 45211 non-null object 11 duration 45211 non-null int64 12 campaign 45211 non-null int64 13 pdays 45211 non-null int64 14 previous 45211 non-null int64 15 poutcome 45211 non-null object 16 y 45211 non-null object dtypes: int64(7), object(10) memory usage: 5.9+ MB ###Markdown Convert the target variable to integer ###Code df['y'] = df['y'].map({'yes':1,'no':0}) ###Output _____no_output_____ ###Markdown Split full data into train and test data ###Code df_train, df_test = train_test_split(df, test_size=0.1, random_state=0) ###Output _____no_output_____ ###Markdown Get list of numerical features ###Code numerical_feats = list(df_train.drop(columns='y').select_dtypes(include=np.number).columns) ###Output _____no_output_____ ###Markdown Get list of categorical features ###Code categorical_feats = list(df_train.drop(columns='y').select_dtypes(exclude=np.number).columns) ###Output _____no_output_____ ###Markdown Initiate the preprocessors ###Code # Initiate the Normalization Pre-processing for Numerical Features numeric_preprocessor = StandardScaler() # Initiate the One-Hot-Encoding Pre-processing for Categorical Features categorical_preprocessor = OneHotEncoder(handle_unknown="ignore") ###Output _____no_output_____ ###Markdown Create the ColumnTransformer Class to delegate each preprocessor to the corresponding features ###Code preprocessor = ColumnTransformer( transformers=[ ("num", numeric_preprocessor, numerical_feats), ("cat", categorical_preprocessor, categorical_feats), ] ) ###Output _____no_output_____ ###Markdown Create a Pipeline of preprocessor and model ###Code pipe = Pipeline( steps=[("preprocessor", preprocessor), ("model", RandomForestClassifier(random_state=0))] ) ###Output _____no_output_____ ###Markdown Get all features from the train data ###Code X_train_full = df_train.drop(columns=['y']) y_train = df_train['y'] X_train_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 40689 entries, 17974 to 2732 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ --------------
----- 0 age 40689 non-null int64 1 job 40689 non-null object 2 marital 40689 non-null object 3 education 40689 non-null object 4 default 40689 non-null object 5 balance 40689 non-null int64 6 housing 40689 non-null object 7 loan 40689 non-null object 8 contact 40689 non-null object 9 day 40689 non-null int64 10 month 40689 non-null object 11 duration 40689 non-null int64 12 campaign 40689 non-null int64 13 pdays 40689 non-null int64 14 previous 40689 non-null int64 15 poutcome 40689 non-null object dtypes: int64(7), object(9) memory usage: 5.3+ MB ###Markdown Get all features from the test data ###Code X_test_full = df_test.drop(columns=['y']) y_test = df_test['y'] X_test_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 4522 entries, 14001 to 25978 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 4522 non-null int64 1 job 4522 non-null object 2 marital 4522 non-null object 3 education 4522 non-null object 4 default 4522 non-null object 5 balance 4522 non-null int64 6 housing 4522 non-null object 7 loan 4522 non-null object 8 contact 4522 non-null object 9 day 4522 non-null int64 10 month 4522 non-null object 11 duration 4522 non-null int64 12 campaign 4522 non-null int64 13 pdays 4522 non-null int64 14 previous 4522 non-null int64 15 poutcome 4522 non-null object dtypes: int64(7), object(9) memory usage: 600.6+ KB ###Markdown Calculate F1-Score on Test Data without Hyperparameter Tuning Define the hyperparameter space ###Code hyperparameter_space = { "model__n_estimators": randint(5, 200), "model__criterion": ["gini", "entropy"], "model__class_weight": ["balanced","balanced_subsample"], "model__min_samples_split": truncnorm(a=0,b=0.5,loc=0.005, scale=0.01), } ###Output _____no_output_____ ###Markdown Perform Succesive Halving with Random Search ###Code # Initiate the SH Class clf = HalvingRandomSearchCV(pipe, hyperparameter_space, factor=3,aggressive_elimination=False, random_state = 0, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the SH CV clf.fit(X_train_full, y_train) ###Output n_iterations: 7 n_required_iterations: 7 n_possible_iterations: 7 min_resources_: 20 max_resources_: 40689 aggressive_elimination: False factor: 3 ---------- iter: 0 n_candidates: 2034 n_resources: 20 Fitting 5 folds for each of 2034 candidates, totalling 10170 fits ---------- iter: 1 n_candidates: 678 n_resources: 60 Fitting 5 folds for each of 678 candidates, totalling 3390 fits ---------- iter: 2 n_candidates: 226 n_resources: 180 Fitting 5 folds for each of 226 candidates, totalling 1130 fits ---------- iter: 3 n_candidates: 76 n_resources: 540 Fitting 5 folds for each of 76 candidates, totalling 380 fits ---------- iter: 4 n_candidates: 26 n_resources: 1620 Fitting 5 folds for each of 26 candidates, totalling 130 fits ---------- iter: 5 n_candidates: 9 n_resources: 4860 Fitting 5 folds for each of 9 candidates, totalling 45 fits ---------- iter: 6 n_candidates: 3 n_resources: 14580 Fitting 5 folds for each of 3 candidates, totalling 15 fits ###Markdown Get the best set of hyperparameters along with the average F1-Score from the 5-folds CV ###Code clf.best_params_,clf.best_score_ ###Output _____no_output_____ ###Markdown Get the f1-Score on Test Data after fitting the model on the full train data using the best set of hyperparameters ###Code clf.score(X_test_full,y_test) results = pd.DataFrame(clf.cv_results_) results["params_str"] = results.params.apply(str) results.drop_duplicates(subset=("params_str", "iter"), 
inplace=True) mean_scores = results.pivot( index="iter", columns="params_str", values="mean_test_score" ) fig, ax = plt.subplots(figsize=(16,16)) ax = mean_scores.plot(legend=False, alpha=0.6, ax=ax) labels = [ f"Iteration {i+1}\nn_samples={clf.n_resources_[i]}\nn_candidates={clf.n_candidates_[i]}" for i in range(clf.n_iterations_) ] ax.set_xticks(range(clf.n_iterations_)) ax.set_xticklabels(labels, rotation=0, multialignment="left",size=16) ax.set_title("F1-Score of Candidates over Iterations",size=20) ax.set_ylabel("5-Folds Cross Validation F1-Score", fontsize=18) ax.set_xlabel("") plt.tight_layout() plt.show() results mean_scores ###Output _____no_output_____ ###Markdown Hyper Band ###Code import pandas as pd import numpy as np from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.ensemble import RandomForestClassifier import matplotlib.pyplot as plt from scipy.stats import randint,truncnorm from hyperband import HyperbandSearchCV df = pd.read_csv("train.csv",sep=";") df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 45211 entries, 0 to 45210 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 45211 non-null int64 1 job 45211 non-null object 2 marital 45211 non-null object 3 education 45211 non-null object 4 default 45211 non-null object 5 balance 45211 non-null int64 6 housing 45211 non-null object 7 loan 45211 non-null object 8 contact 45211 non-null object 9 day 45211 non-null int64 10 month 45211 non-null object 11 duration 45211 non-null int64 12 campaign 45211 non-null int64 13 pdays 45211 non-null int64 14 previous 45211 non-null int64 15 poutcome 45211 non-null object 16 y 45211 non-null object dtypes: int64(7), object(10) memory usage: 5.9+ MB ###Markdown Convert the target variable to integer ###Code df['y'] = df['y'].map({'yes':1,'no':0}) ###Output _____no_output_____ ###Markdown Split full data into train and test data ###Code df_train, df_test = train_test_split(df, test_size=0.1, random_state=0) ###Output _____no_output_____ ###Markdown Get list of numerical features ###Code numerical_feats = list(df_train.drop(columns='y').select_dtypes(include=np.number).columns) ###Output _____no_output_____ ###Markdown Get list of categorical features ###Code categorical_feats = list(df_train.drop(columns='y').select_dtypes(exclude=np.number).columns) ###Output _____no_output_____ ###Markdown Initiate the preprocessors ###Code # Initiate the Normalization Pre-processing for Numerical Features numeric_preprocessor = StandardScaler() # Initiate the One-Hot-Encoding Pre-processing for Categorical Features categorical_preprocessor = OneHotEncoder(handle_unknown="ignore") ###Output _____no_output_____ ###Markdown Create the ColumnTransformer Class to delegate each preprocessor to the corresponding features ###Code preprocessor = ColumnTransformer( transformers=[ ("num", numeric_preprocessor, numerical_feats), ("cat", categorical_preprocessor, categorical_feats), ] ) ###Output _____no_output_____ ###Markdown Create a Pipeline of preprocessor and model ###Code pipe = Pipeline( steps=[("preprocessor", preprocessor), ("model", RandomForestClassifier(random_state=0))] ) ###Output _____no_output_____ ###Markdown Get all features from the train data ###Code X_train_full = df_train.drop(columns=['y']) y_train = df_train['y'] 
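# (Optional sanity check, not in the original notebook: with the 90/10 split
# above, features and labels should line up row for row, e.g.
# assert X_train_full.shape[0] == y_train.shape[0] == len(df_train)  # ~40,689 rows)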
X_train_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 40689 entries, 17974 to 2732 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 40689 non-null int64 1 job 40689 non-null object 2 marital 40689 non-null object 3 education 40689 non-null object 4 default 40689 non-null object 5 balance 40689 non-null int64 6 housing 40689 non-null object 7 loan 40689 non-null object 8 contact 40689 non-null object 9 day 40689 non-null int64 10 month 40689 non-null object 11 duration 40689 non-null int64 12 campaign 40689 non-null int64 13 pdays 40689 non-null int64 14 previous 40689 non-null int64 15 poutcome 40689 non-null object dtypes: int64(7), object(9) memory usage: 5.3+ MB ###Markdown Get all features from the test data ###Code X_test_full = df_test.drop(columns=['y']) y_test = df_test['y'] X_test_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 4522 entries, 14001 to 25978 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 4522 non-null int64 1 job 4522 non-null object 2 marital 4522 non-null object 3 education 4522 non-null object 4 default 4522 non-null object 5 balance 4522 non-null int64 6 housing 4522 non-null object 7 loan 4522 non-null object 8 contact 4522 non-null object 9 day 4522 non-null int64 10 month 4522 non-null object 11 duration 4522 non-null int64 12 campaign 4522 non-null int64 13 pdays 4522 non-null int64 14 previous 4522 non-null int64 15 poutcome 4522 non-null object dtypes: int64(7), object(9) memory usage: 600.6+ KB ###Markdown Calculate F1-Score on Test Data without Hyperparameter Tuning ###Code # Fit the pipeline on train data pipe.fit(X_train_full,y_train) # Evaluate on the test data y_pred = pipe.predict(X_test_full) print(f1_score(y_test, y_pred)) ###Output 0.5164319248826291 ###Markdown Define the hyperparameter space ###Code hyperparameter_space = { "model__criterion": ["gini", "entropy"], "model__class_weight": ["balanced","balanced_subsample"], "model__min_samples_split": truncnorm(a=0,b=0.5,loc=0.005, scale=0.01), } ###Output _____no_output_____ ###Markdown Perform Hyper Band ###Code # Initiate the HB Class clf = HyperbandSearchCV(pipe, hyperparameter_space, resource_param='model__n_estimators', eta=3,min_iter=1,max_iter=100, random_state = 0, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the HB CV clf.fit(X_train_full, y_train) clf.best_params_,clf.best_score_ clf.score(X_test_full,y_test) def del_key(dict_,key): del dict_[key] return dict_ #delete model__n_estimators from params results = pd.DataFrame(clf.cv_results_) results['params'] = results['params'].apply(lambda x: del_key(x,'model__n_estimators')) import warnings warnings.filterwarnings('ignore') for bracket_i, bracket in enumerate(results['hyperband_bracket'].unique()): results_SH = results[results['hyperband_bracket']==bracket] results_SH["params_str"] = results_SH.params.apply(str) results_SH.drop_duplicates(subset=("params_str", "SH_iter"), inplace=True) mean_scores = results_SH.pivot( index="SH_iter", columns="params_str", values="mean_test_score" ) display(mean_scores) fig, ax = plt.subplots(figsize=(16,16)) ax = mean_scores.plot(legend=False, alpha=0.6, marker="*",ax=ax) labels = [ f"Iteration {i+1}\nn_estimators={clf.n_resources_[bracket_i][i]}\nn_candidates={int(clf.n_candidates_[bracket_i][i])}" for i in range(clf.n_trials_[bracket_i]) ] ax.set_xticks(range(clf.n_trials_[bracket_i])) ax.set_xticklabels(labels, rotation=0, 
multialignment="left",size=16) ax.set_title(f"Bracket-{bracket_i+1}\nF1-Score of Candidates over Iterations",size=20) ax.set_ylabel("5-Folds Cross Validation F1-Score", fontsize=18) ax.set_xlabel("") plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Bayesian Optimization Custom `Real` Hyperparameter Space wrapper.`truncnorm` distribution is added here. ###Code from skopt.space import * from scipy.stats import truncnorm class Real(Dimension): """Search space dimension that can take on any real value. Parameters ---------- low : float Lower bound (inclusive). high : float Upper bound (inclusive). prior : "uniform", "log-uniform", or "truncnorm", default="uniform" Distribution to use when sampling random points for this dimension. - If `"uniform"`, points are sampled uniformly between the lower and upper bounds. - If `"log-uniform"`, points are sampled uniformly between `log(lower, base)` and `log(upper, base)` where log has base `base`. - If "truncnorm", points are sample from the truncated normal distribution between the lower and upper bounds, with mean and std equal to loc and scale, respectively. The loc and scale needs to be given through **kwargs. base : int The logarithmic base to use for a log-uniform prior. - Default 10, otherwise commonly 2. transform : "identity", "normalize", optional The following transformations are supported. - "identity", (default) the transformed space is the same as the original space. - "normalize", the transformed space is scaled to be between 0 and 1. name : str or None Name associated with the dimension, e.g., "learning rate". dtype : str or dtype, default=np.float float type which will be used in inverse_transform, can be float. """ def __init__(self, low, high, prior="uniform", base=10, transform=None, name=None, dtype=np.float, **kwargs): if high <= low: raise ValueError("the lower bound {} has to be less than the" " upper bound {}".format(low, high)) self.low = low self.high = high self.prior = prior self.base = base self.log_base = np.log10(base) self.name = name self.dtype = dtype self._rvs = None self.transformer = None self.transform_ = transform self.kwargs = kwargs if isinstance(self.dtype, str) and self.dtype\ not in ['float', 'float16', 'float32', 'float64']: raise ValueError("dtype must be 'float', 'float16', 'float32'" "or 'float64'" " got {}".format(self.dtype)) elif isinstance(self.dtype, type) and self.dtype\ not in [float, np.float, np.float16, np.float32, np.float64]: raise ValueError("dtype must be float, np.float" " got {}".format(self.dtype)) if transform is None: transform = "identity" self.set_transformer(transform) def set_transformer(self, transform="identity"): """Define rvs and transformer spaces. Parameters ---------- transform : str Can be 'normalize' or 'identity' """ self.transform_ = transform if self.transform_ not in ["normalize", "identity"]: raise ValueError("transform should be 'normalize' or 'identity'" " got {}".format(self.transform_)) # XXX: The _rvs is for sampling in the transformed space. # The rvs on Dimension calls inverse_transform on the points sampled # using _rvs if self.transform_ == "normalize": # set upper bound to next float after 1. to make the numbers # inclusive of upper edge self._rvs = _uniform_inclusive(0., 1.) 
if self.prior == "uniform": self.transformer = Pipeline( [Identity(), Normalize(self.low, self.high)]) elif self.prior == "log-uniform": self.transformer = Pipeline( [LogN(self.base), Normalize(np.log10(self.low) / self.log_base, np.log10(self.high) / self.log_base)] ) else: #self.prior == "truncnorm" self.transformer = Pipeline( [Identity(), Normalize(self.low+1e-6, self.high)]) else: if self.prior == "uniform": self._rvs = _uniform_inclusive(self.low, self.high - self.low) self.transformer = Identity() elif self.prior == "log-uniform": self._rvs = _uniform_inclusive( np.log10(self.low) / self.log_base, np.log10(self.high) / self.log_base - np.log10(self.low) / self.log_base) self.transformer = LogN(self.base) else: #self.prior == "truncnorm" self._rvs = truncnorm(a=self.low,b=self.high, loc=self.kwargs.get("loc",(self.low + self.high)/2), scale=self.kwargs.get("scale",(self.low + self.high))) self.transformer = Identity() def __eq__(self, other): return (type(self) is type(other) and np.allclose([self.low], [other.low]) and np.allclose([self.high], [other.high]) and self.prior == other.prior and self.transform_ == other.transform_) def __repr__(self): return "Real(low={}, high={}, prior='{}', transform='{}')".format( self.low, self.high, self.prior, self.transform_) def inverse_transform(self, Xt): """Inverse transform samples from the warped space back into the original space. """ inv_transform = super(Real, self).inverse_transform(Xt) if isinstance(inv_transform, list): inv_transform = np.array(inv_transform) inv_transform = np.clip(inv_transform, self.low, self.high).astype(self.dtype) if self.dtype == float or self.dtype == 'float': # necessary, otherwise the type is converted to a numpy type return getattr(inv_transform, "tolist", lambda: value)() else: return inv_transform @property def bounds(self): return (self.low, self.high) @property def is_constant(self): return self.low == self.high def __contains__(self, point): if isinstance(point, list): point = np.array(point) return self.low <= point <= self.high @property def transformed_bounds(self): if self.transform_ == "normalize": return 0.0, 1.0 else: if self.prior in ["uniform","truncnorm"]: return self.low, self.high else: return np.log10(self.low), np.log10(self.high) def distance(self, a, b): """Compute distance between point `a` and `b`. Parameters ---------- a : float First point. b : float Second point. """ if not (a in self and b in self): raise RuntimeError("Can only compute distance for values within " "the space, not %s and %s." % (a, b)) return abs(a - b) def _uniform_inclusive(loc=0.0, scale=1.0): # like scipy.stats.distributions but inclusive of `high` # XXX scale + 1. might not actually be a float after scale if # XXX scale is very large. 
return uniform(loc=loc, scale=np.nextafter(scale, scale + 1.)) import pandas as pd import numpy as np from sklearn.compose import ColumnTransformer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.pipeline import Pipeline as Sklearn_Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.ensemble import RandomForestClassifier import matplotlib.pyplot as plt from skopt import BayesSearchCV ###Output _____no_output_____ ###Markdown Load Data ###Code df = pd.read_csv("train.csv",sep=";") df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 45211 entries, 0 to 45210 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 45211 non-null int64 1 job 45211 non-null object 2 marital 45211 non-null object 3 education 45211 non-null object 4 default 45211 non-null object 5 balance 45211 non-null int64 6 housing 45211 non-null object 7 loan 45211 non-null object 8 contact 45211 non-null object 9 day 45211 non-null int64 10 month 45211 non-null object 11 duration 45211 non-null int64 12 campaign 45211 non-null int64 13 pdays 45211 non-null int64 14 previous 45211 non-null int64 15 poutcome 45211 non-null object 16 y 45211 non-null object dtypes: int64(7), object(10) memory usage: 5.9+ MB ###Markdown Convert the target variable to integer ###Code df['y'] = df['y'].map({'yes':1,'no':0}) ###Output _____no_output_____ ###Markdown Split full data into train and test data ###Code df_train, df_test = train_test_split(df, test_size=0.1, random_state=0) ###Output _____no_output_____ ###Markdown Get list of numerical features ###Code numerical_feats = list(df_train.drop(columns='y').select_dtypes(include=np.number).columns) ###Output _____no_output_____ ###Markdown Get list of categorical features ###Code categorical_feats = list(df_train.drop(columns='y').select_dtypes(exclude=np.number).columns) ###Output _____no_output_____ ###Markdown Initiate the preprocessors ###Code # Initiate the Normalization Pre-processing for Numerical Features numeric_preprocessor = StandardScaler() # Initiate the One-Hot-Encoding Pre-processing for Categorical Features categorical_preprocessor = OneHotEncoder(handle_unknown="ignore") ###Output _____no_output_____ ###Markdown Create the ColumnTransformer Class to delegate each preprocessor to the corresponding features ###Code preprocessor = ColumnTransformer( transformers=[ ("num", numeric_preprocessor, numerical_feats), ("cat", categorical_preprocessor, categorical_feats), ] ) ###Output _____no_output_____ ###Markdown Create a Pipeline of preprocessor and model ###Code pipe = Sklearn_Pipeline( steps=[("preprocessor", preprocessor), ("model", RandomForestClassifier(random_state=0))] ) ###Output _____no_output_____ ###Markdown Get all features from the train data ###Code X_train_full = df_train.drop(columns=['y']) y_train = df_train['y'] X_train_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 40689 entries, 17974 to 2732 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 40689 non-null int64 1 job 40689 non-null object 2 marital 40689 non-null object 3 education 40689 non-null object 4 default 40689 non-null object 5 balance 40689 non-null int64 6 housing 40689 non-null object 7 loan 40689 non-null object 8 contact 40689 non-null object 9 day 40689 non-null int64 10 month 40689 non-null object 11 duration 40689 non-null int64 12 campaign 40689 non-null int64 13 pdays 
40689 non-null int64 14 previous 40689 non-null int64 15 poutcome 40689 non-null object dtypes: int64(7), object(9) memory usage: 5.3+ MB ###Markdown Get all features from the test data ###Code X_test_full = df_test.drop(columns=['y']) y_test = df_test['y'] X_test_full.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 4522 entries, 14001 to 25978 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 age 4522 non-null int64 1 job 4522 non-null object 2 marital 4522 non-null object 3 education 4522 non-null object 4 default 4522 non-null object 5 balance 4522 non-null int64 6 housing 4522 non-null object 7 loan 4522 non-null object 8 contact 4522 non-null object 9 day 4522 non-null int64 10 month 4522 non-null object 11 duration 4522 non-null int64 12 campaign 4522 non-null int64 13 pdays 4522 non-null int64 14 previous 4522 non-null int64 15 poutcome 4522 non-null object dtypes: int64(7), object(9) memory usage: 600.6+ KB ###Markdown Calculate F1-Score on Test Data without Hyperparameter Tuning ###Code # Fit the pipeline on train data pipe.fit(X_train_full,y_train) # Evaluate on the test data y_pred = pipe.predict(X_test_full) print(f1_score(y_test, y_pred)) ###Output 0.5164319248826291 ###Markdown Define the hyperparameter space ###Code hyperparameter_space = { "model__n_estimators": Integer(low=5, high=200), "model__criterion": Categorical(["gini", "entropy"]), "model__class_weight": Categorical(["balanced","balanced_subsample"]), "model__min_samples_split": Real(low=0,high=0.5,prior="truncnorm", **{"loc":0.005,"scale":0.01}) } ###Output _____no_output_____ ###Markdown BOGP Perform Bayesian Optimization Gaussian Process ###Code # Initiate the BOGP Class clf = BayesSearchCV(pipe, hyperparameter_space, n_iter=50, optimizer_kwargs={"base_estimator":"GP", "n_initial_points":10, "initial_point_generator":"random", "acq_func":"EI", "acq_optimizer":"auto", "n_jobs":-1, "random_state":0, "acq_func_kwargs": {"xi":0.01} }, random_state = 0, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the BOGP CV clf.fit(X_train_full, y_train) clf.best_params_,clf.best_score_ clf.score(X_test_full,y_test) ###Output _____no_output_____ ###Markdown BORF Perform Bayesian Optimization Random Forest ###Code # Initiate the BORF Class clf = BayesSearchCV(pipe, hyperparameter_space, n_iter=50, optimizer_kwargs={"base_estimator":"RF", "n_initial_points":10, "initial_point_generator":"random", "acq_func":"LCB", "acq_optimizer":"auto", "n_jobs":-1, "random_state":0, "acq_func_kwargs": {"kappa":1.96} }, random_state = 0, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the BORF CV clf.fit(X_train_full, y_train) clf.best_params_,clf.best_score_ clf.score(X_test_full,y_test) ###Output _____no_output_____ ###Markdown BOGBRT Perform Bayesian Optimization Gradient Boosted Trees ###Code # Initiate the BOGBRT Class clf = BayesSearchCV(pipe, hyperparameter_space, n_iter=50, optimizer_kwargs={"base_estimator":"GBRT", "n_initial_points":10, "initial_point_generator":"random", "acq_func":"LCB", "acq_optimizer":"auto", "n_jobs":-1, "random_state":0, "acq_func_kwargs": {"kappa":1.96} }, random_state = 0, scoring = 'f1', cv=5, n_jobs=-1, refit = True, verbose=2) # Run the BOGBRT CV clf.fit(X_train_full, y_train) clf.best_params_,clf.best_score_ clf.score(X_test_full,y_test) ###Output _____no_output_____
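###Markdown (Optional) Side-by-side comparison of the tuning strategies. This is a sketch rather than part of the original experiments: it assumes the fitted search objects from the sections above were kept under separate, hypothetical names (`rs_clf`, `cfs_clf`, `sh_clf`, `hb_clf`, `bo_clf`) instead of each run overwriting `clf`, and that every search exposes `best_score_` and a refitted `predict`, as the searches used above do. ###Code
# Hypothetical handles to the fitted searches from the sections above.
searches = {
    "RandomizedSearchCV": rs_clf,
    "CoarseToFineSearchCV": cfs_clf,
    "HalvingRandomSearchCV": sh_clf,
    "HyperbandSearchCV": hb_clf,
    "BayesSearchCV (GP)": bo_clf,
}

# Collect the 5-fold CV F1-Score and the held-out test F1-Score for each tuner.
summary = pd.DataFrame({
    name: {"cv_f1": search.best_score_,
           "test_f1": f1_score(y_test, search.predict(X_test_full))}
    for name, search in searches.items()
}).T

summary.sort_values("test_f1", ascending=False) ###Output _____no_output_____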
notebooks/ml/cc/exercises/5_binary_classification.ipynb
###Markdown ###Code #@title Copyright 2020 Google LLC. Double-click here for license information. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Binary ClassificationSo far, you've only created regression models. That is, you created models that produced floating-point predictions, such as, "houses in this neighborhood costs N thousand dollars." In this Colab, you'll create and evaluate a binary [classification model](https://developers.google.com/machine-learning/glossary/classification_model). That is, you'll create a model that answers a binary question. In this exercise, the binary question will be, "Are houses in this neighborhood above a certain price?" Learning Objectives:After doing this Colab, you'll know how to: * Convert a regression question into a classification question. * Modify the classification threshold and determine how that modification influences the model. * Experiment with different classification metrics to determine your model's effectiveness. The Dataset Like several of the previous Colabs, this Colab uses the [California Housing Dataset](https://developers.google.com/machine-learning/crash-course/california-housing-data-description). Use the right version of TensorFlow The following hidden code cell ensures that the Colab will run on TensorFlow 2.X. ###Code #@title Run on TensorFlow 2.x %tensorflow_version 2.x ###Output _____no_output_____ ###Markdown Call the import statementsThe following code imports the necessary modules. ###Code #@title Load the imports # from __future__ import absolute_import, division, print_function, unicode_literals import numpy as np import pandas as pd import tensorflow as tf from tensorflow.keras import layers from matplotlib import pyplot as plt # The following lines adjust the granularity of reporting. pd.options.display.max_rows = 10 pd.options.display.float_format = "{:.1f}".format # tf.keras.backend.set_floatx('float32') print("Ran the import statements.") ###Output _____no_output_____ ###Markdown Load the datasets from the internetThe following code cell loads the separate .csv files and creates the following two pandas DataFrames:* `train_df`, which contains the training set* `test_df`, which contains the test set ###Code train_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv") test_df = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv") train_df = train_df.reindex(np.random.permutation(train_df.index)) # shuffle the training set ###Output _____no_output_____ ###Markdown Unlike some of the previous Colabs, the preceding code cell did not scale the label (`median_house_value`). The following section ("Normalize values") provides an alternative approach. Normalize valuesWhen creating a model with multiple features, the values of each feature should cover roughly the same range. 
For example, if one feature's range spans 500 to 100,000 and another feature's range spans 2 to 12, then the model will be difficult or impossible to train. Therefore, you should [normalize](https://developers.google.com/machine-learning/glossary/normalization) features in a multi-feature model. The following code cell normalizes datasets by converting each raw value (including the label) to its Z-score. A **Z-score** is the number of standard deviations from the mean for a particular raw value. For example, consider a feature having the following characteristics: * The mean is 60. * The standard deviation is 10.The raw value 75 would have a Z-score of +1.5:``` Z-score = (75 - 60) / 10 = +1.5```The raw value 38 would have a Z-score of -2.2:``` Z-score = (38 - 60) / 10 = -2.2``` ###Code # Calculate the Z-scores of each column in the training set and # write those Z-scores into a new pandas DataFrame named train_df_norm. train_df_mean = train_df.mean() train_df_std = train_df.std() train_df_norm = (train_df - train_df_mean)/train_df_std # Examine some of the values of the normalized training set. Notice that most # Z-scores fall between -2 and +2. train_df_norm.head() # Calculate the Z-scores of each column in the test set and # write those Z-scores into a new pandas DataFrame named test_df_norm. test_df_mean = test_df.mean() test_df_std = test_df.std() test_df_norm = (test_df - test_df_mean)/test_df_std ###Output _____no_output_____ ###Markdown Task 1: Create a binary labelIn classification problems, the label for every example must be either 0 or 1. Unfortunately, the natural label in the California Housing Dataset, `median_house_value`, contains floating-point values like 80,100 or 85,700 rather than 0s and 1s, while the normalized version of `median_house_values` contains floating-point values primarily between -3 and +3.Your task is to create a new column named `median_house_value_is_high` in both the training set and the test set . If the `median_house_value` is higher than a certain arbitrary value (defined by `threshold`), then set `median_house_value_is_high` to 1. Otherwise, set `median_house_value_is_high` to 0. **Hint:** The cells in the `median_house_value_is_high` column must each hold `1` and `0`, not `True` and `False`. To convert `True` and `False` to `1` and `0`, call the pandas DataFrame function `astype(float)`. ###Code threshold = 265000 # This is the 75th percentile for median house values. train_df_norm["median_house_value_is_high"] = (train_df["median_house_value"] > threshold).astype(float) test_df_norm["median_house_value_is_high"] = (test_df["median_house_value"] > threshold).astype(float) # Print out a few example cells from the beginning and # middle of the training set, just to make sure that # your code created only 0s and 1s in the newly created # median_house_value_is_high column train_df_norm["median_house_value_is_high"].head(8000) #@title Double-click for possible solutions. # We arbitrarily set the threshold to 265,000, which is # the 75th percentile for median house values. Every neighborhood # with a median house price above 265,000 will be labeled 1, # and all other neighborhoods will be labeled 0. 
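# (A quick optional check that 265,000 really is about the 75th percentile:
# train_df["median_house_value"].quantile(0.75) should come out close to 265,000.)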
threshold = 265000 train_df_norm["median_house_value_is_high"] = (train_df["median_house_value"] > threshold).astype(float) test_df_norm["median_house_value_is_high"] = (test_df["median_house_value"] > threshold).astype(float) train_df_norm["median_house_value_is_high"].head(8000) # Alternatively, instead of picking the threshold # based on raw house values, you can work with Z-scores. # For example, the following possible solution uses a Z-score # of +1.0 as the threshold, meaning that no more # than 16% of the values in median_house_value_is_high # will be labeled 1. # threshold_in_Z = 1.0 # train_df_norm["median_house_value_is_high"] = (train_df_norm["median_house_value"] > threshold_in_Z).astype(float) # test_df_norm["median_house_value_is_high"] = (test_df_norm["median_house_value"] > threshold_in_Z).astype(float) ###Output _____no_output_____ ###Markdown Represent features in feature columnsThis code cell specifies the features that you'll ultimately train the model on and how each of those features will be represented. The transformations (collected in `feature_layer`) don't actually get applied until you pass a DataFrame to it, which will happen when we train the model. ###Code # Create an empty list that will eventually hold all created feature columns. feature_columns = [] # Create a numerical feature column to represent median_income. median_income = tf.feature_column.numeric_column("median_income") feature_columns.append(median_income) # Create a numerical feature column to represent total_rooms. tr = tf.feature_column.numeric_column("total_rooms") feature_columns.append(tr) # Convert the list of feature columns into a layer that will later be fed into # the model. feature_layer = layers.DenseFeatures(feature_columns) # Print the first 3 and last 3 rows of the feature_layer's output when applied # to train_df_norm: feature_layer(dict(train_df_norm)) ###Output _____no_output_____ ###Markdown Define functions that build and train a modelThe following code cell defines two functions: * `create_model(my_learning_rate, feature_layer, my_metrics)`, which defines the model's topography. * `train_model(model, dataset, epochs, label_name, batch_size, shuffle)`, uses input features and labels to train the model.Prior exercises used [ReLU](https://developers.google.com/machine-learning/glossaryReLU) as the [activation function](https://developers.google.com/machine-learning/glossaryactivation_function). By contrast, this exercise uses [sigmoid](https://developers.google.com/machine-learning/glossarysigmoid_function) as the activation function. ###Code #@title Define the functions that create and train a model. def create_model(my_learning_rate, feature_layer, my_metrics): """Create and compile a simple classification model.""" # Most simple tf.keras models are sequential. model = tf.keras.models.Sequential() # Add the feature layer (the list of features and how they are represented) # to the model. model.add(feature_layer) # Funnel the regression value through a sigmoid function. model.add(tf.keras.layers.Dense(units=1, input_shape=(1,), activation=tf.sigmoid),) # Call the compile method to construct the layers into a model that # TensorFlow can execute. Notice that we're using a different loss # function for classification than for regression. 
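# Because the single output unit above already applies a sigmoid, the model
# emits a probability in [0, 1], which is what BinaryCrossentropy expects
# with its default from_logits=False.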
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=my_learning_rate), loss=tf.keras.losses.BinaryCrossentropy(), metrics=my_metrics) return model def train_model(model, dataset, epochs, label_name, batch_size=None, shuffle=True): """Feed a dataset into the model in order to train it.""" # The x parameter of tf.keras.Model.fit can be a list of arrays, where # each array contains the data for one feature. Here, we're passing # every column in the dataset. Note that the feature_layer will filter # away most of those columns, leaving only the desired columns and their # representations as features. features = {name:np.array(value) for name, value in dataset.items()} label = np.array(features.pop(label_name)) history = model.fit(x=features, y=label, batch_size=batch_size, epochs=epochs, shuffle=shuffle) # The list of epochs is stored separately from the rest of history. epochs = history.epoch # Isolate the classification metric for each epoch. hist = pd.DataFrame(history.history) return epochs, hist print("Defined the create_model and train_model functions.") ###Output _____no_output_____ ###Markdown Define a plotting functionThe following [matplotlib](https://developers.google.com/machine-learning/glossary/matplotlib) function plots one or more curves, showing how various classification metrics change with each epoch. ###Code #@title Define the plotting function. def plot_curve(epochs, hist, list_of_metrics): """Plot a curve of one or more classification metrics vs. epoch.""" # list_of_metrics should be one of the names shown in: # https://www.tensorflow.org/tutorials/structured_data/imbalanced_data#define_the_model_and_metrics plt.figure() plt.xlabel("Epoch") plt.ylabel("Value") for m in list_of_metrics: x = hist[m] plt.plot(epochs[1:], x[1:], label=m) plt.legend() print("Defined the plot_curve function.") ###Output _____no_output_____ ###Markdown Invoke the creating, training, and plotting functionsThe following code cell calls specify the hyperparameters, and then invokes the functions to create and train the model, and then to plot the results. ###Code # The following variables are the hyperparameters. learning_rate = 0.001 epochs = 20 batch_size = 100 label_name = "median_house_value_is_high" classification_threshold = 0.35 # Establish the metrics the model will measure. METRICS = [ tf.keras.metrics.BinaryAccuracy(name='accuracy', threshold=classification_threshold), ] # Establish the model's topography. my_model = create_model(learning_rate, feature_layer, METRICS) # Train the model on the training set. epochs, hist = train_model(my_model, train_df_norm, epochs, label_name, batch_size) # Plot a graph of the metric(s) vs. epochs. list_of_metrics_to_plot = ['accuracy'] plot_curve(epochs, hist, list_of_metrics_to_plot) ###Output _____no_output_____ ###Markdown Accuracy should gradually improve during training (until it can improve no more). Evaluate the model against the test setAt the end of model training, you ended up with a certain accuracy against the *training set*. Invoke the following code cell to determine your model's accuracy against the *test set*. ###Code features = {name:np.array(value) for name, value in test_df_norm.items()} label = np.array(features.pop(label_name)) my_model.evaluate(x = features, y = label, batch_size=batch_size) ###Output _____no_output_____ ###Markdown Task 2: How accurate is your model really?Is your model valuable? ###Code #@title Double-click for a possible answer to Task 2. # A perfect model would make 100% accurate predictions. 
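# (Recall from Task 1 that the threshold was set at the 75th percentile, so
# only about a quarter of the examples are labeled 1 -- the classes are imbalanced.)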
# Our model makes 80% accurate predictions. 80% sounds # good, but note that a model that always guesses # "median_house_value_is_high is False" would be 75% # accurate. ###Output _____no_output_____ ###Markdown Task 3: Add precision and recall as metricsRelying solely on accuracy, particularly for a class-imbalanced data set (like ours), can be a poor way to judge a classification model. Modify the code in the following code cell to enable the model to measure not only accuracy but also precision and recall. We haveadded accuracy and precision; your task is to add recall. See the [TensorFlow Reference](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Recall) for details. ###Code # The following variables are the hyperparameters. learning_rate = 0.001 epochs = 20 batch_size = 100 classification_threshold = 0.5 label_name = "median_house_value_is_high" # Modify the following definition of METRICS to generate # not only accuracy and precision, but also recall: METRICS = [ tf.keras.metrics.BinaryAccuracy(name='accuracy', threshold=classification_threshold), tf.keras.metrics.Precision(thresholds=classification_threshold, name='precision' ), tf.keras.metrics.Recall(thresholds=classification_threshold, name='recall' ) ] # Establish the model's topography. my_model = create_model(learning_rate, feature_layer, METRICS) # Train the model on the training set. epochs, hist = train_model(my_model, train_df_norm, epochs, label_name, batch_size) # Plot metrics vs. epochs list_of_metrics_to_plot = ['accuracy', 'precision', 'recall'] plot_curve(epochs, hist, list_of_metrics_to_plot) #@title Double-click to view the solution for Task 3. # The following variables are the hyperparameters. learning_rate = 0.001 epochs = 20 batch_size = 100 classification_threshold = 0.35 label_name = "median_house_value_is_high" # Here is the updated definition of METRICS: METRICS = [ tf.keras.metrics.BinaryAccuracy(name='accuracy', threshold=classification_threshold), tf.keras.metrics.Precision(thresholds=classification_threshold, name='precision' ), tf.keras.metrics.Recall(thresholds=classification_threshold, name="recall"), ] # Establish the model's topography. my_model = create_model(learning_rate, feature_layer, METRICS) # Train the model on the training set. epochs, hist = train_model(my_model, train_df_norm, epochs, label_name, batch_size) # Plot metrics vs. epochs list_of_metrics_to_plot = ['accuracy', "precision", "recall"] plot_curve(epochs, hist, list_of_metrics_to_plot) # The new graphs suggest that precision and recall are # somewhat in conflict. That is, improvements to one of # those metrics may hurt the other metric. ###Output _____no_output_____ ###Markdown Task 4: Experiment with the classification threshold (if time permits)Experiment with different values for `classification_threshold` in the code cell within "Invoke the creating, training, and plotting functions." What value of `classification_threshold` produces the highest accuracy? ###Code #@title Double-click to view the solution for Task 4. # The following variables are the hyperparameters. 
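# (Relative to the Task 3 solution, only classification_threshold changes here;
# the learning rate, epochs, and batch size are left as before.)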
learning_rate = 0.001 epochs = 20 batch_size = 100 classification_threshold = 0.52 label_name = "median_house_value_is_high" # Here is the updated definition of METRICS: METRICS = [ tf.keras.metrics.BinaryAccuracy(name='accuracy', threshold=classification_threshold), tf.keras.metrics.Precision(thresholds=classification_threshold, name='precision' ), tf.keras.metrics.Recall(thresholds=classification_threshold, name="recall"), ] # Establish the model's topography. my_model = create_model(learning_rate, feature_layer, METRICS) # Train the model on the training set. epochs, hist = train_model(my_model, train_df_norm, epochs, label_name, batch_size) # Plot metrics vs. epochs list_of_metrics_to_plot = ['accuracy', "precision", "recall"] plot_curve(epochs, hist, list_of_metrics_to_plot) # A `classification_threshold` of slightly over 0.5 # appears to produce the highest accuracy (about 83%). # Raising the `classification_threshold` to 0.9 drops # accuracy by about 5%. Lowering the # `classification_threshold` to 0.3 drops accuracy by # about 3%. ###Output _____no_output_____ ###Markdown Task 5: Summarize model performance (if time permits)If time permits, add one more metric that attempts to summarize the model's overall performance. ###Code #@title Double-click to view the solution for Task 5. # The following variables are the hyperparameters. learning_rate = 0.001 epochs = 20 batch_size = 100 label_name = "median_house_value_is_high" # AUC is a reasonable "summary" metric for # classification models. # Here is the updated definition of METRICS to # measure AUC: METRICS = [ tf.keras.metrics.AUC(num_thresholds=100, name='auc'), ] # Establish the model's topography. my_model = create_model(learning_rate, feature_layer, METRICS) # Train the model on the training set. epochs, hist = train_model(my_model, train_df_norm, epochs, label_name, batch_size) # Plot metrics vs. epochs list_of_metrics_to_plot = ['auc'] plot_curve(epochs, hist, list_of_metrics_to_plot) ###Output _____no_output_____
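###Markdown As an optional follow-up (not part of the original exercise), the AUC model trained in Task 5 can be checked against the test set with the same evaluation pattern used earlier for accuracy: ###Code
# Re-evaluate the most recently trained model (the Task 5 AUC model) on the test set.
features = {name: np.array(value) for name, value in test_df_norm.items()}
label = np.array(features.pop(label_name))
my_model.evaluate(x=features, y=label, batch_size=batch_size) ###Output _____no_output_____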
Balanced_1293_91%.ipynb
###Markdown Retrieve the database -> input matrices ###Code
import pandas as pd
ROWS=1293
# read the data from the csv file and put it into the ndarrays X and Y (thanks to to_numpy(); otherwise they would be dataframes)
# they therefore have a shape of (ROWS, 1)
X= pd.read_csv("sentence_1293.csv", encoding = "utf-8",sep=";",usecols = ["sen_sentence"],nrows=ROWS).to_numpy()
Y= pd.read_csv("sentence_1293.csv", encoding = "utf-8",sep=";",usecols = ["sen_en_full"],nrows=ROWS).to_numpy().reshape(-1,)
import pprint
pprint.pprint(Y.dtype)
import os # needed for os.environ below; not imported in the cells shown
import pprint # allows pretty, line-by-line printing, the equivalent of Java's println()
from nltk.tag import StanfordPOSTagger # library used for POS tagging
jar = './stanford-postagger-full-2018-10-16/stanford-postagger.jar' # path to the downloaded tagger
model = './stanford-postagger-full-2018-10-16/models/french.tagger' # French model shipped with the tagger
java_path = "C:/Program Files/Java/jre1.8.0_211/bin/java.exe" # make sure the path to Java is correct
os.environ['JAVAHOME'] = java_path # environment variable
pos_tagger = StanfordPOSTagger(model, jar, encoding='utf-8' ) # ready to be used
# function that turns a sentence into a row vector (length 28) of inputs: one binary indicator per POS tag
def senToTags(sentence):
    sen_tags=[m[1] for m in pos_tagger.tag(sentence)] # all the tags present in the sentence
    # dictionary of the possible tags (28)
    tags=['P', 'N', 'ADJ', 'NC', 'DET', 'NPP', 'V', 'VPP', 'ADV', 'PROREL', 'CLS', 'VINF', 'CC', 'PUNC', 'PRO', 'ET', 'CS', 'CLR', 'CLO', 'VPR', 'ADVWH', 'C', 'VIMP', 'CL', 'VS', 'PROWH', 'ADJWH', 'PREF']
    result=list()
    for tag in tags: # for each tag in the dictionary
        if tag in sen_tags: # check whether it is present in the sentence
            result.append(1)
        else:
            result.append(0)
    return result
print("example: senToTags(\" Les Americains ont réagi en six semaines là où il a fallu neuf mois aux Francais.\") ---> ",senToTags("Les Americains ont réagi en six semaines là où il a fallu neuf mois aux Francais."))
print(senToTags("Je"))
print(senToTags("suis"))
print(senToTags("beau"))
print(senToTags("Je suis beau"))
import time
start = time.time() # record the exact time to measure the execution time
X_tmp=list() # initialise a list
for i in range(len(X)):
    X_tmp+=senToTags(X[i]) # apply the function from the previous cell to every sentence, i.e. to every row of X
end = time.time()
print("time execution : ",end - start)
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
import numpy as np # numpy was not imported in the cells shown
X_tmp=np.asarray(X_tmp).reshape(-1,28) # reshape with 28 columns, whatever the number of rows
# simple split of X_tmp into X_train and X_test according to the test_size ratio; same thing for Y
# random_state=2 only fixes the randomness after a first draw, so that re-running the cell does not cause problems
X_train, X_test, Y_train, Y_test = train_test_split(X_tmp, Y, test_size=0.1, random_state=2)
print("Before OverSampling, counts of label '1': {}".format(sum(Y_train==1))) # number of labels equal to 1 before balancing
print("Before OverSampling, counts of label '0': {} \n".format(sum(Y_train==0))) # number of labels equal to 0 before balancing
sm = SMOTE(random_state=2) # initialise the over-sampler, which creates synthetic samples of the minority class
X_train, Y_train = sm.fit_sample(X_train, Y_train) # to equalise the label counts
print('After OverSampling, the shape of train_X: {}'.format(X_train.shape))
print('After OverSampling, the shape of train_y: {} \n'.format(Y_train.shape))
print("After OverSampling, counts of label '1': {}".format(sum(Y_train == 1)))
print("After OverSampling, counts of label '0': {}".format(sum(Y_train == 0)))
###Output Before OverSampling, counts of label '1': 610 Before OverSampling, counts of label '0': 553 After OverSampling, the shape of train_X: (1220, 28) After OverSampling, the shape of train_y: (1220,) After OverSampling, counts of label '1': 610 After OverSampling, counts of label '0': 610
###Markdown Forward Neural Network Functions for the neural network ###Code
# neuron layer; we could also have used tf.layers.dense(), which takes the same parameters, creates a fully connected layer and takes care of creating the necessary variables with an appropriate initialization strategy
def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name): # name scope, optional, makes the graph easier to inspect
        n_inputs = int(X.shape[1]) # number of inputs = number of columns of X, so 28
        # these 3 lines create the variable W that holds the matrix of connection weights between every input and every neuron
        stddev = 2 / np.sqrt(n_inputs) # optional: truncating with this standard deviation helps the algorithm converge faster; the larger stddev is, the larger the spread
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev) # n_inputs x n_neurons matrix of values close to 0, truncated with stddev
        W = tf.Variable(init,name="kernel") # random weights
        b = tf.Variable(tf.zeros([n_neurons]), name="bias") # bias, initialised to 0
        Z = tf.matmul(X, W) + b # z = X.W + b
        if activation is not None: # go through an activation function if one is given
            return activation(Z)
        else:
            return Z
# returns a Y whose second dimension = max value + 1, so here y_one_hot will have shape (any, 2)
# see the example in a later cell
def to_one_hot(y):
    n_classes = y.max() + 1 # max + 1; in our case this is necessarily 2
    m = len(y)
    Y_one_hot = np.zeros((m, n_classes)) # matrix of shape (m, n_classes) filled with 0
    Y_one_hot[np.arange(m), y] = 1 # put a 1 in column y[i] of row i, etc.; see the example below
    return Y_one_hot
# Homogenisation: splits X and Y into batches of size batch_size, drawn at random from X and Y. This avoids
# computing only on one "particular" slice of the data (such as the Rihanna interview) and not on the rest; this way the batches are homogeneous
def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X)) # array of length len(X) containing a random permutation of the indices
    n_batches = len(X) // batch_size # number of batches
    for batch_idx in np.array_split(rnd_idx, n_batches): # np.array_split(X, n) splits X into n parts (arrays)
        X_batch, y_batch = X[batch_idx], y[batch_idx] # declare our batches
        yield X_batch, y_batch # yield is like a return except that it makes the function a generator, which saves memory: it does not rerun the whole function at each call but produces the value on the fly and then advances the generator; a link (in French) that explains this further: https://www.journaldunet.fr/web-tech/developpement/1202863-a-quoi-sert-le-mot-cle-yield-en-python/
###Output _____no_output_____
###Markdown Examples of the functions used ###Code
# y_one_hot example
y=np.asarray([1,0,1,1,0,0])
print("y_shape = ",y.shape)
y1=to_one_hot(y)
print("y1_shape = ",y1.shape) # for each element of y with value i, we put a 1 in column i and 0 everywhere else on that row
print("y1 =" ,y1)
# truncated_normal example
import tensorflow as tf
with tf.Session() as sess:
    p=tf.truncated_normal((3,2),stddev=0.2).eval()
    print(p)
# argmax example
a = np.arange(6).reshape(2,3) + 10
print("a =",a)
print("axis=0 ->",np.argmax(a,axis=0)) # axis=0 takes, for each column, the index of the max
print("axis=1 ->",np.argmax(a,axis=1)) # axis=1 takes, for each row, the index of the max
###Output a = [[10 11 12] [13 14 15]] axis=0 -> [1 1 1] axis=1 -> [2 2]
###Markdown 28_18_11_2 network, 0.01 learning rate, 75 epochs, 26 batches -> 91% ###Code
import tensorflow as tf
import sklearn.metrics # submodule import needed for the sklearn.metrics calls below
n_inputs = 28 # funnel effect; a few links on how to choose
n_hidden1 = 18 # the number of neurons per layer: https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
n_hidden2 = 11 # https://www.researchgate.net/post/How_to_decide_the_number_of_hidden_layers_and_nodes_in_a_hidden_layer
n_outputs = 2 # 2 classes, 2 outputs
def reset_graph(seed=42): # helper not defined in the cells shown; assumed to reset the TF1 default graph and fix the seeds for reproducibility
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") # lets us declare X without feeding its data right away
y = tf.placeholder(tf.float32, shape=(None), name="y") # None => any
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1", # takes our data (sentences) as input, since X will be fed X_train through the feed_dict
                          activation=tf.nn.relu) # and outputs the number of neurons of the second layer
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2", # takes the previous layer as input, and so on
                          activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs") # last layer, so it returns the values output by the network
learning_rate =0.01
# the network is ready; we define the cost function, and for that we use cross-entropy
xentropy = tf.keras.backend.binary_crossentropy(y,logits) # penalises low probabilities; we use the binary version since this is binary classification
loss = tf.reduce_mean(xentropy) # the entropy is computed from the logits and expects the labels in y; reduce_mean gives the average cross-entropy over all instances
# adjust the model parameters to minimise the cost function
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
# how we evaluate the model, here the accuracy: we check whether the prediction is correct by checking whether the highest logit corresponds to the right label
correct = tf.nn.in_top_k(logits, tf.argmax(y, 1), 1) # see the example in a later cell for this function
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) # mean, so the overall accuracy of the model
init = tf.global_variables_initializer() # node that initialises all the variables
saver = tf.train.Saver() # to save the model
n_epochs = 75
batch_size = 26
# execution phase
with tf.Session() as sess:
    init.run() # run the init node
    for epoch in range(n_epochs): # at each epoch, we iterate over the mini-batches that make up X_train and evaluate at the end of the inner loop
        for X_batch, y_batch in shuffle_batch(X_train, Y_train, batch_size): # for each mini-batch drawn homogeneously from X_train, Y_train
            _, cost, corr, acc = sess.run([training_op, loss, correct, accuracy], feed_dict={X: X_train, y: to_one_hot(Y_train)}) # run the training operation plus a few extra nodes to get a better idea of what is going on (cf. the print below); note that the full training set, not the mini-batch, is fed here
        acc_batch = accuracy.eval(feed_dict={X: X_batch, y: to_one_hot(y_batch)}) # at the end of each epoch, the code evaluates the accuracy of the model on the last mini-batch (since this is outside the inner loop)
        print(epoch,'Batch accuracy: {} Loss: {} Train_Accuracy: {}'.format(acc_batch,cost, acc))
    save_path = saver.save(sess, "./my_model_final.ckpt")
# the model is trained; we test it
with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt") # restore the model
    Z = logits.eval(feed_dict={X: X_test}) # evaluate the output layer on X_test
    y_pred = np.argmax(Z, axis=1) # returns the index of the max of each row (axis=1) of Z, i.e. the highest-scoring class
print("Predicted classes:",y_pred)
print("Actual classes: ", Y_test)
print("accuracy_score : ",sklearn.metrics.accuracy_score(Y_test, y_pred, normalize=True ,sample_weight=None))
print("matthews_corrcoef : ",sklearn.metrics.matthews_corrcoef(Y_test, y_pred, sample_weight=None))
print("F1_score : ",sklearn.metrics.f1_score(Y_test, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None))
###Output WARNING:tensorflow:From <ipython-input-13-bb52d814efda>:15: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.dense instead. WARNING:tensorflow:From /home/yaniv/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
WARNING:tensorflow:From /home/yaniv/.local/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. 0 Batch accuracy: 0.5384615659713745 Loss: 0.6516870260238647 Train_Accuracy: 0.6213114857673645 1 Batch accuracy: 0.7307692170143127 Loss: 0.5982598066329956 Train_Accuracy: 0.6975409984588623 2 Batch accuracy: 0.7307692170143127 Loss: 0.5531545281410217 Train_Accuracy: 0.7442622780799866 3 Batch accuracy: 0.7692307829856873 Loss: 0.5144656300544739 Train_Accuracy: 0.7795081734657288 4 Batch accuracy: 0.807692289352417 Loss: 0.48532843589782715 Train_Accuracy: 0.8049180507659912 5 Batch accuracy: 0.8461538553237915 Loss: 0.46316108107566833 Train_Accuracy: 0.8204917907714844 6 Batch accuracy: 0.9615384340286255 Loss: 0.4460398256778717 Train_Accuracy: 0.8262295126914978 7 Batch accuracy: 0.7692307829856873 Loss: 0.4316900670528412 Train_Accuracy: 0.8393442630767822 8 Batch accuracy: 0.8846153616905212 Loss: 0.41968193650245667 Train_Accuracy: 0.8434426188468933 9 Batch accuracy: 0.7307692170143127 Loss: 0.4097774922847748 Train_Accuracy: 0.8434426188468933 10 Batch accuracy: 0.8846153616905212 Loss: 0.4014088213443756 Train_Accuracy: 0.8475409746170044 11 Batch accuracy: 0.8461538553237915 Loss: 0.39415931701660156 Train_Accuracy: 0.8508196473121643 12 Batch accuracy: 0.8461538553237915 Loss: 0.38784223794937134 Train_Accuracy: 0.8500000238418579 13 Batch accuracy: 0.9615384340286255 Loss: 0.3829469084739685 Train_Accuracy: 0.854098379611969 14 Batch accuracy: 0.9615384340286255 Loss: 0.37892138957977295 Train_Accuracy: 0.8532786965370178 15 Batch accuracy: 0.7692307829856873 Loss: 0.37563422322273254 Train_Accuracy: 0.8549180030822754 16 Batch accuracy: 0.807692289352417 Loss: 0.3728090226650238 Train_Accuracy: 0.8557376861572266 17 Batch accuracy: 0.7692307829856873 Loss: 0.37032026052474976 Train_Accuracy: 0.8573770523071289 18 Batch accuracy: 0.9230769276618958 Loss: 0.36815041303634644 Train_Accuracy: 0.8573770523071289 19 Batch accuracy: 0.8846153616905212 Loss: 0.3662404417991638 Train_Accuracy: 0.8573770523071289 20 Batch accuracy: 0.9230769276618958 Loss: 0.3645246624946594 Train_Accuracy: 0.8573770523071289 21 Batch accuracy: 0.7692307829856873 Loss: 0.36291059851646423 Train_Accuracy: 0.8573770523071289 22 Batch accuracy: 0.9615384340286255 Loss: 0.3614101707935333 Train_Accuracy: 0.8573770523071289 23 Batch accuracy: 0.8461538553237915 Loss: 0.3599766194820404 Train_Accuracy: 0.8573770523071289 24 Batch accuracy: 0.9230769276618958 Loss: 0.3585716485977173 Train_Accuracy: 0.8573770523071289 25 Batch accuracy: 0.8846153616905212 Loss: 0.357231467962265 Train_Accuracy: 0.8573770523071289 26 Batch accuracy: 0.9230769276618958 Loss: 0.3559054136276245 Train_Accuracy: 0.8573770523071289 27 Batch accuracy: 0.9230769276618958 Loss: 0.35465604066848755 Train_Accuracy: 0.8573770523071289 28 Batch accuracy: 0.7692307829856873 Loss: 0.35346275568008423 Train_Accuracy: 0.8573770523071289 29 Batch accuracy: 0.9230769276618958 Loss: 0.3523232340812683 Train_Accuracy: 0.8565573692321777 30 Batch accuracy: 0.7307692170143127 Loss: 0.3512333333492279 Train_Accuracy: 0.8565573692321777 31 Batch accuracy: 0.9230769276618958 Loss: 0.35024890303611755 Train_Accuracy: 0.8565573692321777 32 Batch accuracy: 0.8461538553237915 Loss: 0.34933650493621826 Train_Accuracy: 0.8565573692321777 33 Batch accuracy: 0.9615384340286255 Loss: 
0.3484630286693573 Train_Accuracy: 0.8573770523071289 34 Batch accuracy: 0.8461538553237915 Loss: 0.34762173891067505 Train_Accuracy: 0.8581967353820801 35 Batch accuracy: 0.9230769276618958 Loss: 0.3467845618724823 Train_Accuracy: 0.8573770523071289 36 Batch accuracy: 0.807692289352417 Loss: 0.34598836302757263 Train_Accuracy: 0.8573770523071289 37 Batch accuracy: 0.9230769276618958 Loss: 0.3451789319515228 Train_Accuracy: 0.8581967353820801 38 Batch accuracy: 0.8846153616905212 Loss: 0.3443928062915802 Train_Accuracy: 0.8581967353820801 39 Batch accuracy: 0.8461538553237915 Loss: 0.3436150550842285 Train_Accuracy: 0.8581967353820801 40 Batch accuracy: 0.9615384340286255 Loss: 0.34286874532699585 Train_Accuracy: 0.8573770523071289 41 Batch accuracy: 0.8461538553237915 Loss: 0.34215161204338074 Train_Accuracy: 0.8565573692321777 42 Batch accuracy: 0.807692289352417 Loss: 0.3414512872695923 Train_Accuracy: 0.8573770523071289 43 Batch accuracy: 0.7692307829856873 Loss: 0.3407735228538513 Train_Accuracy: 0.8573770523071289 44 Batch accuracy: 0.8846153616905212 Loss: 0.3401103913784027 Train_Accuracy: 0.8573770523071289 45 Batch accuracy: 1.0 Loss: 0.33940014243125916 Train_Accuracy: 0.8573770523071289 46 Batch accuracy: 0.9615384340286255 Loss: 0.33866095542907715 Train_Accuracy: 0.8581967353820801 47 Batch accuracy: 0.9230769276618958 Loss: 0.33796143531799316 Train_Accuracy: 0.8598360419273376 48 Batch accuracy: 0.8461538553237915 Loss: 0.33728814125061035 Train_Accuracy: 0.8598360419273376 49 Batch accuracy: 0.9230769276618958 Loss: 0.3366228938102722 Train_Accuracy: 0.8598360419273376 50 Batch accuracy: 0.8461538553237915 Loss: 0.33598586916923523 Train_Accuracy: 0.8598360419273376 51 Batch accuracy: 0.8461538553237915 Loss: 0.3353661000728607 Train_Accuracy: 0.8598360419273376 52 Batch accuracy: 0.807692289352417 Loss: 0.33475616574287415 Train_Accuracy: 0.8598360419273376 53 Batch accuracy: 0.8846153616905212 Loss: 0.3341434597969055 Train_Accuracy: 0.8598360419273376 54 Batch accuracy: 0.7692307829856873 Loss: 0.3335452675819397 Train_Accuracy: 0.8606557250022888 55 Batch accuracy: 0.8846153616905212 Loss: 0.3329615294933319 Train_Accuracy: 0.8606557250022888 56 Batch accuracy: 0.9230769276618958 Loss: 0.33240196108818054 Train_Accuracy: 0.8606557250022888 57 Batch accuracy: 0.7692307829856873 Loss: 0.33185797929763794 Train_Accuracy: 0.8606557250022888 58 Batch accuracy: 0.807692289352417 Loss: 0.33132413029670715 Train_Accuracy: 0.8606557250022888 59 Batch accuracy: 0.807692289352417 Loss: 0.33079054951667786 Train_Accuracy: 0.8606557250022888 60 Batch accuracy: 0.8461538553237915 Loss: 0.3302666246891022 Train_Accuracy: 0.8606557250022888 61 Batch accuracy: 0.9230769276618958 Loss: 0.3297377824783325 Train_Accuracy: 0.8606557250022888 62 Batch accuracy: 0.9230769276618958 Loss: 0.32918140292167664 Train_Accuracy: 0.86147540807724 63 Batch accuracy: 0.9230769276618958 Loss: 0.3286372423171997 Train_Accuracy: 0.86147540807724 64 Batch accuracy: 0.807692289352417 Loss: 0.3281061053276062 Train_Accuracy: 0.86147540807724 65 Batch accuracy: 0.8461538553237915 Loss: 0.3275797367095947 Train_Accuracy: 0.86147540807724 66 Batch accuracy: 0.9230769276618958 Loss: 0.32706424593925476 Train_Accuracy: 0.86147540807724 67 Batch accuracy: 0.8461538553237915 Loss: 0.3265504837036133 Train_Accuracy: 0.86147540807724 68 Batch accuracy: 0.8846153616905212 Loss: 0.32602864503860474 Train_Accuracy: 0.8622950911521912 69 Batch accuracy: 0.8846153616905212 Loss: 0.3255085051059723 Train_Accuracy: 
0.8631147742271423 70 Batch accuracy: 0.8846153616905212 Loss: 0.3249756991863251 Train_Accuracy: 0.8631147742271423 71 Batch accuracy: 0.807692289352417 Loss: 0.3244386911392212 Train_Accuracy: 0.8639343976974487 72 Batch accuracy: 0.8846153616905212 Loss: 0.32391226291656494 Train_Accuracy: 0.8647540807723999 73 Batch accuracy: 0.807692289352417 Loss: 0.32338666915893555 Train_Accuracy: 0.8663934469223022 74 Batch accuracy: 0.7307692170143127 Loss: 0.3228589594364166 Train_Accuracy: 0.8655737638473511 WARNING:tensorflow:From /home/yaniv/.local/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version. Instructions for updating: Use standard file APIs to check for files with this prefix.
###Markdown Example of tf.nn.in_top_k(logits, y, k) ###Code
# logits is a float32 tensor of shape (batch_size, n_classes)
# y is an int32 tensor of shape (batch_size): the class ids
# k is the number of top elements to consider (precision)
# returns an array of booleans
test = tf.nn.in_top_k([[0,1], [1,0], [0,1], [1, 0], [0, 1]], [0, 1, 1, 1, 1], 1) # logits_shape=(5,2) and y_shape=(5,)
with tf.Session() as sess:
    print("returns the boolean saying whether the max of logits[i] is indeed found at column y[i]")
    print(" ","here, is the max of logits[0] = [0,1] at column y[0]=0 ? --> False")
    print(" ","here, is the max of logits[2] = [0,1] at column y[2]=1 ? --> True")
    print()
    print(test.eval())
###Output returns the boolean saying whether the max of logits[i] is indeed found at column y[i] here, is the max of logits[0] = [0,1] at column y[0]=0 ? --> False here, is the max of logits[2] = [0,1] at column y[2]=1 ? --> True [False False True False True]
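###Markdown An equivalent way to build the 28-column tag-presence matrix, sketched here only as an optional alternative (it is not part of the notebook above): scikit-learn's MultiLabelBinarizer performs the binarisation in one call. The `TAGS` constant and the `sentences_to_tag_matrix` helper below are illustrative names, and the `tagger` argument is assumed to be the StanfordPOSTagger instance created earlier. ###Code
# Sketch, not from the original notebook: same 0/1 tag-presence encoding as senToTags,
# but vectorised with scikit-learn. Column order is fixed by the `classes` argument.
from sklearn.preprocessing import MultiLabelBinarizer

TAGS = ['P', 'N', 'ADJ', 'NC', 'DET', 'NPP', 'V', 'VPP', 'ADV', 'PROREL', 'CLS', 'VINF', 'CC', 'PUNC',
        'PRO', 'ET', 'CS', 'CLR', 'CLO', 'VPR', 'ADVWH', 'C', 'VIMP', 'CL', 'VS', 'PROWH', 'ADJWH', 'PREF']

def sentences_to_tag_matrix(sentences, tagger):
    # one set of tags per sentence; `tagger` is assumed to be the StanfordPOSTagger defined above,
    # and simple whitespace tokenization is used for this sketch
    tag_sets = [{tag for _, tag in tagger.tag(sentence.split()) if tag in TAGS} for sentence in sentences]
    mlb = MultiLabelBinarizer(classes=TAGS)
    return mlb.fit_transform(tag_sets)  # shape (n_sentences, 28), same encoding as senToTags
###Output _____no_output_____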
Tutorials/TensorFlow_V1/notebooks/3_NeuralNetworks/convolutional_network_raw.ipynb
###Markdown CNN Overview![CNN](http://personal.ie.cuhk.edu.hk/~ccloy/project_target_code/images/fig3.png) MNIST Dataset OverviewThis example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).![MNIST Dataset](http://neuralnetworksanddeeplearning.com/images/mnist_100_digits.png)More info: http://yann.lecun.com/exdb/mnist/ ###Code from __future__ import division, print_function, absolute_import import tensorflow as tf # Import MNIST data from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data/", one_hot=True) # Training Parameters learning_rate = 0.001 num_steps = 500 batch_size = 128 display_step = 10 # Network Parameters num_input = 784 # MNIST data input (img shape: 28*28) num_classes = 10 # MNIST total classes (0-9 digits) dropout = 0.75 # Dropout, probability to keep units # tf Graph input X = tf.placeholder(tf.float32, [None, num_input]) Y = tf.placeholder(tf.float32, [None, num_classes]) keep_prob = tf.placeholder(tf.float32) # dropout (keep probability) # Create some wrappers for simplicity def conv2d(x, W, b, strides=1): # Conv2D wrapper, with bias and relu activation x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME') x = tf.nn.bias_add(x, b) return tf.nn.relu(x) def maxpool2d(x, k=2): # MaxPool2D wrapper return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME') # Create model def conv_net(x, weights, biases, dropout): # MNIST data input is a 1-D vector of 784 features (28*28 pixels) # Reshape to match picture format [Height x Width x Channel] # Tensor input become 4-D: [Batch Size, Height, Width, Channel] x = tf.reshape(x, shape=[-1, 28, 28, 1]) # Convolution Layer conv1 = conv2d(x, weights['wc1'], biases['bc1']) # Max Pooling (down-sampling) conv1 = maxpool2d(conv1, k=2) # Convolution Layer conv2 = conv2d(conv1, weights['wc2'], biases['bc2']) # Max Pooling (down-sampling) conv2 = maxpool2d(conv2, k=2) # Fully connected layer # Reshape conv2 output to fit fully connected layer input fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]]) fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1']) fc1 = tf.nn.relu(fc1) # Apply Dropout fc1 = tf.nn.dropout(fc1, dropout) # Output, class prediction out = tf.add(tf.matmul(fc1, weights['out']), biases['out']) return out # Store layers weight & bias weights = { # 5x5 conv, 1 input, 32 outputs 'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])), # 5x5 conv, 32 inputs, 64 outputs 'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])), # fully connected, 7*7*64 inputs, 1024 outputs 'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])), # 1024 inputs, 10 outputs (class prediction) 'out': tf.Variable(tf.random_normal([1024, num_classes])) } biases = { 'bc1': tf.Variable(tf.random_normal([32])), 'bc2': tf.Variable(tf.random_normal([64])), 'bd1': tf.Variable(tf.random_normal([1024])), 'out': tf.Variable(tf.random_normal([num_classes])) } # Construct model logits = conv_net(X, weights, biases, keep_prob) prediction = tf.nn.softmax(logits) # Define loss and optimizer loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( logits=logits, labels=Y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) train_op = optimizer.minimize(loss_op) # 
Evaluate model correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # Initialize the variables (i.e. assign their default value) init = tf.global_variables_initializer() # Start training with tf.Session() as sess: # Run the initializer sess.run(init) for step in range(1, num_steps+1): batch_x, batch_y = mnist.train.next_batch(batch_size) # Run optimization op (backprop) sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: dropout}) if step % display_step == 0 or step == 1: # Calculate batch loss and accuracy loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x, Y: batch_y, keep_prob: 1.0}) print("Step " + str(step) + ", Minibatch Loss= " + \ "{:.4f}".format(loss) + ", Training Accuracy= " + \ "{:.3f}".format(acc)) print("Optimization Finished!") # Calculate accuracy for 256 MNIST test images print("Testing Accuracy:", \ sess.run(accuracy, feed_dict={X: mnist.test.images[:256], Y: mnist.test.labels[:256], keep_prob: 1.0})) ###Output Step 1, Minibatch Loss= 75040.5312, Training Accuracy= 0.086 Step 10, Minibatch Loss= 36096.9727, Training Accuracy= 0.164 Step 20, Minibatch Loss= 9777.1387, Training Accuracy= 0.539 Step 30, Minibatch Loss= 6564.4692, Training Accuracy= 0.641 Step 40, Minibatch Loss= 6579.7979, Training Accuracy= 0.562 Step 50, Minibatch Loss= 1912.1987, Training Accuracy= 0.852 Step 60, Minibatch Loss= 3849.7532, Training Accuracy= 0.820 Step 70, Minibatch Loss= 2257.2659, Training Accuracy= 0.867 Step 80, Minibatch Loss= 2672.9355, Training Accuracy= 0.781 Step 90, Minibatch Loss= 1670.8557, Training Accuracy= 0.875 Step 100, Minibatch Loss= 3973.2122, Training Accuracy= 0.789 Step 110, Minibatch Loss= 1513.4589, Training Accuracy= 0.867 Step 120, Minibatch Loss= 1369.3561, Training Accuracy= 0.914 Step 130, Minibatch Loss= 2125.4482, Training Accuracy= 0.883 Step 140, Minibatch Loss= 2434.8276, Training Accuracy= 0.859 Step 150, Minibatch Loss= 844.7146, Training Accuracy= 0.914 Step 160, Minibatch Loss= 1470.3861, Training Accuracy= 0.922 Step 170, Minibatch Loss= 1912.4985, Training Accuracy= 0.898 Step 180, Minibatch Loss= 1339.4073, Training Accuracy= 0.891 Step 190, Minibatch Loss= 1549.8510, Training Accuracy= 0.898 Step 200, Minibatch Loss= 660.0526, Training Accuracy= 0.930 Step 210, Minibatch Loss= 135.7387, Training Accuracy= 0.969 Step 220, Minibatch Loss= 925.0186, Training Accuracy= 0.922 Step 230, Minibatch Loss= 1258.3237, Training Accuracy= 0.930 Step 240, Minibatch Loss= 1714.0659, Training Accuracy= 0.883 Step 250, Minibatch Loss= 1375.0530, Training Accuracy= 0.938 Step 260, Minibatch Loss= 1600.6301, Training Accuracy= 0.875 Step 270, Minibatch Loss= 1869.0861, Training Accuracy= 0.875 Step 280, Minibatch Loss= 691.9367, Training Accuracy= 0.953 Step 290, Minibatch Loss= 370.6295, Training Accuracy= 0.945 Step 300, Minibatch Loss= 1590.4878, Training Accuracy= 0.914 Step 310, Minibatch Loss= 695.4811, Training Accuracy= 0.945 Step 320, Minibatch Loss= 1570.3514, Training Accuracy= 0.898 Step 330, Minibatch Loss= 655.1699, Training Accuracy= 0.938 Step 340, Minibatch Loss= 1245.4441, Training Accuracy= 0.906 Step 350, Minibatch Loss= 510.7900, Training Accuracy= 0.953 Step 360, Minibatch Loss= 805.4855, Training Accuracy= 0.938 Step 370, Minibatch Loss= 851.2961, Training Accuracy= 0.938 Step 380, Minibatch Loss= 693.1071, Training Accuracy= 0.945 Step 390, Minibatch Loss= 799.9010, Training Accuracy= 0.961 Step 400, Minibatch Loss= 677.4307, Training 
Accuracy= 0.930 Step 410, Minibatch Loss= 834.4326, Training Accuracy= 0.930 Step 420, Minibatch Loss= 996.1141, Training Accuracy= 0.930 Step 430, Minibatch Loss= 201.1711, Training Accuracy= 0.953 Step 440, Minibatch Loss= 642.8739, Training Accuracy= 0.930 Step 450, Minibatch Loss= 583.2195, Training Accuracy= 0.938 Step 460, Minibatch Loss= 400.6084, Training Accuracy= 0.969 Step 470, Minibatch Loss= 193.7009, Training Accuracy= 0.984 Step 480, Minibatch Loss= 737.9506, Training Accuracy= 0.914 Step 490, Minibatch Loss= 217.6613, Training Accuracy= 0.977 Step 500, Minibatch Loss= 224.9438, Training Accuracy= 0.969 Optimization Finished! Testing Accuracy: 0.95703125
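###Markdown As a side note on the shapes used above, here is a minimal sketch (not part of the original notebook) that verifies why the first fully connected weight matrix 'wd1' expects 7*7*64 inputs: with 'SAME' padding, each stride-2 max-pool halves the spatial size (rounding up), while the stride-1 convolutions leave it unchanged. ###Code
# Illustrative check of the 7*7*64 input size of the fully connected layer.
import math

def same_pool_size(n, k=2):
    # output spatial size of a 'SAME'-padded pooling with stride k
    return math.ceil(n / k)

side = 28                    # MNIST images are 28x28
side = same_pool_size(side)  # after the first max-pool -> 14
side = same_pool_size(side)  # after the second max-pool -> 7
channels = 64                # output channels of the second conv layer ('wc2')
print(side, side * side * channels)  # 7 3136, i.e. 7*7*64
###Output _____no_output_____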
assets/pytorch-seq2seq/3 - Neural Machine Translation by Jointly Learning to Align and Translate.ipynb
###Markdown 3 - Neural Machine Translation by Jointly Learning to Align and TranslateIn this third notebook on sequence-to-sequence models using PyTorch and TorchText, we'll be implementing the model from [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473). This model achives our best perplexity yet, ~27 compared to ~34 for the previous model. IntroductionAs a reminder, here is the general encoder-decoder model:![](assets/seq2seq1.png)In the previous model, our architecture was set-up in a way to reduce "information compression" by explicitly passing the context vector, $z$, to the decoder at every time-step and by passing both the context vector and embedded input word, $d(y_t)$, along with the hidden state, $s_t$, to the linear layer, $f$, to make a prediction.![](assets/seq2seq7.png)Even though we have reduced some of this compression, our context vector still needs to contain all of the information about the source sentence. The model implemented in this notebook avoids this compression by allowing the decoder to look at the entire source sentence (via its hidden states) at each decoding step! How does it do this? It uses *attention*. Attention works by first, calculating an attention vector, $a$, that is the length of the source sentence. The attention vector has the property that each element is between 0 and 1, and the entire vector sums to 1. We then calculate a weighted sum of our source sentence hidden states, $H$, to get a weighted source vector, $w$. $$w = \sum_{i}a_ih_i$$We calculate a new weighted source vector every time-step when decoding, using it as input to our decoder RNN as well as the linear layer to make a prediction. We'll explain how to do all of this during the tutorial. Preparing DataAgain, the preparation is similar to last time.First we import all the required modules. ###Code import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torchtext.legacy.datasets import Multi30k from torchtext.legacy.data import Field, BucketIterator import spacy import numpy as np import random import math import time ###Output _____no_output_____ ###Markdown Set the random seeds for reproducability. ###Code SEED = 1234 random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) torch.backends.cudnn.deterministic = True ###Output _____no_output_____ ###Markdown Load the German and English spaCy models. ###Code spacy_de = spacy.load('de_core_news_sm') spacy_en = spacy.load('en_core_web_sm') ###Output _____no_output_____ ###Markdown We create the tokenizers. ###Code def tokenize_de(text): """ Tokenizes German text from a string into a list of strings """ return [tok.text for tok in spacy_de.tokenizer(text)] def tokenize_en(text): """ Tokenizes English text from a string into a list of strings """ return [tok.text for tok in spacy_en.tokenizer(text)] ###Output _____no_output_____ ###Markdown The fields remain the same as before. ###Code SRC = Field(tokenize = tokenize_de, init_token = '<sos>', eos_token = '<eos>', lower = True) TRG = Field(tokenize = tokenize_en, init_token = '<sos>', eos_token = '<eos>', lower = True) ###Output _____no_output_____ ###Markdown Load the data. ###Code train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'), fields = (SRC, TRG)) ###Output _____no_output_____ ###Markdown Build the vocabulary. 
###Code SRC.build_vocab(train_data, min_freq = 2) TRG.build_vocab(train_data, min_freq = 2) ###Output _____no_output_____ ###Markdown Define the device. ###Code device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print(device) ###Output _____no_output_____ ###Markdown Create the iterators. ###Code BATCH_SIZE = 64 train_iterator, valid_iterator, test_iterator = BucketIterator.splits( (train_data, valid_data, test_data), batch_size = BATCH_SIZE, device = device) ###Output _____no_output_____ ###Markdown Building the Seq2Seq Model EncoderFirst, we'll build the encoder. Similar to the previous model, we only use a single layer GRU, however we now use a *bidirectional RNN*. With a bidirectional RNN, we have two RNNs in each layer. A *forward RNN* going over the embedded sentence from left to right (shown below in green), and a *backward RNN* going over the embedded sentence from right to left (teal). All we need to do in code is set `bidirectional = True` and then pass the embedded sentence to the RNN as before. ![](assets/seq2seq8.png)We now have:$$\begin{align*}h_t^\rightarrow &= \text{EncoderGRU}^\rightarrow(e(x_t^\rightarrow),h_{t-1}^\rightarrow)\\h_t^\leftarrow &= \text{EncoderGRU}^\leftarrow(e(x_t^\leftarrow),h_{t-1}^\leftarrow)\end{align*}$$Where $x_0^\rightarrow = \text{}, x_1^\rightarrow = \text{guten}$ and $x_0^\leftarrow = \text{}, x_1^\leftarrow = \text{morgen}$.As before, we only pass an input (`embedded`) to the RNN, which tells PyTorch to initialize both the forward and backward initial hidden states ($h_0^\rightarrow$ and $h_0^\leftarrow$, respectively) to a tensor of all zeros. We'll also get two context vectors, one from the forward RNN after it has seen the final word in the sentence, $z^\rightarrow=h_T^\rightarrow$, and one from the backward RNN after it has seen the first word in the sentence, $z^\leftarrow=h_T^\leftarrow$.The RNN returns `outputs` and `hidden`. `outputs` is of size **[src len, batch size, hid dim * num directions]** where the first `hid_dim` elements in the third axis are the hidden states from the top layer forward RNN, and the last `hid_dim` elements are hidden states from the top layer backward RNN. We can think of the third axis as being the forward and backward hidden states concatenated together other, i.e. $h_1 = [h_1^\rightarrow; h_{T}^\leftarrow]$, $h_2 = [h_2^\rightarrow; h_{T-1}^\leftarrow]$ and we can denote all encoder hidden states (forward and backwards concatenated together) as $H=\{ h_1, h_2, ..., h_T\}$.`hidden` is of size **[n layers * num directions, batch size, hid dim]**, where **[-2, :, :]** gives the top layer forward RNN hidden state after the final time-step (i.e. after it has seen the last word in the sentence) and **[-1, :, :]** gives the top layer backward RNN hidden state after the final time-step (i.e. after it has seen the first word in the sentence).As the decoder is not bidirectional, it only needs a single context vector, $z$, to use as its initial hidden state, $s_0$, and we currently have two, a forward and a backward one ($z^\rightarrow=h_T^\rightarrow$ and $z^\leftarrow=h_T^\leftarrow$, respectively). We solve this by concatenating the two context vectors together, passing them through a linear layer, $g$, and applying the $\tanh$ activation function. $$z=\tanh(g(h_T^\rightarrow, h_T^\leftarrow)) = \tanh(g(z^\rightarrow, z^\leftarrow)) = s_0$$**Note**: this is actually a deviation from the paper. 
Instead, they feed only the first backward RNN hidden state through a linear layer to get the context vector/decoder initial hidden state. This doesn't seem to make sense to me, so we have changed it.As we want our model to look back over the whole of the source sentence we return `outputs`, the stacked forward and backward hidden states for every token in the source sentence. We also return `hidden`, which acts as our initial hidden state in the decoder. ###Code class Encoder(nn.Module): def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout): super().__init__() self.embedding = nn.Embedding(input_dim, emb_dim) self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True) self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim) self.dropout = nn.Dropout(dropout) def forward(self, src): #src = [src len, batch size] embedded = self.dropout(self.embedding(src)) #embedded = [src len, batch size, emb dim] outputs, hidden = self.rnn(embedded) #outputs = [src len, batch size, hid dim * num directions] #hidden = [n layers * num directions, batch size, hid dim] #hidden is stacked [forward_1, backward_1, forward_2, backward_2, ...] #outputs are always from the last layer #hidden [-2, :, : ] is the last of the forwards RNN #hidden [-1, :, : ] is the last of the backwards RNN #initial decoder hidden is final hidden state of the forwards and backwards # encoder RNNs fed through a linear layer hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))) #outputs = [src len, batch size, enc hid dim * 2] #hidden = [batch size, dec hid dim] return outputs, hidden ###Output _____no_output_____ ###Markdown AttentionNext up is the attention layer. This will take in the previous hidden state of the decoder, $s_{t-1}$, and all of the stacked forward and backward hidden states from the encoder, $H$. The layer will output an attention vector, $a_t$, that is the length of the source sentence, each element is between 0 and 1 and the entire vector sums to 1.Intuitively, this layer takes what we have decoded so far, $s_{t-1}$, and all of what we have encoded, $H$, to produce a vector, $a_t$, that represents which words in the source sentence we should pay the most attention to in order to correctly predict the next word to decode, $\hat{y}_{t+1}$. First, we calculate the *energy* between the previous decoder hidden state and the encoder hidden states. As our encoder hidden states are a sequence of $T$ tensors, and our previous decoder hidden state is a single tensor, the first thing we do is `repeat` the previous decoder hidden state $T$ times. We then calculate the energy, $E_t$, between them by concatenating them together and passing them through a linear layer (`attn`) and a $\tanh$ activation function. $$E_t = \tanh(\text{attn}(s_{t-1}, H))$$ This can be thought of as calculating how well each encoder hidden state "matches" the previous decoder hidden state.We currently have a **[dec hid dim, src len]** tensor for each example in the batch. We want this to be **[src len]** for each example in the batch as the attention should be over the length of the source sentence. This is achieved by multiplying the `energy` by a **[1, dec hid dim]** tensor, $v$.$$\hat{a}_t = v E_t$$We can think of $v$ as the weights for a weighted sum of the energy across all encoder hidden states. These weights tell us how much we should attend to each token in the source sequence. The parameters of $v$ are initialized randomly, but learned with the rest of the model via backpropagation. 
Note how $v$ is not dependent on time, and the same $v$ is used for each time-step of the decoding. We implement $v$ as a linear layer without a bias.Finally, we ensure the attention vector fits the constraints of having all elements between 0 and 1 and the vector summing to 1 by passing it through a $\text{softmax}$ layer.$$a_t = \text{softmax}(\hat{a_t})$$This gives us the attention over the source sentence!Graphically, this looks something like below. This is for calculating the very first attention vector, where $s_{t-1} = s_0 = z$. The green/teal blocks represent the hidden states from both the forward and backward RNNs, and the attention computation is all done within the pink block.![](assets/seq2seq9.png) ###Code class Attention(nn.Module): def __init__(self, enc_hid_dim, dec_hid_dim): super().__init__() self.attn = nn.Linear((enc_hid_dim * 2) + dec_hid_dim, dec_hid_dim) self.v = nn.Linear(dec_hid_dim, 1, bias = False) def forward(self, hidden, encoder_outputs): #hidden = [batch size, dec hid dim] #encoder_outputs = [src len, batch size, enc hid dim * 2] batch_size = encoder_outputs.shape[1] src_len = encoder_outputs.shape[0] #repeat decoder hidden state src_len times hidden = hidden.unsqueeze(1).repeat(1, src_len, 1) encoder_outputs = encoder_outputs.permute(1, 0, 2) #hidden = [batch size, src len, dec hid dim] #encoder_outputs = [batch size, src len, enc hid dim * 2] energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim = 2))) #energy = [batch size, src len, dec hid dim] attention = self.v(energy).squeeze(2) #attention= [batch size, src len] return F.softmax(attention, dim=1) ###Output _____no_output_____ ###Markdown DecoderNext up is the decoder. The decoder contains the attention layer, `attention`, which takes the previous hidden state, $s_{t-1}$, all of the encoder hidden states, $H$, and returns the attention vector, $a_t$.We then use this attention vector to create a weighted source vector, $w_t$, denoted by `weighted`, which is a weighted sum of the encoder hidden states, $H$, using $a_t$ as the weights.$$w_t = a_t H$$The embedded input word, $d(y_t)$, the weighted source vector, $w_t$, and the previous decoder hidden state, $s_{t-1}$, are then all passed into the decoder RNN, with $d(y_t)$ and $w_t$ being concatenated together.$$s_t = \text{DecoderGRU}(d(y_t), w_t, s_{t-1})$$We then pass $d(y_t)$, $w_t$ and $s_t$ through the linear layer, $f$, to make a prediction of the next word in the target sentence, $\hat{y}_{t+1}$. This is done by concatenating them all together.$$\hat{y}_{t+1} = f(d(y_t), w_t, s_t)$$The image below shows decoding the first word in an example translation.![](assets/seq2seq10.png)The green/teal blocks show the forward/backward encoder RNNs which output $H$, the red block shows the context vector, $z = h_T = \tanh(g(h^\rightarrow_T,h^\leftarrow_T)) = \tanh(g(z^\rightarrow, z^\leftarrow)) = s_0$, the blue block shows the decoder RNN which outputs $s_t$, the purple block shows the linear layer, $f$, which outputs $\hat{y}_{t+1}$ and the orange block shows the calculation of the weighted sum over $H$ by $a_t$ and outputs $w_t$. Not shown is the calculation of $a_t$. 
###Code class Decoder(nn.Module): def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, dropout, attention): super().__init__() self.output_dim = output_dim self.attention = attention self.embedding = nn.Embedding(output_dim, emb_dim) self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim) self.fc_out = nn.Linear((enc_hid_dim * 2) + dec_hid_dim + emb_dim, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, input, hidden, encoder_outputs): #input = [batch size] #hidden = [batch size, dec hid dim] #encoder_outputs = [src len, batch size, enc hid dim * 2] input = input.unsqueeze(0) #input = [1, batch size] embedded = self.dropout(self.embedding(input)) #embedded = [1, batch size, emb dim] a = self.attention(hidden, encoder_outputs) #a = [batch size, src len] a = a.unsqueeze(1) #a = [batch size, 1, src len] encoder_outputs = encoder_outputs.permute(1, 0, 2) #encoder_outputs = [batch size, src len, enc hid dim * 2] weighted = torch.bmm(a, encoder_outputs) #weighted = [batch size, 1, enc hid dim * 2] weighted = weighted.permute(1, 0, 2) #weighted = [1, batch size, enc hid dim * 2] rnn_input = torch.cat((embedded, weighted), dim = 2) #rnn_input = [1, batch size, (enc hid dim * 2) + emb dim] output, hidden = self.rnn(rnn_input, hidden.unsqueeze(0)) #output = [seq len, batch size, dec hid dim * n directions] #hidden = [n layers * n directions, batch size, dec hid dim] #seq len, n layers and n directions will always be 1 in this decoder, therefore: #output = [1, batch size, dec hid dim] #hidden = [1, batch size, dec hid dim] #this also means that output == hidden assert (output == hidden).all() embedded = embedded.squeeze(0) output = output.squeeze(0) weighted = weighted.squeeze(0) prediction = self.fc_out(torch.cat((output, weighted, embedded), dim = 1)) #prediction = [batch size, output dim] return prediction, hidden.squeeze(0) ###Output _____no_output_____ ###Markdown Seq2SeqThis is the first model where we don't have to have the encoder RNN and decoder RNN have the same hidden dimensions, however the encoder has to be bidirectional. This requirement can be removed by changing all occurences of `enc_dim * 2` to `enc_dim * 2 if encoder_is_bidirectional else enc_dim`. This seq2seq encapsulator is similar to the last two. The only difference is that the `encoder` returns both the final hidden state (which is the final hidden state from both the forward and backward encoder RNNs passed through a linear layer) to be used as the initial hidden state for the decoder, as well as every hidden state (which are the forward and backward hidden states stacked on top of each other). We also need to ensure that `hidden` and `encoder_outputs` are passed to the decoder. 
Briefly going over all of the steps:- the `outputs` tensor is created to hold all predictions, $\hat{Y}$- the source sequence, $X$, is fed into the encoder to receive $z$ and $H$- the initial decoder hidden state is set to be the `context` vector, $s_0 = z = h_T$- we use a batch of `` tokens as the first `input`, $y_1$- we then decode within a loop: - inserting the input token $y_t$, previous hidden state, $s_{t-1}$, and all encoder outputs, $H$, into the decoder - receiving a prediction, $\hat{y}_{t+1}$, and a new hidden state, $s_t$ - we then decide if we are going to teacher force or not, setting the next input as appropriate ###Code class Seq2Seq(nn.Module): def __init__(self, encoder, decoder, device): super().__init__() self.encoder = encoder self.decoder = decoder self.device = device def forward(self, src, trg, teacher_forcing_ratio = 0.5): #src = [src len, batch size] #trg = [trg len, batch size] #teacher_forcing_ratio is probability to use teacher forcing #e.g. if teacher_forcing_ratio is 0.75 we use teacher forcing 75% of the time batch_size = src.shape[1] trg_len = trg.shape[0] trg_vocab_size = self.decoder.output_dim #tensor to store decoder outputs outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device) #encoder_outputs is all hidden states of the input sequence, back and forwards #hidden is the final forward and backward hidden states, passed through a linear layer encoder_outputs, hidden = self.encoder(src) #first input to the decoder is the <sos> tokens input = trg[0,:] for t in range(1, trg_len): #insert input token embedding, previous hidden state and all encoder hidden states #receive output tensor (predictions) and new hidden state output, hidden = self.decoder(input, hidden, encoder_outputs) #place predictions in a tensor holding predictions for each token outputs[t] = output #decide if we are going to use teacher forcing or not teacher_force = random.random() < teacher_forcing_ratio #get the highest predicted token from our predictions top1 = output.argmax(1) #if teacher forcing, use actual next token as next input #if not, use predicted token input = trg[t] if teacher_force else top1 return outputs ###Output _____no_output_____ ###Markdown Training the Seq2Seq ModelThe rest of this tutorial is very similar to the previous one.We initialise our parameters, encoder, decoder and seq2seq model (placing it on the GPU if we have one). ###Code INPUT_DIM = len(SRC.vocab) OUTPUT_DIM = len(TRG.vocab) ENC_EMB_DIM = 256 DEC_EMB_DIM = 256 ENC_HID_DIM = 512 DEC_HID_DIM = 512 ENC_DROPOUT = 0.5 DEC_DROPOUT = 0.5 attn = Attention(ENC_HID_DIM, DEC_HID_DIM) enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT) dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn) model = Seq2Seq(enc, dec, device).to(device) ###Output _____no_output_____ ###Markdown We use a simplified version of the weight initialization scheme used in the paper. Here, we will initialize all biases to zero and all weights from $\mathcal{N}(0, 0.01)$. ###Code def init_weights(m): for name, param in m.named_parameters(): if 'weight' in name: nn.init.normal_(param.data, mean=0, std=0.01) else: nn.init.constant_(param.data, 0) model.apply(init_weights) ###Output _____no_output_____ ###Markdown Calculate the number of parameters. We get an increase of almost 50% in the amount of parameters from the last model. 
###Code def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') ###Output _____no_output_____ ###Markdown We create an optimizer. ###Code optimizer = optim.Adam(model.parameters()) ###Output _____no_output_____ ###Markdown We initialize the loss function. ###Code TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token] criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX) ###Output _____no_output_____ ###Markdown We then create the training loop... ###Code def train(model, iterator, optimizer, criterion, clip): model.train() epoch_loss = 0 for i, batch in enumerate(iterator): src = batch.src trg = batch.trg optimizer.zero_grad() output = model(src, trg) #trg = [trg len, batch size] #output = [trg len, batch size, output dim] output_dim = output.shape[-1] output = output[1:].view(-1, output_dim) trg = trg[1:].view(-1) #trg = [(trg len - 1) * batch size] #output = [(trg len - 1) * batch size, output dim] loss = criterion(output, trg) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) ###Output _____no_output_____ ###Markdown ...and the evaluation loop, remembering to set the model to `eval` mode and turn off teaching forcing. ###Code def evaluate(model, iterator, criterion): model.eval() epoch_loss = 0 with torch.no_grad(): for i, batch in enumerate(iterator): src = batch.src trg = batch.trg output = model(src, trg, 0) #turn off teacher forcing #trg = [trg len, batch size] #output = [trg len, batch size, output dim] output_dim = output.shape[-1] output = output[1:].view(-1, output_dim) trg = trg[1:].view(-1) #trg = [(trg len - 1) * batch size] #output = [(trg len - 1) * batch size, output dim] loss = criterion(output, trg) epoch_loss += loss.item() return epoch_loss / len(iterator) ###Output _____no_output_____ ###Markdown Finally, define a timing function. ###Code def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs ###Output _____no_output_____ ###Markdown Then, we train our model, saving the parameters that give us the best validation loss. ###Code N_EPOCHS = 10 CLIP = 1 best_valid_loss = float('inf') for epoch in range(N_EPOCHS): start_time = time.time() train_loss = train(model, train_iterator, optimizer, criterion, CLIP) valid_loss = evaluate(model, valid_iterator, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'tut3-model.pt') print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s') print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}') print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}') ###Output _____no_output_____ ###Markdown Finally, we test the model on the test set using these "best" parameters. ###Code model.load_state_dict(torch.load('tut3-model.pt')) test_loss = evaluate(model, test_iterator, criterion) print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |') ###Output _____no_output_____
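###Markdown To make the `torch.bmm` step inside the decoder concrete, here is a small self-contained sketch (with toy tensor sizes, not taken from the tutorial) of how the softmaxed attention scores turn the encoder outputs into a single weighted source vector per batch element. ###Code
# Standalone sketch of the attention-weighted sum: scores `a` of shape [batch, 1, src len]
# combined with encoder outputs of shape [batch, src len, enc hid dim * 2] via torch.bmm.
import torch
import torch.nn.functional as F

batch_size, src_len, enc_dim = 2, 4, 6        # toy sizes, chosen only for illustration
energies = torch.randn(batch_size, src_len)   # unnormalised scores, one per source position
a = F.softmax(energies, dim=1).unsqueeze(1)   # [batch, 1, src len], each row sums to 1
encoder_outputs = torch.randn(batch_size, src_len, enc_dim)
weighted = torch.bmm(a, encoder_outputs)      # [batch, 1, enc dim], the weighted source vector
print(a.sum(dim=2))                           # every attention distribution sums to 1
print(weighted.shape)                         # torch.Size([2, 1, 6])
###Output _____no_output_____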
notebooks/Chapter04/02-Baseline Forecasts using darts-Copy1.ipynb
###Markdown Baseline Forecasts ###Code pred_df = pd.concat([ts_train, ts_test]) metric_record = [] ts_train = TimeSeries.from_series(ts_train) ts_test = TimeSeries.from_series(ts_test) def eval_model(model, ts_train, ts_test, name=None): if name is None: name = type(model).__name__ model.fit(ts_train) y_pred = model.predict(len(ts_test)) return y_pred, { "Algorithm": name, "MAE": mae(actual_series = ts_test, pred_series = y_pred), "MSE": mse(actual_series = ts_test, pred_series = y_pred), "MASE": mase(actual_series = ts_test, pred_series = y_pred, insample=ts_train), "Forecast Bias": forecast_bias(actual_series = ts_test, pred_series = y_pred) } def eval_model_backtest(model, ts_train, ts_test, name=None): if name is None: name = type(model).__name__ ts = ts_train.append(ts_test) start = ts_test.time_index.min() # model.fit(ts_train) y_pred = model.historical_forecasts(ts, start=start, verbose=True) return y_pred, { "Algorithm": name, "MAE": mae(actual_series = ts_test, pred_series = y_pred), "MSE": mse(actual_series = ts_test, pred_series = y_pred), "MASE": mase(actual_series = ts_test, pred_series = y_pred, insample=ts_train), "Forecast Bias": forecast_bias(actual_series = ts_test, pred_series = y_pred) } def format_y_pred(y_pred, name): y_pred = y_pred.data_array().to_series() y_pred.index = y_pred.index.get_level_values(0) y_pred.name = name return y_pred from itertools import cycle def plot_forecast(pred_df, forecast_columns, forecast_display_names=None): if forecast_display_names is None: forecast_display_names = forecast_columns else: assert len(forecast_columns)==len(forecast_display_names) mask = ~pred_df[forecast_columns[0]].isnull() colors = ["rgba("+",".join([str(c) for c in plotting_utils.hex_to_rgb(c)])+",<alpha>)" for c in px.colors.qualitative.Plotly] act_color = colors[0] colors = cycle(colors[1:]) fig = go.Figure() fig.add_trace(go.Scatter(x=pred_df[mask].index, y=pred_df[mask].energy_consumption, mode='lines', line = dict(color=act_color.replace("<alpha>", "0.9")), name='Actual Consumption')) for col, display_col in zip(forecast_columns,forecast_display_names): fig.add_trace(go.Scatter(x=pred_df[mask].index, y=pred_df.loc[mask, col], mode='lines', line = dict(dash='dot', color=next(colors).replace("<alpha>", "1")), name=display_col)) return fig ###Output _____no_output_____ ###Markdown Running Baseline Forecast for all consumers ###Code lcl_ids = sorted(train_df.LCLid.unique()) ###Output _____no_output_____ ###Markdown Naive Forecast ###Code from src.utils.ts_utils import darts_metrics_adapter name = "Naive" naive_preds = [] naive_metrics = [] for lcl_id in tqdm(lcl_ids): # naive_model = NaiveSeasonal(K=1) tr = train_df.loc[train_df.LCLid==lcl_id, ["timestamp","energy_consumption"]].set_index("timestamp") ts = test_df.loc[test_df.LCLid==lcl_id, ["timestamp","energy_consumption"]].set_index("timestamp") tr_ts = pd.concat([tr, ts]) tr_ts['naive_predictions'] = tr_ts['energy_consumption'].shift(1) y_pred = tr_ts.loc[ts.index].drop(columns="energy_consumption") metrics = { "Algorithm": name, "MAE": darts_metrics_adapter(mae,actual_series = ts, pred_series = y_pred), "MSE": darts_metrics_adapter(mse,actual_series = ts, pred_series = y_pred), "MASE": darts_metrics_adapter(mase,actual_series = ts, pred_series = y_pred, insample=tr.energy_consumption), "Forecast Bias": darts_metrics_adapter(forecast_bias,actual_series = ts, pred_series = y_pred) } # break # y_pred = format_y_pred(y_pred, "naive_predictions").to_frame() y_pred['LCLid'] = lcl_id metrics["LCLid"] = lcl_id 
y_pred['energy_consumption'] = ts.energy_consumption.values
    naive_preds.append(y_pred)
    naive_metrics.append(metrics)
    # break
naive_pred_df = pd.concat(naive_preds)
naive_pred_df.head()
naive_metric_df = pd.DataFrame(naive_metrics)
naive_metric_df.head()
from src.utils import ts_utils
overall_metrics_naive = {
    "MAE": ts_utils.mae(naive_pred_df["energy_consumption"], naive_pred_df["naive_predictions"]),
    "MSE": ts_utils.mse(naive_pred_df.energy_consumption, naive_pred_df.naive_predictions),
    "meanMASE": naive_metric_df.MASE.mean(),
    "Forecast Bias": ts_utils.forecast_bias_aggregate(naive_pred_df.energy_consumption, naive_pred_df.naive_predictions)
}
overall_metrics_naive
###Output _____no_output_____
###Markdown Seasonal Naive Forecast ###Code
from src.utils.ts_utils import darts_metrics_adapter
name = "Seasonal Naive"
# the seasonal naive forecast repeats the value observed one full season earlier
seasonal_period = 48  # assumption: half-hourly readings with daily seasonality; adjust to the actual frequency of the series
snaive_preds = []
snaive_metrics = []
for lcl_id in tqdm(lcl_ids):
    tr = train_df.loc[train_df.LCLid==lcl_id, ["timestamp","energy_consumption"]].set_index("timestamp")
    ts = test_df.loc[test_df.LCLid==lcl_id, ["timestamp","energy_consumption"]].set_index("timestamp")
    tr_ts = pd.concat([tr, ts])
    tr_ts['snaive_predictions'] = tr_ts['energy_consumption'].shift(seasonal_period)
    y_pred = tr_ts.loc[ts.index].drop(columns="energy_consumption")
    metrics = {
        "Algorithm": name,
        "MAE": darts_metrics_adapter(mae, actual_series=ts, pred_series=y_pred),
        "MSE": darts_metrics_adapter(mse, actual_series=ts, pred_series=y_pred),
        "MASE": darts_metrics_adapter(mase, actual_series=ts, pred_series=y_pred, insample=tr.energy_consumption),
        "Forecast Bias": darts_metrics_adapter(forecast_bias, actual_series=ts, pred_series=y_pred)
    }
    y_pred['LCLid'] = lcl_id
    metrics["LCLid"] = lcl_id
    y_pred['energy_consumption'] = ts.energy_consumption.values
    snaive_preds.append(y_pred)
    snaive_metrics.append(metrics)
snaive_pred_df = pd.concat(snaive_preds)
snaive_pred_df.head()
snaive_metric_df = pd.DataFrame(snaive_metrics)
snaive_metric_df.head()
from src.utils import ts_utils
overall_metrics_snaive = {
    "MAE": ts_utils.mae(snaive_pred_df["energy_consumption"], snaive_pred_df["snaive_predictions"]),
    "MSE": ts_utils.mse(snaive_pred_df.energy_consumption, snaive_pred_df.snaive_predictions),
    "meanMASE": snaive_metric_df.MASE.mean(),
    "Forecast Bias": ts_utils.forecast_bias_aggregate(snaive_pred_df.energy_consumption, snaive_pred_df.snaive_predictions)
}
overall_metrics_snaive
###Output _____no_output_____
###Markdown FFT ###Code
name = "FFT"
fft_preds = []
fft_metrics = []
for lcl_id in tqdm(lcl_ids):
    fft_model = FFT(nr_freqs_to_keep=35, trend="poly", trend_poly_degree=2)
    tr = TimeSeries.from_series(train_df.loc[train_df.LCLid==lcl_id, ["timestamp","energy_consumption"]].set_index("timestamp"))
    ts = TimeSeries.from_series(test_df.loc[test_df.LCLid==lcl_id, ["timestamp","energy_consumption"]].set_index("timestamp"))
    y_pred, metrics = eval_model(fft_model, tr, ts, name=name)
    y_pred = format_y_pred(y_pred, "fft_predictions").to_frame()
    y_pred['LCLid'] = lcl_id
    metrics["LCLid"] = lcl_id
    y_pred['energy_consumption'] = ts.data_array().to_series().values
    fft_preds.append(y_pred)
    fft_metrics.append(metrics)
fft_pred_df = pd.concat(fft_preds)
fft_pred_df.head()
fft_metric_df = pd.DataFrame(fft_metrics)
fft_metric_df.head()
actual_series = TimeSeries.from_values(fft_pred_df.energy_consumption.values)
pred_series = TimeSeries.from_values(fft_pred_df.fft_predictions.values)
overall_metrics_fft = {
    "MAE": mae(actual_series = actual_series, pred_series =
pred_series), "MSE": mse(actual_series = actual_series, pred_series = pred_series), "meanMASE": fft_metric_df.MASE.mean(), "Forecast Bias": ope(actual_series = actual_series, pred_series = pred_series) } overall_metrics_fft ###Output _____no_output_____ ###Markdown Evaluation of Baseline Forecast ###Code p_df = pd.DataFrame([overall_metrics_fft, overall_metrics_theta], index=["FFT","Theta"]) p_df.style.format({"MAE": "{:.3f}", "MSE": "{:.3f}", "meanMASE": "{:.3f}", "Forecast Bias": "{:.2f}%"}).highlight_min(color='lightgreen') baseline_pred_df = theta_pred_df.reset_index().merge(fft_pred_df.reset_index().drop(columns='energy_consumption'), on=['time','LCLid'], how='outer') baseline_metrics_df = pd.concat([theta_metric_df, fft_metric_df]) baseline_metrics_df.head() fig = px.histogram(baseline_metrics_df, x="MASE", color="Algorithm", pattern_shape="Algorithm", marginal="box", nbins=500, barmode="overlay", histnorm="probability density") fig = format_plot(fig, xlabel="MASE", ylabel="Probability Density", title="Distribution of MASE in the dataset") fig.update_layout(xaxis_range=[0,10]) fig.write_image("imgs/chapter_4/mase_dist.png") fig.show() fig = px.histogram(baseline_metrics_df, x="MAE", color="Algorithm", pattern_shape="Algorithm", marginal="box", nbins=100, barmode="overlay", histnorm="probability density") fig = format_plot(fig, xlabel="MAE", ylabel="Probability Density", title="Distribution of MAE in the dataset") fig.write_image("imgs/chapter_4/mae_dist.png") fig.update_layout(xaxis_range=[0,1.1]) fig.show() fig = px.histogram(baseline_metrics_df, x="MSE", color="Algorithm", pattern_shape="Algorithm", marginal="box", nbins=500, barmode="overlay", histnorm="probability density") fig = format_plot(fig, xlabel="MSE", ylabel="Probability Density", title="Distribution of MSE in the dataset") fig.update_layout(xaxis_range=[0,1]) fig.write_image("imgs/chapter_4/mse_dist.png") fig.show() fig = px.histogram(baseline_metrics_df, x="Forecast Bias", color="Algorithm", pattern_shape="Algorithm", marginal="box", nbins=250, barmode="overlay", histnorm="probability density") fig = format_plot(fig, xlabel="Forecast Bias", ylabel="Probability Density", title="Distribution of Forecast Bias in the dataset") fig.update_layout(xaxis_range=[-200,200]) fig.write_image("imgs/chapter_4/bias_dist.png") fig.show() ###Output _____no_output_____ ###Markdown Saving the Baseline Forecasts and Metrics ###Code os.makedirs("data/london_smart_meters/output", exist_ok=True) output = Path("data/london_smart_meters/output") baseline_pred_df.to_pickle(output/"baseline_prediction_df.pkl") baseline_metrics_df.to_pickle(output/"baseline_metrics_df.pkl") p_df.to_pickle(output/"baseline_aggregate_metrics.pkl") ###Output _____no_output_____
samples/Notebook-and-Environment-samples-for-Projects.ipynb
###Markdown CPDCTL Samples for Notebooks and Environments in Projects CPDCTL is a command-line interface (CLI) you can use to manage the lifecycle of notebooks. By using the notebook CLI, you can automate the flow for creating notebooks and running notebook jobs, moving notebooks between projects in Watson Studio, and adding custom libraries to notebook runtime environments. This notebook begins by showing you how to install and configure CPDCTL and is then split up into four sections with examples of how to use the commands for:- Creating notebooks and running notebook jobs- Creating Python scripts and running script jobs- Downloading notebooks from one project and uploading them to another project- Adding custom libraries to a notebook runtime environment Table of Contents [1. Installing and configuring CPDCTL](part1)- [1.1 Installing the latest version of CPDCTL](part1.1)- [1.2 Adding CPD cluster configuration settings](part1.2)[2. Demo 1: Creating a notebook asset and running a job](part2)- [2.1 Creating a notebook asset](part2.1)- [2.2 Running a job](part2.2)[3. Demo 2: Creating a Python script asset and running a job](part3)- [3.1 Creating a Python script asset](part3.1)- [3.2 Running a job](part3.2)[4. Demo 3: Downloading a notebook and uploading it to another project](part4)- [4.1 Downloading a notebook](part4.1)- [4.2 Uploading the notebook to another project](part4.2)[5. Demo 4: Adding additional packages to custom environment](part5)- [5.1 Creating a custom software specification](part5.1)- [5.2 Adding additional packages](part5.2)- [5.3 Creating a custom environment](part5.3) Before you beginImport the following libraries: ###Code import base64 import json import os import requests import platform import tarfile import zipfile from IPython.core.display import display, HTML ###Output _____no_output_____ ###Markdown 1. Installing and configuring CPDCTL 1.1 Installing the latest version of CPDCTL To use the notebook and environment CLI commands, you need to install CPDCTL. Download the binary from the [CPDCTL GitHub respository](https://github.com/IBM/cpdctl/releases). Download the binary and then display the version number: ###Code PLATFORM = platform.system().lower() CPDCTL_ARCH = "{}_amd64".format(PLATFORM) CPDCTL_RELEASES_URL="https://api.github.com/repos/IBM/cpdctl/releases" CWD = os.getcwd() PATH = os.environ['PATH'] CPDCONFIG = os.path.join(CWD, '.cpdctl.config.yml') response = requests.get(CPDCTL_RELEASES_URL) assets = response.json()[0]['assets'] platform_asset = next(a for a in assets if CPDCTL_ARCH in a['name']) cpdctl_url = platform_asset['url'] cpdctl_file_name = platform_asset['name'] response = requests.get(cpdctl_url, headers={'Accept': 'application/octet-stream'}) with open(cpdctl_file_name, 'wb') as f: f.write(response.content) display(HTML('<code>cpdctl</code> binary downloaded from: <a href="{}">{}</a>'.format(platform_asset['browser_download_url'], platform_asset['name']))) %%capture %env PATH={CWD}:{PATH} %env CPDCONFIG={CPDCONFIG} if cpdctl_file_name.endswith('tar.gz'): with tarfile.open(cpdctl_file_name, "r:gz") as tar: tar.extractall() elif cpdctl_file_name.endswith('zip'): with zipfile.ZipFile(cpdctl_file_name, 'r') as zf: zf.extractall() if CPDCONFIG and os.path.exists(CPDCONFIG): os.remove(CPDCONFIG) version_r = ! 
cpdctl version CPDCTL_VERSION = version_r.s print("cpdctl version: {}".format(CPDCTL_VERSION)) ###Output cpdctl version: 1.0.0 ###Markdown 1.2 Adding CPD cluster configuration settings Before you can use CPDCTL, you need to add configuration settings. You only need to configure these settings once for the same IBM Cloud Pak for Data (CPD) user and cluster. Begin by entering your CPD credentials and the URL to the CPD cluster: ###Code CPD_USER_NAME = #'YOUR CPD user name' CPD_USER_PASSWORD = #'YOUR CPD user password' CPD_URL = #'YOUR CPD CLUSTER URL' ###Output _____no_output_____ ###Markdown Add "cpd_user" user to the cpdctl configuration: ###Code ! cpdctl config user set cpd_user --username {CPD_USER_NAME} --password {CPD_USER_PASSWORD} ###Output _____no_output_____ ###Markdown Add "cpd" cluster to the cpdctl configuration: ###Code ! cpdctl config profile set cpd --url {CPD_URL} ###Output _____no_output_____ ###Markdown Add "cpd" context to the cpdctl configuration: ###Code ! cpdctl config context set cpd --profile cpd --user cpd_user ###Output _____no_output_____ ###Markdown List available contexts: ###Code ! cpdctl config context list ###Output Name Profile User Current cpd cpd cpd_user * ###Markdown Switch to the context you just created if it is not marked in the `Current` column: ###Code ! cpdctl config context use cpd ###Output Switched to context "cpd". ###Markdown List available projects in context: ###Code ! cpdctl project list ###Output ... ID Name Created Description Tags 09a3d37e-7572-4b54-88d5-44ae9f2e262a test2 2021-01-21T14:11:13.347Z [] 45c4b416-0700-46eb-be6e-7bb6bcd0a69f test 2021-01-21T14:10:21.116Z [] 5b36b5b9-98b3-4241-afa0-9ad85908ee19 Default Notebooks 2021-01-14T17:33:05.918Z [] ###Markdown Choose the project in which you want to work: ###Code result = ! cpdctl project list --output json -j "(resources[].metadata.guid)[0]" --raw-output project_id = result.s print("project id: {}".format(project_id)) # You can also specify your project id directly: # project_id = "Your project ID" ###Output project id: 09a3d37e-7572-4b54-88d5-44ae9f2e262a ###Markdown 2. Demo 1: Creating a notebook asset and running a job Before starting with this section, ensure that you have run the cells in [Section 1](part1) and specified the ID of the project in which you will work. Suppose you have a Jupyter Notebook (.ipynb) file on your local system and you would like to run the code in the file as a job on a CPD cluster. This section shows you how to create a notebook asset and run a job on a CPD cluster. 2.1 Creating a notebook asset First of all, you need to create a notebook asset in your project. To create a notebook asset you need to specify:- The environment in which your notebook is to run- A notebook file (.ipynb). List all the environments in your project, filter them by their display name and get the ID of the environment in which your notebook will be run: ###Code environment_name = "Default Python 3.7" query_string = "(resources[?entity.environment.display_name == '{}'].metadata.asset_id)[0]".format(environment_name) result = ! 
cpdctl environment list --project-id {project_id} --output json -j "{query_string}" --raw-output env_id = result.s print("environment id: {}".format(env_id)) # You can also specify your environment id directly: # env_id = "Your environment ID" ###Output environment id: jupconda37-09a3d37e-7572-4b54-88d5-44ae9f2e262a ###Markdown Upload the .ipynb file: ###Code remote_file_path = "notebook/cpdctl-test-notebook.ipynb" local_file_path = "cpdctl-test-notebook.ipynb" ! cpdctl asset file upload --path {remote_file_path} --file {local_file_path} --project-id {project_id} ###Output ... OK ###Markdown Create a notebook asset: ###Code file_name = "cpdctl-test-notebook.ipynb" runtime = { 'environment': env_id } runtime_json = json.dumps(runtime) originate = { 'type': 'blank' } originate_json = json.dumps(originate) result = ! cpdctl notebook create --file-reference {remote_file_path} --name {file_name} --project {project_id} --runtime '{runtime_json}' --originates-from '{originate_json}' --output json -j "metadata.asset_id" --raw-output notebook_id = result.s print("notebook id: {}".format(notebook_id)) ###Output notebook id: 9a0aa244-2573-4481-b37a-76fca9e00e64 ###Markdown 2.2 Running a job Before creating a notebook job, you need to create a version of your notebook: ###Code result = ! cpdctl notebook version create --notebook-id {notebook_id} --output json -j "metadata.guid" --raw-output version_id = result.s print("version id: {}".format(version_id)) ###Output version id: abef40d5-7577-4254-af41-69c9a2a9923b ###Markdown To create a notebook job, you need to give your job a name, add a description, and pass the notebook ID and environment ID you determined in [2.1](part2.1). Additionally, you can add environment variables that will be used in your notebook: ###Code job_name = "cpdctl-test-job" job = { 'asset_ref': notebook_id, 'configuration': { 'env_id': env_id, 'env_variables': [ 'foo=1', 'bar=2'] }, 'description': 'my job', 'name': job_name } job_json = json.dumps(job) result = ! cpdctl job create --job '{job_json}' --project-id {project_id} --output json -j "metadata.asset_id" --raw-output job_id = result.s print("job id: {}".format(job_id)) ###Output job id: 20746052-166b-4fc9-949d-e8bbec863ffd ###Markdown Run a notebook job: ###Code run_data = { 'job_run': {} } run_data_json = json.dumps(run_data) result = ! cpdctl job run create --project-id {project_id} --job-id {job_id} --job-run '{run_data_json}' --output json -j "metadata.asset_id" --raw-output run_id = result.s print("run id: {}".format(run_id)) ###Output run id: 850feaa4-b55d-4866-b1dc-55a1156460c5 ###Markdown You can see the output of each cell in your .ipynb file by listing job run logs: ###Code ! cpdctl job run logs --job-id {job_id} --run-id {run_id} --project-id {project_id} ###Output ... total_count results 7 Cell 1: 7 0 7 1 7 4 7 9 7 16 7 ###Markdown 3. Demo 2: Creating a Python script asset and running a job Before starting with this section, ensure that you have run the cells in [Section 1](part1) and specified the ID of the project in which you will work. Suppose you have a Python script (.py) on your local system and you would like to run the code in the script as a job on a CPD cluster. This section shows you how to create a Python script asset and run a job on a CPD cluster. 3.1 Creating a Python script asset Upload the script (.py) file: ###Code remote_file_path = "script/test_script.py" local_file_path = "test_script.py" ! 
cpdctl asset file upload --path {remote_file_path} --file {local_file_path} --project-id {project_id} ###Output ... OK ###Markdown Specify the metadata, entity and attachments of the script file: ###Code metadata = { "name": "my_test_script", "asset_type": "script", "asset_category": "USER", "origin_country": "us" } metadata_json = json.dumps(metadata) entity = { "script": { "language": { "name": "python3" } } } entity_json = json.dumps(entity) attachments = [ { "asset_type": "script", "name": "my_test_script", "description": "attachment for script", "mime": "application/text", "object_key": remote_file_path } ] attachments_json = json.dumps(attachments) ###Output _____no_output_____ ###Markdown Create a Python script asset: ###Code result = ! cpdctl asset create --metadata '{metadata_json}' --entity '{entity_json}' --attachments '{attachments_json}' --project-id {project_id} --output json -j "metadata.asset_id" --raw-output script_id = result.s print("script id: {}".format(script_id)) ###Output script id: 7b4e1280-c2d1-4004-b15a-d619b5266efe ###Markdown 3.2 Running a job Similar to a notebook job, you need to specify the environment in which your script job is to run: ###Code environment_name = "Default Python 3.7" query_string = "(resources[?entity.environment.display_name == '{}'].metadata.asset_id)[0]".format(environment_name) result = ! cpdctl environment list --project-id {project_id} --output json -j "{query_string}" --raw-output env_id = result.s print("environment id: {}".format(env_id)) # You can also specify your environment id directly: # env_id = "Your environment ID" ###Output environment id: jupconda37-09a3d37e-7572-4b54-88d5-44ae9f2e262a ###Markdown Now you can create a script job. To do this, you need to give your script job a name, a description, and pass the script ID and environment ID. ###Code job_name = "cpdctl-test-job-for-script" job = { 'asset_ref': script_id, 'configuration': { 'env_id': env_id, 'env_variables': [ 'foo=1', 'bar=2']}, 'description': 'my script job', 'name': job_name } job_json = json.dumps(job) result = ! cpdctl job create --job '{job_json}' --project-id {project_id} --output json -j "metadata.asset_id" --raw-output job_id = result.s print("job id: {}".format(job_id)) ###Output job id: 0495fa5b-8b18-44c1-86e0-3d6a340405c7 ###Markdown Run your script job: ###Code run_data = { 'job_run': {} } run_data_json = json.dumps(run_data) result = ! cpdctl job run create --project-id {project_id} --job-id {job_id} --job-run '{run_data_json}' --output json -j "metadata.asset_id" --raw-output run_id = result.s print("run id: {}".format(run_id)) ###Output run id: de481355-7791-43cd-8c65-dce744b101fa ###Markdown Show your script job run logs: ###Code ! cpdctl job run logs --job-id {job_id} --run-id {run_id} --project-id {project_id} ###Output ... total_count results 6 25 6 36 6 49 6 64 6 81 6 ###Markdown 4. Demo 3: Downloading a notebook and uploading it to another project Before starting with this section, ensure that you have run the cells in [Section 1](part1) and specified the ID of the project in which you will work. Suppose you have a notebook in one project and would like to add a specific version of this notebook to another project. To do this, you first need to download the notebook file to your local system and then upload it to the other project. After that you need to create a notebook asset in your project by referencing the uploaded notebook file (.ipynb) and specifying the environment in which your notebook is to run. 
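###Markdown Before stepping through 4.1 and 4.2, here is an illustrative helper (not part of cpdctl itself) that scripts the same download and upload commands used in this demo via `subprocess`; the function name and defaults are made up for the example. ###Code
# Illustrative wrapper around the `cpdctl asset file download` / `upload`
# commands shown in sections 4.1 and 4.2 below, so the copy step can be scripted.
import subprocess

def copy_notebook_file(version_storage_path, src_project_id, dst_project_id,
                       local_file="my-new-notebook.ipynb"):
    # Download the chosen notebook version from the source project.
    subprocess.run(
        ["cpdctl", "asset", "file", "download",
         "--path", version_storage_path,
         "--output-file", local_file,
         "--project-id", src_project_id],
        check=True,
    )
    # Upload the downloaded file into the target project.
    subprocess.run(
        ["cpdctl", "asset", "file", "upload",
         "--path", "notebook/{}".format(local_file),
         "--file", local_file,
         "--project-id", dst_project_id],
        check=True,
    )
###Output _____no_output_____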
4.1 Downloading a notebook You can select which notebook version you want to download. List notebook versions: ###Code ! cpdctl notebook version list --notebook-id {notebook_id} ###Output ... ID Created abef40d5-7577-4254-af41-69c9a2a9923b 1611757174773 ###Markdown Get the path in the storage volume to the notebook version that you want to download: ###Code result = ! cpdctl notebook version list --notebook-id {notebook_id} --output json -j "(resources[].metadata.guid)[0]" --raw-output version_id = result.s print("version id: {}".format(version_id)) # You can also specify your version id directly: # env_id = "Your version ID" result = ! cpdctl notebook version get --notebook-id {notebook_id} --version-id {version_id} --output json -j "entity.file_reference" --raw-output version_storage_path = result.s print("version storage path: {}".format(version_storage_path)) ###Output version storage path: .notebook_versions/cpdctl-test-notebook_version_1611757174773.ipynb ###Markdown Download the noteboook asset with the specific version from the storage path: ###Code file_name = "my-new-notebook.ipynb" ! cpdctl asset file download --path {version_storage_path} --output-file {file_name} --project-id {project_id} --raw-output ###Output ... OK Output written to my-new-notebook.ipynb ###Markdown 4.2 Uploading the notebook to another project Determine the ID of the project to which you want to upload your notebook: ###Code result = ! cpdctl project list --output json -j "(resources[].metadata.guid)[1]" --raw-output project2_id = result.s print("another project id: {}".format(project2_id)) # You can also specify your another project id directly: # project2_id = "Your another project ID" ###Output another project id: 45c4b416-0700-46eb-be6e-7bb6bcd0a69f ###Markdown Upload the notebook file to this project: ###Code remote_file_path = "notebook/{}".format(file_name) ! cpdctl asset file upload --path {remote_file_path} --file {file_name} --project-id {project2_id} ###Output ... OK ###Markdown After you have uploaded the notebook file to the project, you need to specify the environment in which to run the notebook: ###Code environment_name = "Default Python 3.7" query_string = "(resources[?entity.environment.display_name == '{}'].metadata.asset_id)[0]".format(environment_name) result = ! cpdctl environment list --project-id {project2_id} --output json -j "{query_string}" --raw-output env_id = result.s print("environment id: {}".format(env_id)) # You can also specify your environment id directly: # env_id = "Your environment ID" ###Output environment id: jupconda37-45c4b416-0700-46eb-be6e-7bb6bcd0a69f ###Markdown Now you can create a notebook asset in this project by referencing the uploaded notebook file: ###Code file_name = "my-new-notebook-in-another-project.ipynb" runtime = { 'environment': env_id } runtime_json = json.dumps(runtime) originate = { 'type': 'blank' } originate_json = json.dumps(originate) result = ! cpdctl notebook create --file-reference {remote_file_path} --name {file_name} --project {project2_id} --originates-from '{originate_json}' --runtime '{runtime_json}' --output json -j "metadata.asset_id" --raw-output notebook_id = result.s print("notebook id: {}".format(notebook_id)) ###Output notebook id: 7c920b46-47a4-4ebf-9a88-f9c9024af0a3 ###Markdown 5. Demo 4: Adding additional packages for custom environment Before starting with this section, ensure that you have run the cells in [Section 1](part1) and specified the ID of the project in which you will work. 
Suppose you have a `conda-yml` file that lists your additional packages **or** you have a `pip-zip` file containing your custom packages, and you would like to install these packages in your custom environment. To do this, you need to:- Create a custom software specification- Add your custom packages- Create a custom environment 5.1 Creating a custom software specification To create a custom software specification, you need to specify the base software specification that you want to customize. You can list all the software specifications in your project and choose one of them as the base software specification: ###Code ! cpdctl environment software-specification list --project-id {project_id} base_sw_spec_name = "Default Python 3.7" query_string = "(resources[?metadata.description == '{}'].metadata.asset_id)[0]".format(base_sw_spec_name) result = ! cpdctl environment software-specification list --project-id {project_id} --output json -j "{query_string}" --raw-output base_sw_spec_id = result.s print("base software specification id: {}".format(base_sw_spec_id)) # You can also specify your base software specification id directly: # based_sw_spec_id = "Your base software specification ID" ###Output base software specification id: e4429883-c883-42b6-87a8-f419d64088cd ###Markdown Create a custom software specification: ###Code custom_sw_spec_name = "my_sw_spec" base_sw_spec = { 'guid': base_sw_spec_id } base_sw_spec_json = json.dumps(base_sw_spec) sw_conf = {} sw_conf_json = json.dumps(sw_conf) result = ! cpdctl environment software-specification create --project-id {project_id} --name {custom_sw_spec_name} --base-software-specification '{base_sw_spec_json}' --software-configuration '{sw_conf_json}' --output json -j "metadata.asset_id" --raw-output custom_sw_spec_id = result.s print("custom software specification id: {}".format(custom_sw_spec_id)) ###Output custom software specification id: d083a07d-42e0-4486-b839-bd019d71ef73 ###Markdown 5.2 Adding additional packages Create a package extension: ###Code pkg_name = "my_test_packages" result = ! cpdctl environment package-extension create --name {pkg_name} --type "conda_yml" --project-id {project_id} --output json pkg_ext_id = json.loads(result.s)['metadata']['asset_id'] print("package extension id: {}".format(pkg_ext_id)) ###Output package extension id: 73f7267e-bc02-42fc-ae5f-a54a32a27fc6 ###Markdown Get the path to where you want to upload the additional packages: ###Code pkg_ext_href = json.loads(result.s)['entity']['package_extension']['href'].split('/')[4].split('?')[0] remote_pkg_path = "package_extension/{}".format(pkg_ext_href) print("path where asset should be uploaded: {}".format(remote_pkg_path)) ###Output path where asset should be uploaded: package_extension/my_test_packages_An_bkImwj2.yml ###Markdown Define a conda-yaml file listing additional packages: ###Code my_yaml = """ channels: - defaults dependencies: - pip: - fuzzywuzzy """ with open('my-pkg-ext.yaml', 'w') as f: f.write(my_yaml) ###Output _____no_output_____ ###Markdown Upload additional packages to the path returned in the previous command: ###Code local_pkg_path = "./my-pkg-ext.yaml" ! cpdctl asset file upload --path "{remote_pkg_path}" --file {local_pkg_path} --project-id {project_id} ! cpdctl environment package-extension upload-complete --package-extension-id {pkg_ext_id} --project-id {project_id} ###Output ... OK ###Markdown Add the package extension into the custom software specification: ###Code ! 
cpdctl environment software-specification add-package-extensions --software-specification-id {custom_sw_spec_id} --package-extension-id {pkg_ext_id} --project-id {project_id} ###Output ... OK ###Markdown 5.3 Creating a custom environment List all the hardware specifications in your project and choose one that fits your custom environment: ###Code ! cpdctl environment hardware-specification list --project-id {project_id} hw_spec_keyword_1 = "one CPU core" hw_spec_keyword_2 = "4 GiB of memory" query_string = "(resources[?contains(metadata.description, '{}') && contains(metadata.description, '{}')].metadata.asset_id)[0]".format(hw_spec_keyword_1, hw_spec_keyword_2) result = ! cpdctl environment hardware-specification list --project-id {project_id} --output json -j "{query_string}" --raw-output hw_spec_id = result.s print("hardware specification id: {}".format(hw_spec_id)) # You can also specify your hardware specification id directly: # hw_spec_id = "Your base software specification ID" ###Output hardware specification id: f3ebac7d-0a75-410c-8b48-a931428cc4c5 ###Markdown Create a custom environment by specifying the hardware specification, the custom software specification and the tool specification: ###Code env_name = "my_custom_env" hw_spec = { 'guid': hw_spec_id } custom_sw_spec = { 'guid': custom_sw_spec_id } custom_sw_spec_json = json.dumps(custom_sw_spec) tool_spec = { 'supported_kernels': [{ 'language': 'python', 'version': '3.7', 'display_name': 'Python 3.7' }] } hw_spec_json = json.dumps(hw_spec) tool_spec_json = json.dumps(tool_spec) result = ! cpdctl environment create --project-id {project_id} --type "notebook" --name {env_name} --display-name {env_name} --hardware-specification '{hw_spec_json}' --software-specification '{custom_sw_spec_json}' --tools-specification '{tool_spec_json}' --output json -j "metadata.asset_id" --raw-output custom_env_id = result.s print("custom environment id: {}".format(custom_env_id)) ###Output custom environment id: 2b18f3e4-34ca-45b6-8f8e-49d34551b919
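###Markdown As a closing aside, the JSON output of cpdctl can also be parsed with the Python standard library instead of the `-j` JMESPath filters used above; the small wrapper below is an illustrative sketch, not an official cpdctl API. ###Code
# Illustrative sketch: shell out to cpdctl with --output json and parse the result.
import json
import subprocess

def cpdctl_json(*args):
    """Run a cpdctl sub-command with --output json and return the parsed JSON."""
    completed = subprocess.run(
        ["cpdctl", *args, "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(completed.stdout)

# Example: list projects (same command used earlier in this notebook).
# The top-level "resources" list matches the JMESPath queries used above.
projects = cpdctl_json("project", "list")
for p in projects.get("resources", []):
    print(p["metadata"]["guid"])
###Output _____no_output_____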
14 - App for Lottery Addiction/Lottery_6_49.ipynb
###Markdown App to Prevent Gambling AddictionsA medical institute that aims to prevent and treat gambling addictions wants to build a dedicated mobile app to help lottery addicts better estimate their chances of winning. The institute has a team of engineers that will build the app, but they need us to create the logical core of the app and calculate probabilities.For the first version of the app, they want us to focus on the 6/49 lottery and build functions that enable users to answer questions like:- What is the probability of winning the big prize with a single ticket?- What is the probability of winning the big prize if we play 40 different tickets (or any other number)?- What is the probability of having at least five (or four, or three, or two) winning numbers on a single ticket? Defining FunctionsThroughout the project, we'll need to calculate repeatedly probabilities and combinations. As a consequence, we'll start by writing two functions that we'll use often:- A function that calculates factorials; and- A function that calculates combinations. ###Code # Example of a factorial number: 5! = 5*4*3*2*1 def factorial(n): terms = [n] c = n for i in range(n): if c != 1: terms.append(c-1) c -= 1 else: break product = 1 for term in terms: product *= term return product def combinations(n, k): combination = factorial(n)/(factorial(n-k) * factorial(k)) return combination ###Output _____no_output_____ ###Markdown One-Ticket ProbabilityFor the first version of the app, we want players to be able to calculate the probability of winning the big prize with the various numbers they play on a single ticket (for each ticket a player chooses six numbers out of 49). So, we'll start by building a function that calculates the probability of winning the big prize for any given ticket. Furthermore, we'll show the probability value in a friendly way — in a way that people without any probability training are able to understand. ###Code def one_ticket_probability(list_6): len_list = len(list_6) all_outcomes = combinations(49, len_list) win_probability_pct = (1/all_outcomes)*100 return print('''Your chances to win the big prize with the numbers {} are {:.7f}%. In other words, you have a 1 in {:,} chances to win.'''.format(list_6, win_probability_pct, int(all_outcomes))) print(one_ticket_probability([1,2,3,4,5,6])) ###Output Your chances to win the big prize with the numbers [1, 2, 3, 4, 5, 6] are 0.0000072%. In other words, you have a 1 in 13,983,816 chances to win. None ###Markdown Canada Lottery Historical DataFor the first version of the app, users should also be able to compare their ticket against the historical lottery data in Canada and determine whether they would have ever won by now.Now, we'll focus on exploring the historical data coming from the Canada 6/49 lottery. The data set can be downloaded [here](https://www.kaggle.com/datascienceai/lottery-dataset). ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline lottery = pd.read_csv('649.csv') print(lottery.shape) lottery.head(3) lottery.tail(3) ###Output _____no_output_____ ###Markdown Function for Historical Data CheckThe engineering team wants us to write a function that prints:- The number of times the combination selected occurred in the Canada data set; and- The probability of winning the big prize in the next drawing with that combination. 
###Code def extract_numbers(row): row = row[4:10] row = set(row.values) return row lottery['winning_numbers'] = lottery.apply(extract_numbers, axis=1) lottery.head(3) def check_historical_occurence(list_6, historic=lottery['winning_numbers']): set_numbers = set(list_6) series = historic == set_numbers counter = series.sum() return print('''Number of times the combination {} occurred: {} Regardless whether this combination occurred or not, your chances to win are the same. Your chances to win the big prize with these numbers are 0.0000072%. In other words, you have a 1 in 13,983,816 chances to win.'''.format(list_6, counter)) check_historical_occurence(list_6=[3,11,12,14,41,44]) ###Output Number of times the combination [3, 11, 12, 14, 41, 44] occurred: 0 Regardless whether this combination occurred or not, your chances to win are the same. Your chances to win the big prize with these numbers are 0.0000072%. In other words, you have a 1 in 13,983,816 chances to win. ###Markdown Multi-Ticket ProbabilityLottery addicts usually play more than one ticket on a single drawing, thinking that this might increase their chances of winning significantly. Our purpose is to help them better estimate their chances of winning — we're going to write a function that will allow the users to calculate the chances of winning for any number of different tickets.We've talked with the engineering team and they gave us the following information:- The user will input the number of different tickets they want to play (without inputting the specific combinations they intend to play).- Our function will see an integer between 1 and 13,983,816 (the maximum number of different tickets).- The function should print information about the probability of winning the big prize depending on the number of different tickets played. ###Code def multi_ticket_probability(n_tickets): all_outcomes = combinations(49, 6) probability = (n_tickets/all_outcomes)*100 if n_tickets == 1: print('''Your chances to win the big prize with one ticket are {:.7f}%. In other words, you have a 1 in {:,} chances to win.'''.format(probability, int(all_outcomes))) else: combinations_simplified = round(all_outcomes / n_tickets) print('''Your chances to win the big prize with {:,} different tickets are {:.7f}%. In other words, you have a 1 in {:,} chances to win.'''.format(n_tickets, probability, combinations_simplified)) testing = [1, 10, 100, 10000, 1000000, 6991908, 13983816] for n in testing: multi_ticket_probability(n) print('------------------------') # output delimiter ###Output Your chances to win the big prize with one ticket are 0.0000072%. In other words, you have a 1 in 13,983,816 chances to win. ------------------------ Your chances to win the big prize with 10 different tickets are 0.0000715%. In other words, you have a 1 in 1,398,382 chances to win. ------------------------ Your chances to win the big prize with 100 different tickets are 0.0007151%. In other words, you have a 1 in 139,838 chances to win. ------------------------ Your chances to win the big prize with 10,000 different tickets are 0.0715112%. In other words, you have a 1 in 1,398 chances to win. ------------------------ Your chances to win the big prize with 1,000,000 different tickets are 7.1511238%. In other words, you have a 1 in 14 chances to win. ------------------------ Your chances to win the big prize with 6,991,908 different tickets are 50.0000000%. In other words, you have a 1 in 2 chances to win. 
------------------------ Your chances to win the big prize with 13,983,816 different tickets are 100.0000000%. In other words, you have a 1 in 1 chances to win. ------------------------ ###Markdown Less Winning NumbersWe're going to write one more function to allow the users to calculate probabilities for two, three, four, or five winning numbers.For extra context, in most 6/49 lotteries there are smaller prizes if a player's ticket match two, three, four, or five of the six numbers drawn. As a consequence, the users might be interested in knowing the probability of having two, three, four, or five winning numbers.These are the engineering details we'll need to be aware of:- Inside the app, the user inputs: - six different numbers from 1 to 49; and - an integer between 2 and 5 that represents the number of winning numbers expected- Our function prints information about the probability of having the inputted number of winning numbers. ###Code def probability_less_6(n_winning_numbers): n_combinations_ticket = combinations(6, n_winning_numbers) n_combinations_remaining = combinations(43, 6 - n_winning_numbers) successful_outcomes = n_combinations_ticket * n_combinations_remaining n_combinations_total = combinations(49, 6) probability = successful_outcomes / n_combinations_total probability_percentage = probability * 100 combinations_simplified = round(n_combinations_total/successful_outcomes) print('''Your chances of having {} winning numbers with this ticket are {:.6f}%. In other words, you have a 1 in {:,} chances to win.'''.format(n_winning_numbers, probability_percentage, int(combinations_simplified))) testing_2 = [2, 3, 4, 5] for n in testing_2: probability_less_6(n) print('------------------------') # output delimiter ###Output Your chances of having 2 winning numbers with this ticket are 13.237803%. In other words, you have a 1 in 8 chances to win. ------------------------ Your chances of having 3 winning numbers with this ticket are 1.765040%. In other words, you have a 1 in 57 chances to win. ------------------------ Your chances of having 4 winning numbers with this ticket are 0.096862%. In other words, you have a 1 in 1,032 chances to win. ------------------------ Your chances of having 5 winning numbers with this ticket are 0.001845%. In other words, you have a 1 in 54,201 chances to win. ------------------------
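###Markdown As a cross-check of the `combinations` helper, Python 3.8+ ships `math.comb`, which also makes an "at least k winning numbers" variant easy to express; this is an illustrative extension, not part of the app requirements. ###Code
from math import comb

# math.comb should agree with the combinations() helper defined above.
assert comb(49, 6) == 13983816  # total number of 6/49 tickets, as used throughout

def probability_at_least(k):
    """P(at least k of the 6 drawn numbers appear on one ticket), 2 <= k <= 5."""
    total = comb(49, 6)
    successful = sum(comb(6, i) * comb(43, 6 - i) for i in range(k, 7))
    return successful / total

for k in [2, 3, 4, 5]:
    print("P(at least {} winning numbers) = {:.6%}".format(k, probability_at_least(k)))
###Output _____no_output_____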
37-warmup-blank_semantics_sentiment_and_named_entities.ipynb
###Markdown What are the most positive and the most negative sentences in the earnings calls? Incorporate subjectivity and select objective sentiments (low subjectivity). ###Code import pandas as pd data = pd.read_csv('data/EC10.csv') ###Output _____no_output_____
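###Markdown One possible approach sketch using TextBlob sentence-level sentiment; the transcript column name `text` is an assumption and should be replaced with the actual column in EC10.csv. ###Code
# Sketch only: scores each sentence, keeps low-subjectivity ones, then ranks by polarity.
# TextBlob may require a one-time `python -m textblob.download_corpora`.
import pandas as pd
from textblob import TextBlob

data = pd.read_csv("data/EC10.csv")

scored = []
for doc in data["text"].astype(str):          # "text" is an assumed column name
    for sent in TextBlob(doc).sentences:
        scored.append({
            "sentence": str(sent),
            "polarity": sent.sentiment.polarity,
            "subjectivity": sent.sentiment.subjectivity,
        })

scores = pd.DataFrame(scored)

# Keep relatively objective sentences (low subjectivity) before ranking.
objective = scores[scores["subjectivity"] <= 0.3]

print("Most positive:\n", objective.sort_values("polarity").iloc[-1])
print("Most negative:\n", objective.sort_values("polarity").iloc[0])
###Output _____no_output_____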
demo/distillation/image_classification_distillation_tutorial.ipynb
###Markdown PaddleSlim Distillation知识蒸馏简介与实验一般情况下,模型参数量越多,结构越复杂,其性能越好,但参数也越冗余,运算量和资源消耗也越大。**知识蒸馏**就是一种将大模型学习到的有用信息(Dark Knowledge)压缩进更小更快的模型,而获得可以匹敌大模型结果的方法。在本文中性能强劲的大模型被称为teacher, 性能稍逊但体积较小的模型被称为student。示例包含以下步骤:1. 导入依赖2. 定义student_program和teacher_program3. 选择特征图4. 合并program (merge)并添加蒸馏loss5. 模型训练 1. 导入依赖PaddleSlim依赖Paddle1.7版本,请确认已正确安装Paddle,然后按以下方式导入Paddle、PaddleSlim以及其他依赖: ###Code import paddle import paddle.fluid as fluid import paddleslim as slim import sys sys.path.append("../") import models ###Output _____no_output_____ ###Markdown 2. 定义student_program和teacher_program本教程在MNIST数据集上进行知识蒸馏的训练和验证,输入图片尺寸为`[1, 28, 28]`,输出类别数为10。选择`ResNet50`作为teacher对`MobileNet`结构的student进行蒸馏训练。 ###Code model = models.__dict__['MobileNet']() student_program = fluid.Program() student_startup = fluid.Program() with fluid.program_guard(student_program, student_startup): image = fluid.data( name='image', shape=[None] + [1, 28, 28], dtype='float32') label = fluid.data(name='label', shape=[None, 1], dtype='int64') out = model.net(input=image, class_dim=10) cost = fluid.layers.cross_entropy(input=out, label=label) avg_cost = fluid.layers.mean(x=cost) acc_top1 = fluid.layers.accuracy(input=out, label=label, k=1) acc_top5 = fluid.layers.accuracy(input=out, label=label, k=5) teacher_model = models.__dict__['ResNet50']() teacher_program = fluid.Program() teacher_startup = fluid.Program() with fluid.program_guard(teacher_program, teacher_startup): with fluid.unique_name.guard(): image = fluid.data( name='image', shape=[None] + [1, 28, 28], dtype='float32') predict = teacher_model.net(image, class_dim=10) exe = fluid.Executor(fluid.CPUPlace()) exe.run(teacher_startup) ###Output _____no_output_____ ###Markdown 3. 选择特征图我们可以用student_的list_vars方法来观察其中全部的Variables,从中选出一个或多个变量(Variable)来拟合teacher相应的变量。 ###Code # get all student variables student_vars = [] for v in student_program.list_vars(): student_vars.append((v.name, v.shape)) #uncomment the following lines to observe student's variables for distillation #print("="*50+"student_model_vars"+"="*50) #print(student_vars) # get all teacher variables teacher_vars = [] for v in teacher_program.list_vars(): teacher_vars.append((v.name, v.shape)) #uncomment the following lines to observe teacher's variables for distillation #print("="*50+"teacher_model_vars"+"="*50) #print(teacher_vars) ###Output _____no_output_____ ###Markdown 经过筛选我们可以看到,teacher_program中的'bn5c_branch2b.output.1.tmp_3'和student_program的'depthwise_conv2d_11.tmp_0'尺寸一致,可以组成蒸馏损失函数。 4. 合并program (merge)并添加蒸馏lossmerge操作将student_program和teacher_program中的所有Variables和Op都将被添加到同一个Program中,同时为了避免两个program中有同名变量会引起命名冲突,merge也会为teacher_program中的Variables添加一个同一的命名前缀name_prefix,其默认值是'teacher_'为了确保teacher网络和student网络输入的数据是一样的,merge操作也会对两个program的输入数据层进行合并操作,所以需要指定一个数据层名称的映射关系data_name_map,key是teacher的输入数据名称,value是student的 ###Code data_name_map = {'image': 'image'} main = slim.dist.merge(teacher_program, student_program, data_name_map, fluid.CPUPlace()) with fluid.program_guard(student_program, student_startup): l2_loss = slim.dist.l2_loss('teacher_bn5c_branch2b.output.1.tmp_3', 'depthwise_conv2d_11.tmp_0', student_program) loss = l2_loss + avg_cost opt = fluid.optimizer.Momentum(0.01, 0.9) opt.minimize(loss) exe.run(student_startup) ###Output _____no_output_____ ###Markdown 5. 
模型训练为了快速执行该示例,我们选取简单的MNIST数据,Paddle框架的`paddle.dataset.mnist`包定义了MNIST数据的下载和读取。代码如下: ###Code train_reader = paddle.fluid.io.batch( paddle.dataset.mnist.train(), batch_size=128, drop_last=True) train_feeder = fluid.DataFeeder(['image', 'label'], fluid.CPUPlace(), student_program) for data in train_reader(): acc1, acc5, loss_np = exe.run(student_program, feed=train_feeder.feed(data), fetch_list=[acc_top1.name, acc_top5.name, loss.name]) print("Acc1: {:.6f}, Acc5: {:.6f}, Loss: {:.6f}".format(acc1.mean(), acc5.mean(), loss_np.mean())) ###Output _____no_output_____ ###Markdown PaddleSlim Distillation知识蒸馏简介与实验一般情况下,模型参数量越多,结构越复杂,其性能越好,但参数也越冗余,运算量和资源消耗也越大。**知识蒸馏**就是一种将大模型学习到的有用信息(Dark Knowledge)压缩进更小更快的模型,而获得可以匹敌大模型结果的方法。在本文中性能强劲的大模型被称为teacher, 性能稍逊但体积较小的模型被称为student。示例包含以下步骤:1. 导入依赖2. 定义student_program和teacher_program3. 选择特征图4. 合并program (merge)并添加蒸馏loss5. 模型训练 1. 导入依赖PaddleSlim依赖Paddle1.7版本,请确认已正确安装Paddle,然后按以下方式导入Paddle、PaddleSlim以及其他依赖: ###Code import paddle import paddle.fluid as fluid import paddleslim as slim import sys sys.path.append("../") import models ###Output _____no_output_____ ###Markdown 2. 定义student_program和teacher_program本教程在MNIST数据集上进行知识蒸馏的训练和验证,输入图片尺寸为`[1, 28, 28]`,输出类别数为10。选择`ResNet50`作为teacher对`MobileNet`结构的student进行蒸馏训练。 ###Code model = models.__dict__['MobileNet']() student_program = fluid.Program() student_startup = fluid.Program() with fluid.program_guard(student_program, student_startup): image = fluid.data( name='image', shape=[None] + [1, 28, 28], dtype='float32') label = fluid.data(name='label', shape=[None, 1], dtype='int64') out = model.net(input=image, class_dim=10) cost = fluid.layers.cross_entropy(input=out, label=label) avg_cost = fluid.layers.mean(x=cost) acc_top1 = fluid.layers.accuracy(input=out, label=label, k=1) acc_top5 = fluid.layers.accuracy(input=out, label=label, k=5) teacher_model = models.__dict__['ResNet50']() teacher_program = fluid.Program() teacher_startup = fluid.Program() with fluid.program_guard(teacher_program, teacher_startup): with fluid.unique_name.guard(): image = fluid.data( name='image', shape=[None] + [1, 28, 28], dtype='float32') predict = teacher_model.net(image, class_dim=10) exe = fluid.Executor(fluid.CPUPlace()) exe.run(teacher_startup) ###Output _____no_output_____ ###Markdown 3. 选择特征图我们可以用student_的list_vars方法来观察其中全部的Variables,从中选出一个或多个变量(Variable)来拟合teacher相应的变量。 ###Code # get all student variables student_vars = [] for v in student_program.list_vars(): student_vars.append((v.name, v.shape)) #uncomment the following lines to observe student's variables for distillation #print("="*50+"student_model_vars"+"="*50) #print(student_vars) # get all teacher variables teacher_vars = [] for v in teacher_program.list_vars(): teacher_vars.append((v.name, v.shape)) #uncomment the following lines to observe teacher's variables for distillation #print("="*50+"teacher_model_vars"+"="*50) #print(teacher_vars) ###Output _____no_output_____ ###Markdown 经过筛选我们可以看到,teacher_program中的'bn5c_branch2b.output.1.tmp_3'和student_program的'depthwise_conv2d_11.tmp_0'尺寸一致,可以组成蒸馏损失函数。 4. 
合并program (merge)并添加蒸馏lossmerge操作将student_program和teacher_program中的所有Variables和Op都将被添加到同一个Program中,同时为了避免两个program中有同名变量会引起命名冲突,merge也会为teacher_program中的Variables添加一个同一的命名前缀name_prefix,其默认值是'teacher_'为了确保teacher网络和student网络输入的数据是一样的,merge操作也会对两个program的输入数据层进行合并操作,所以需要指定一个数据层名称的映射关系data_name_map,key是teacher的输入数据名称,value是student的 ###Code data_name_map = {'image': 'image'} main = slim.dist.merge(teacher_program, student_program, data_name_map, fluid.CPUPlace()) with fluid.program_guard(student_program, student_startup): l2_loss = slim.dist.l2_loss('teacher_bn5c_branch2b.output.1.tmp_3', 'depthwise_conv2d_11.tmp_0', student_program) loss = l2_loss + avg_cost opt = fluid.optimizer.Momentum(0.01, 0.9) opt.minimize(loss) exe.run(student_startup) ###Output _____no_output_____ ###Markdown 5. 模型训练为了快速执行该示例,我们选取简单的MNIST数据,Paddle框架的`paddle.dataset.mnist`包定义了MNIST数据的下载和读取。代码如下: ###Code train_reader = paddle.batch( paddle.dataset.mnist.train(), batch_size=128, drop_last=True) train_feeder = fluid.DataFeeder(['image', 'label'], fluid.CPUPlace(), student_program) for data in train_reader(): acc1, acc5, loss_np = exe.run(student_program, feed=train_feeder.feed(data), fetch_list=[acc_top1.name, acc_top5.name, loss.name]) print("Acc1: {:.6f}, Acc5: {:.6f}, Loss: {:.6f}".format(acc1.mean(), acc5.mean(), loss_np.mean())) ###Output _____no_output_____
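###Markdown A framework-agnostic NumPy illustration of the combined objective used above (an L2 feature-distillation term plus the ordinary task loss); shapes and values are made up for the example. ###Code
import numpy as np

rng = np.random.default_rng(0)
teacher_feat = rng.normal(size=(8, 512, 7, 7))   # stand-in for a teacher feature map
student_feat = rng.normal(size=(8, 512, 7, 7))   # matching student feature map

# L2 term pulls the student feature map towards the teacher's.
l2_distill = np.mean((teacher_feat - student_feat) ** 2)

# Toy cross-entropy for a 10-class problem.
logits = rng.normal(size=(8, 10))
labels = rng.integers(0, 10, size=8)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
task_loss = -np.mean(np.log(probs[np.arange(8), labels]))

total_loss = l2_distill + task_loss   # mirrors `loss = l2_loss + avg_cost` above
print(l2_distill, task_loss, total_loss)
###Output _____no_output_____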
intro-to-python-ml/week1/intro-to-python.ipynb
###Markdown Introduction to Python by Pradip Gupta ([email protected]) Data types in python 1. int2. float3. string 4. list5. tuple6. boolean7. dictionary8. set 1. Int ###Code a = 10 print(a) print(A) #everything is case-sensitive # Any text followed by '#' is not conisdered python code. These are called comments # shortcut for commenting/un-commenting in Jupyter is : select the line(s) which you want to comment and then use Ctrl+/ # short cut for executing your code is shift+enter type(a) ###Output _____no_output_____ ###Markdown 2. Float ###Code z = 10. z type(z) ###Output _____no_output_____ ###Markdown 3. String - Different operations that can be done on strings like split, join, index, count, find, upper,lower,len,min,max. ###Code b = '2' # or b = "2" (single or double quotes do not make a difference in python) b ###Output _____no_output_____ ###Markdown Some operations are not allowed on different types: ###Code a+b ###Output _____no_output_____ ###Markdown But some of them are allowed: ###Code a*b b = "hello" a*b ###Output _____no_output_____ ###Markdown String variables can be combined: ###Code c = 'hyderabad' c b+c ###Output _____no_output_____ ###Markdown In order to include variable of another type in to string you have to convert it: ###Code str(a)+c len(c) Finally = 2 ###Output _____no_output_____ ###Markdown Variable NamingYou can pretty much chose any name as a variable name, however there are few rules that you'd need to follow* Variable names can not have spaces in between.* They can not contain any special character except underscore [i.e. _ ]* Variable names can not start with a number , however you can have numbers anywhere else in the variable nameIn addition to these rules there are few good practices when you are naming your variables. Google python style guide recommends following way of naming variables based on what type of variables they are.module_name, package_name, ClassName, method_name, ExceptionName, function_name, GLOBAL_CONSTANT_NAME, global_var_name, instance_var_name, function_parameter_name, local_var_name*Note: Its not mandatory to use underscore with long variable names , just that it makes your variable name easily readable.*Lets see an example below , you decide yourself which variable name is more readable. **Reserved Words:**['False', 'None', 'True', 'and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'nonlocal', 'not', 'or', 'pass', 'raise', 'return', 'try', 'while', 'with', 'yield'] Everything is an object In IPython you can get the list of object's methods and attributes by typing dot and pressing TAB: ###Code c= "hydeRabad" ###Output _____no_output_____ ###Markdown Methods are basically default functions that can be applied to our variable: ###Code c.upper() c.title() c.count('a') c.find('d') ###Output _____no_output_____ ###Markdown If you need help on method in IPython type something like: ###Code c.find? ###Output _____no_output_____ ###Markdown Or open bracket and press SHIFT+TAB: ###Code c.find( ###Output _____no_output_____ ###Markdown Int variable is also an object: ###Code a = 2 a.bit_length() ###Output _____no_output_____ ###Markdown Methods can be combined (kind of a pipeline) ###Code c.title().count('a').bit_length() ###Output _____no_output_____ ###Markdown Some basic numeric Operations for simple algebaric operations , usual symbols work. Below are some examples. 
###Code x=1.22 y=20 x+y x-y x*y x/y x//y x**y ###Output _____no_output_____ ###Markdown you need to use double asterisks for exponents as shown below ###Code x=2 y=3 x**y ###Output _____no_output_____ ###Markdown you can do multiple operations separated by comma ###Code x+2,y+3,x+y,x**(x+y) ###Output _____no_output_____ ###Markdown Subsetting a string with indicesFor extracting a part of the string, we don’t need a special function. We can simple indexing using square brackets “[]”. While using this indexing , we need to keep few things in mind* counting starts with 0. first character gets counted as zero* [a:b] , means starting at ath position and extracting until (b-1)th position * [a:] , means starting at ath position till end* [:a], means starting at the beginning until (a-1)th positionnegative sign with indices mean counting from right instead of natural left.* [:-a], means starting at the beginning and going up til ath position from end. counting from end also starts with 0.* [-a:], means starting at (a-1)th character from the end and going up till last character.* [a:-b], means starting at ath character at the beginning and going till bth position from end* [-a:-b], means starting at (a+1)th character from the end and going up till bth position from endLets see these in action ###Code z = "GOA to Leh" print(z) z[-8:5] z[3:] z[:3] z[:-3] #Inverse-indexing ###Output _____no_output_____ ###Markdown * [:-a], means starting at the beginning and going up til ath position from end. counting from end also starts with 0.* [-a:], means starting at (a-1)th character from the end and going up till last character. ###Code z[-6:-2] ###Output _____no_output_____ ###Markdown Other useful functions that we will explore for string operations : count , find & split.functions that i want you to explore on your own further are family of function for checking whether a particular piece of string belongs to a certain type or not. Here are those functions :```Ex: Find out what following functions do : isalnum,isalpha,isdigit,islower,isupper. Discuss this on Q&A forum , if you face any issue in figuring out what these functions do.``` ###Code print(z) len(z) z.lower().count("o") z.find("a") path = "path/to/dir" path.split("/") path.endswith(".") ###Output _____no_output_____ ###Markdown 4. Lists - List is mutable (can be changed) - List uses square bracket - For modifying a list, we can use functions like: append, extend, insert, pop, remove and clear. - Other possible functions are: sorted, sort, min, max, len. In order to create list put coma separated values in square brackets: ###Code l = [1,2,3,4,5] l ###Output _____no_output_____ ###Markdown Values in list can be any type: ###Code l = ['one', 'two', 'three', 'four', 'five'] l ###Output _____no_output_____ ###Markdown Combined ###Code l = ['one', 2, 'three', 4.0, 3+2] l ###Output _____no_output_____ ###Markdown Any type means ANY type: ###Code l = ['one', 2, 'three', [1,2,3,4,5], 3+2] l ###Output _____no_output_____ ###Markdown You can access list values by index: ###Code l[0] ###Output _____no_output_____ ###Markdown Oh, yes, indexing starts with zero, so for Matlab/R users please note. 
###Code l[1] ###Output _____no_output_____ ###Markdown Let's have a look at the 4th element of our list: ###Code l[3] ###Output _____no_output_____ ###Markdown It's also a list, and its values can be accessed by indexes as well: ###Code l[3][4] ###Output _____no_output_____ ###Markdown You also can acces multiple elements of the list using slices: ###Code l[1:3] ###Output _____no_output_____ ###Markdown Slice will start with the first slice index and go up to but not including the second slice index. ###Code l[3] x = [1,2,3,70,-10,0,99,"a","b","c"] x.append("new") x.append([1,2,3]) x.extend([1,2,3]) x.insert(2,"another") x=x+[3,4,5] x.pop() x.pop(3) x.remove("another") x = [1,5,3,66,77,22] x.sort() x x.reverse() x ###Output _____no_output_____ ###Markdown 5. Boolean ###Code x=True y=False type(x) type(y) x and y not x ###Output _____no_output_____ ###Markdown Writing Conditions* checking equality : "=="* in-equality : "!="* greater than : ">"* less than : "<"* greater than or equal to : ">="* less than or equal to : "<=" ###Code x=34 y=32 x==y x>y x="possible" y="impossible" x in y x not in y # Let's see the conditionals available v1 = "Jennifer" v2 = "Python" v3 = 45 v4 = 67 v5 = 45 # Test for equality print (v1 == v2) # Test for greater than and greater than equal print (v4 > v3) # Test for lesser than and lesser than equal print (v4 < v3) # Inequality print (v1 != v2) # Note: v1 = 45 v2 = "45" print (v1 == v2) # False print (v1 == int(v2)) # True # Ignore case when comparing two strings s1 = "Jennifer" s2 = "jennifer" print (s1 == s2) # False print (s1.lower() == s2.lower()) # True # OR print (s1.upper() == s2.upper()) # True ###Output False True True ###Markdown 6. Dictionaries - Dict is represented by {} - The dict class allows creating an associative array of keys and values. Keys must be unique immutable objects - Dict is not an ordered set - We can append or modify a dictonary ###Code d= {"actor":"nasir", "animal":"dog", "earth":1, "list":[1,2,3]} d.keys() d.values() #Change value of existing key d["animal"] = "cat" d #Add key t dict d["pet"] = "dog" d # removing a key value pair del d['actor'] print(d) # element access in loop for elem in d: print('value for key:%s ' %(elem) ,'is :%s' %(d[elem])) for ele in d.items(): print(ele) print("Key {} Value {}".format(ele[0],ele[1])) print("my name is pradip") name = "pradip" title = "gupta" print("my name is",name) print("my name is {} bbb {} mmm".format(name, title)) len(d.items()) d.items() for a,b in d.items(): print('value for key:%s is %s' %(a,b)) ###Output _____no_output_____ ###Markdown 7. Sets - Set class provides mapping of unique immutable elements ###Code animals = {'cat', 'dog','cat'} animals animals.add("horse") animals animals. a={1,2,3,4,5,6} b={4,5,6,7,8,9} c=a.union(b) b a.intersection(b) ###Output _____no_output_____ ###Markdown Difference between function symmetric_difference and difference is as follows:* s.difference(t) : new set with elements in s but not in t* s.symmetric_difference(t) : new set with elements in either s or t but not bothlets see with our own examples : ###Code a.difference(b) b.difference(a) a.symmetric_difference(b) ###Output _____no_output_____ ###Markdown 8. Tuples - Tuple is like a list but it is immutable (cannot be changed) - Tuple uses parentheses ###Code t = 12345, 54321, 'hello!' t t =( 12345, 54321, 'hello!') # same thing t u = t, (1, 2, 3, 4, 5) u t[0]=21 # you can not reassign len(t) t[-1] ###Output _____no_output_____ ###Markdown Why did changing ‘y’ also change ‘x’? 
###Code x = [1,2,3] y = x.copy() y.append(10) y x ###Output _____no_output_____ ###Markdown 1. Variables are simply names that refer to objects. Doing y = x doesn’t create a copy of the list – it creates a new variable y that refers to the same object x refers to. This means that there is only one object (the list), and both x and y refer to it.2. Lists are mutable, which means that you can change their content. ###Code x = 5 # ints are immutable y = x x = x + 1 # 5 can't be mutated, we are creating a new object here y x ###Output _____no_output_____ ###Markdown List Comprehensions with For Loops ###Code x=range(8) ###Output _____no_output_____ ###Markdown From this we want to exract remainder for each value of x when that gets divided by 2.operator `%` calculates remainder. a%2 will caclulate remiander when a is dvivided by 2. ###Code 10%3 x=range(8) x_mod_2=[] for a in x: x_mod_2.append(a%2) x_mod_2 x_mod_2=[a%2 for a in x] x_mod_2 ###Output _____no_output_____ ###Markdown Control Structures For loop: This loop will print all elements from the list *l* ###Code l = ['one', 2, 'three', [1,2,3,4,5], 3+2] for ele in l: ele=2*ele print(ele) ###Output oneone 4 threethree [1, 2, 3, 4, 5, 1, 2, 3, 4, 5] 10 ###Markdown Two interesting thins here. First: indentation, it's in the code, you must use it, otherwise code will not work: ###Code for element in l: print(element) for i,ele in enumerate(l): print("value at index {} is {}".format(i,ele)) ###Output value at index 0 is one value at index 1 is 2 value at index 2 is three value at index 3 is [1, 2, 3, 4, 5] value at index 4 is 5 ###Markdown Second - you can iterate through the elements of the list. There is an option to iterate through a bunch of numbers as we used to in Matlab: ###Code for index in range(5): print(l[index]) ###Output _____no_output_____ ###Markdown where *range* is just generating a sequence of numbers: ###Code start = 5 end =20 gap = 5 list(range(start,end,gap)) ###Output _____no_output_____ ###Markdown IF-Else The if-statement tells your script, "If this boolean expression is True, then run the code under it, otherwise skip it.” ###Code marks = 80 if marks >=90: grade = "A" elif marks < 90 and marks >=70: grade = "B" elif marks >40: grade = "C" else: grade = "Fail" grade if expr1: # # # elif expr2: # # # # else: # # a = 5 b = 4 if a > b: print ("a is greater than b"); if (a > b): print ("a is greater than b") print ("blocks are defined by indentation") elif (a < b): print ("a is less than b") else: print ("a is equal to b") ###Output _____no_output_____ ###Markdown Continue and Break ###Code for z in range(10): if z == 5: break print(z) ###Output 0 1 2 3 4 ###Markdown While ###Code a = 10 b = 1 while a > b: b = b + 1 a = a - b print(a, b) a = 0 while True: a = a +1 if a == 5: break print(a) # Filing a dictionary d1 = {} while 1: key = input("Enter a key: ") value = input("Enter a value: ") d1[key] = value; if input("exit? ") == "yes": break; print (d1) ###Output Enter a key: k1 Enter a value: 2 exit? no Enter a key: k2 Enter a value: 3 exit? YES Enter a key: k3 Enter a value: 4 exit? yes {'k1': '2', 'k2': '3', 'k3': '4'} ###Markdown Modules Pure python does not do much. To do some specific tasks you need to import modules. Here I am going to demonstrate several ways to do so. The most common one is to import complete library. In this example we import *urllib2* - a library for opening URLs using a variety of protocols. 
###Code import math import numpy as np ###Output _____no_output_____ ###Markdown Writing your own functions in python - The keyword **def** introduces a function definition. It must be followed by the **function_name** and the **parenthesized** list of formal parameters. - The statements that form the body of the function start at the next line, and must be indented. ###Code def my_func(start,end,gap=1): ''' function to add 3 numbers params: x: type int y: type int z: type int return: a: type int ''' a = list(range(start,end,gap)) return a help(my_func) result = my_func(5,20,5) result result = my_func(end=20,start=5) result def my_func2(x,y): if x>y: return y print(x) print(y) return x def add(new_func, x,y): c = new_func(x,y) #5 return c*x*y #5*5*3 def new_func(x,y): x = x-1 #4 y = y-2 #1 return x+y #5 add(new_func,5,3) my_func2(3,5) # writing a function in python def my_func(x): if x%2!=0: return(math.log(x)) else: return(x) my_func(19) x x_ls3=[my_func(a) for a in x] x_ls3 def mysum(x=10,y=10): return(2*x+3*y) mysum(2,3) ###Output _____no_output_____ ###Markdown Writing your own Classes: - Class are a collection of similar kind of variables and functions. - The primary difference between a class and a function is that, Class can have objects. - In the definition of the class Point “object” means that this class has no inheritance ###Code class Circle(object): def __init__(self, radius=5): self.r = radius def area(self): self.a = self.r**2*3.14 return self.a def perimeter(self): self.p = 2*3.14*self.r return self.p def sumr(self): return self.a+self.p c = Circle(6) c.area(), c.perimeter() c.sumr() # we start with keyword class and then give our class a name class Point(object): def __init__(self, x, y): '''Defines x and y variables''' self.X = x self.Y = y # self is used as reference to internal objects that get # created using values supplied from outside # default function _init_ is used to create internal objects # which can be accessed by other functions in the class def length(self): return(math.sqrt(self.X**2+self.Y**2)) # all function inside a class will have access to internal objects # created inside the class # you can understand self to be default Point class object def distance(self, other): dx = self.X - other.X dy = self.Y - other.Y return(math.sqrt(dx**2 + dy**2)) # a function inside the class can take input multiple objects # of the same class z=Point(2,3) y=Point(4,10) print(Point.distance(y,z)) print(Point.length(y)) print(Point.length(z)) ###Output _____no_output_____ ###Markdown Lambda, filter and map Lambda - The lambda operator or lambda function is a way to create small anonymous functions, i.e. functions without a name. - These functions are **throw-away functions**, i.e. they are just needed where they have been created. - Lambda functions are mainly used in combination with the functions `filter()` and `map()` The general syntax of a lambda function is quite simple: **lambda argument_list: expression** ###Code sum = lambda x, y : x + y sum(3,4) ###Output _____no_output_____ ###Markdown Map `map()` is a function which takes two arguments: `r = map(func, seq)` - The **first argument** func is the **name of a function** and the **second argument** a **sequence** (e.g. a list) seq. - `map()` applies the function func to all the elements of the sequence seq. - `map()` **returns an iterator**.The true power of lambda can be seen when used in conjugation with map() function. 
###Code C = [39.2, 36.5, 37.3, 38, 37.8] F = list(map(lambda x: (float(9)/5)*x + 32, C)) F C = list(map(lambda x: (float(5)/9)*(x-32), F)) C ###Output _____no_output_____ ###Markdown Filter The filter function also takes in two arguments a function and a sequence. `filter(function, sequence)`It offers an elegant way to filter out all the elements of a `sequence` "sequence", for which the function `func` returns True. i.e. an item will be produced by the iterator result of filter(function, sequence) if item is included in the sequence "sequence" and if function(item) returns True. ###Code list(filter(lambda x: x % 2==0, range(0,100))) ###Output _____no_output_____ ###Markdown Exercises ###Code # Write a function to find all the even number between 1 to n. Take n as input from user. # Convert Celsius to Fahrenheit # Formula: ((float(9)/5)*x + 32) Celsius = [39.2, 36.5, 37.3, 37.8] for num in range(100): if(num % 2 == 0): print (num) Fahrenheit = [((float(9)/5)*x + 32) for x in Celsius] print (Fahrenheit) ###Output _____no_output_____
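###Markdown One possible solution sketch for the first exercise (printing the even numbers between 1 and n, with n taken from the user): ###Code
# Read n from the user and list every even number between 1 and n.
def even_numbers_up_to(n):
    return [num for num in range(2, n + 1, 2)]

n = int(input("Enter n: "))
print(even_numbers_up_to(n))
###Output _____no_output_____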
jupyter-notebooks/leetcode & code fighht problems/.ipynb_checkpoints/Count Cloud (work in progress)-checkpoint.ipynb
###Markdown Cout CloudsGiven a 2D grid skyMap composed of '1's (clouds) and '0's (clear sky), count the number of clouds. A cloud is surrounded by clear sky, and is formed by connecting adjacent clouds horizontally or vertically. You can assume that all four edges of the skyMap are surrounded by clear sky.ExampleFor```skyMap = [['0', '1', '1', '0', '1'], ['0', '1', '1', '1', '1'], ['0', '0', '0', '0', '1'], ['1', '0', '0', '1', '1']]```the output should becountClouds(skyMap) = 2;For```skyMap = [['0', '1', '0', '0', '1'], ['1', '1', '0', '0', '0'], ['0', '0', '1', '0', '1'], ['0', '0', '1', '1', '0'], ['1', '0', '1', '1', '0']]```the output should becountClouds(skyMap) = 5.Input/Output```[time limit] 4000ms (py3)[input] array.array.char skyMap```A 2D grid that represents a map of the sky, as described above.Guaranteed constraints:```0 ≤ skyMap.length ≤ 300,0 ≤ skyMap[i].length ≤ 300.```[output] integerThe number of clouds in the given skyMap, as described above. ###Code skyMap = [['0', '1', '1', '0', '1'], ['0', '1', '1', '1', '1'], ['0', '0', '0', '0', '1'], ['1', '0', '0', '1', '1']] def get_left(skyMap, i, j): i = i - 1 if i < 0: return False return skyMap[j][i] def get_right(skyMap, i, j): i = i + 1 if i > len(skyMap[0]) - 1: return False return skyMap[j][i] def get_up(skyMap, i, j): j = j - 1 if j < 0: return False return skyMap[j][i] def get_down(skyMap, i, j): j = j + 1 if j > len(skyMap)-1: return False return skyMap[j][i] def countClouds(skyMap): pass # print(get_up(skyMap, 0, 2)) ###Output _____no_output_____
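###Markdown A possible completion sketch for `countClouds` using an iterative flood fill; it works directly on the character grid and does not rely on the partially written neighbour helpers above. ###Code
def countClouds(skyMap):
    # Count connected components of '1' cells (4-directional adjacency).
    if not skyMap or not skyMap[0]:
        return 0
    rows, cols = len(skyMap), len(skyMap[0])
    seen = [[False] * cols for _ in range(rows)]
    clouds = 0
    for r in range(rows):
        for c in range(cols):
            if skyMap[r][c] == '1' and not seen[r][c]:
                clouds += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and skyMap[ny][nx] == '1' and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return clouds

skyMap = [['0', '1', '1', '0', '1'],
          ['0', '1', '1', '1', '1'],
          ['0', '0', '0', '0', '1'],
          ['1', '0', '0', '1', '1']]
print(countClouds(skyMap))  # expected 2
###Output _____no_output_____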
MachineLearning/numpy/numpy-statistical.ipynb
###Markdown some statistical function - Statistics- min,max- mean- median- average- variance- standard deviationM ###Code import numpy as np a=np.array([[1,2,3,4],[7,6,2,0]]) print(a) # min - max row wise print(np.min(a)) print(np.min(a,axis=0)) print(np.min(a,axis=1)) # mean -average b=np.array([1,2,3,4,5]) print(sum(b)/5) print(np.mean(b)) print(np.mean(a)) print(np.mean(a,axis=0)) print(np.mean(a,axis=1)) # median c=np.array([1,5,4,2,0]) print(np.median(c)) # mean vs average is weighted w=np.array([1,2,3,4,5]) print(np.mean(w)) print(np.average(w,weights=w)) # standard deviation in python u=np.mean(c) mystd=np.sqrt(np.mean(abs(c-u)**2)) print(mystd) print(np.std(c)) # variance -square of standard deviation print(mystd**2) print(np.var(c)) ###Output 1.854723699099141 1.854723699099141 3.440000000000001 3.4400000000000004
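###Markdown A quick verification sketch of the relationships used above: the weighted average equals sum(w*x)/sum(w), and the variance is the square of the standard deviation. ###Code
import numpy as np

w = np.array([1, 2, 3, 4, 5])
manual_weighted = np.sum(w * w) / np.sum(w)       # weights equal to the values here
print(manual_weighted, np.average(w, weights=w))  # both 3.666...

c = np.array([1, 5, 4, 2, 0])
print(np.isclose(np.std(c) ** 2, np.var(c)))      # True
###Output _____no_output_____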
Notebooks/MNIST/AutoEncoder/Autoencoders_Keras.ipynb
###Markdown Files & Data (Google Colab)If you're running this notebook on Google colab, you do not have access to the `solutions` folder you get by cloning the repository locally. The following lines will allow you to build the folders and the files you need for this TP.**WARNING 1** Do not run this line localy.**WARNING 2** The magic command `%load` does not work work on google colab, you will have to copy-paste the solution on the notebook. ###Code ! mkdir image ! wget . https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/vae_mlp_decoder.png ! wget . https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/vae_mlp_vae.png ! wget image https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/image/vae_2.svg ! wget image https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/image/vae_3.svg ! mkdir solutions ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/compare_sparsity_decoded_imgs.py ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/compare_sparsity_encoded_imgs.py ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/convolutional_autoencoder.py ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/decoded_images_both_method.py ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/decoder_vae.py ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/generate_single_sample.py ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/simple_autoencoder.py ! wget solutions https://github.com/wikistat/High-Dimensional-Deep-Learning/raw/master/AutoEncoder/solutions/train_denoise_model.py ###Output _____no_output_____ ###Markdown High Dimensional & Deep Learning : Autoencoders What is an Autoencoder ? Autoencoder architecture Objective During this TP we will build different autoencoders with Keras and Tensorflow. Here are the main objectives :* Build a autoencoder based on simple perceptron layer.* Add regularization on layer and understand its effects.* Build a convolutional autoencoder.* Use a convolutional autoencoder to solve denoising problem.* Manipulate the library in order to get and observe the result at different point of the dataflow.The dataset used all along with TP is the MNIST dataset. Library ###Code from tensorflow.keras.datasets import mnist import tensorflow.keras.preprocessing.image as kpi import tensorflow.keras.models as km import tensorflow.keras.layers as kl import tensorflow.keras.regularizers as kr import numpy as np import matplotlib.pyplot as plt import tensorflow tensorflow.__version__ ###Output _____no_output_____ ###Markdown Dataset As we won't apply any supervised algorithm in this TP, we do not need to load the `Y` variable. ###Code (x_train, _), (x_test, _) = mnist.load_data() ###Output _____no_output_____ ###Markdown As seen in the previous TP, it is better to normalize the dataset before to apply algorithm on it. ###Code x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. 
print(x_train.shape) print(x_test.shape) fig = plt.figure(figsize=(5,5)) ax = fig.add_subplot(1,1,1) x = kpi.img_to_array(x_train[0]) ax.imshow(x[:,:,0], interpolation='nearest', cmap="binary") ax.grid(False) plt.axis('off') plt.show() ###Output _____no_output_____ ###Markdown Building a simple autoencoder We will first build a very simple architecture where:* the **encoder layer** is a `Dense` layer composed of 32 neurons (the latent variables) with a `Relu` activation function: $$relu(x) = max(0,x)$$* the **decoder layer** is a `Dense` layer composed of 784 neurons (the input dimension) with a `Sigmoid` activation function: $$sigmoid(x) = \frac{1}{1+\text{e}^{-x}}$$ We first reshape the data to be 1D. ###Code x_train_flatten = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test_flatten = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) x_train_flatten.shape, x_test_flatten.shape ###Output _____no_output_____ ###Markdown Write the model **Exercise**: write the simple model described above in Keras. ###Code n_latent = 32 n_input = 784 # %load solutions/simple_autoencoder.py ###Output _____no_output_____ ###Markdown We then learn the model. Note that the targets are the original images. ###Code autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder.fit(x_train_flatten, x_train_flatten, epochs=25, batch_size=256, validation_data=(x_test_flatten, x_test_flatten)) ###Output _____no_output_____ ###Markdown **Question**: We use the binary cross entropy here, as in the original paper [1](https://arxiv.org/pdf/1312.6114.pdf). Does it look like an intuitive choice? Why? How is the loss evolving during training? Check outputs We will now check how the model performs. We first produce the encoded-decoded images. ###Code decoded_test_imgs = autoencoder.predict(x_test_flatten) ###Output _____no_output_____ ###Markdown The following function displays both the input and the output of the autoencoder model. ###Code n = 10 # how many digits we will display plt.figure(figsize=(20, 4)) for i in range(n): # display original ax = plt.subplot(3, n, i + 1) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display reconstruction ax = plt.subplot(3, n, i + 1 + n) plt.imshow(decoded_test_imgs[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ###Output _____no_output_____ ###Markdown **Question**: What can you say about these results? Check latent variable The Keras model that we have written above does not allow us to retrieve the latent variable. In order to do so, we have to re-write the model so that this variable is exposed. We first write the encoder part. ###Code encoder = km.Sequential(name="EncoderModel") encoder.add(kl.Dense(n_latent, activation='relu', input_shape=(n_input,),name="encoder_layer")) ###Output _____no_output_____ ###Markdown We then write the decoder as another independent model. ###Code decoder = km.Sequential(name="DecoderModel") decoder.add(kl.Dense(n_input, activation='sigmoid', input_shape =(n_latent,), name = "decoded_layer" )) ###Output _____no_output_____ ###Markdown We finally write the autoencoder model by composing the two previous models. ###Code autoencoder = km.Sequential(name="EncoderDecoder") autoencoder.add(encoder) autoencoder.add(decoder) ###Output _____no_output_____ ###Markdown The model is indeed composed of the two previous sub-models.
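###Markdown For reference, a minimal sketch of what the earlier `%load solutions/simple_autoencoder.py` cell is expected to build, written here as a single `Sequential` model equivalent to the encoder/decoder composition above. The variable name `autoencoder_ref` is made up so it does not shadow the `autoencoder` defined in this notebook, and the exact solution file may differ:
###Code
# Sketch of the simple autoencoder exercise (assumed architecture: 784 -> 32 -> 784)
autoencoder_ref = km.Sequential(name="SimpleAutoencoder")
autoencoder_ref.add(kl.Dense(n_latent, activation='relu', input_shape=(n_input,)))  # encoder: 784 -> 32
autoencoder_ref.add(kl.Dense(n_input, activation='sigmoid'))                        # decoder: 32 -> 784
autoencoder_ref.summary()
###Output _____no_output_____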
###Code autoencoder.summary() ###Output _____no_output_____ ###Markdown You can access the two sub-models with the following syntax ###Code autoencoder.get_layer("EncoderModel").summary() ###Output _____no_output_____ ###Markdown The model can then be learned the same way. ###Code autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder.fit(x_train_flatten, x_train_flatten, epochs=25, batch_size=256, validation_data=(x_test_flatten, x_test_flatten)) ###Output _____no_output_____ ###Markdown **Question** What can you say about the loss value of the model? We can now easily access and produce the latent variables. ###Code encoded_imgs = encoder.predict(x_test_flatten) encoded_imgs.shape n = 10 # how many digits we will display plt.figure(figsize=(20, 4)) for i in range(n): # display original ax = plt.subplot(2, n, i + 1) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display encoded imgs ax = plt.subplot(2, n, i + 1 + n) plt.imshow(encoded_imgs[i].reshape(8, 4)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ###Output _____no_output_____ ###Markdown You can produce the decoded images by:* Using the decoder part on the encoded images.* Using the whole architecture on the original images.**Exercise**: Check that both methods produce the same results. ###Code # %load solutions/decoded_images_both_method.py ###Output _____no_output_____ ###Markdown Sparse autoencoder In the previous example the autoencoder is only constrained by the size of the hidden layer. In the following figure you can see the distribution of the number of latent variables set to zero over the 10,000 test images. ###Code fig = plt.figure(figsize=(9,5)) ax = fig.add_subplot(1,1,1) ax.hist(np.sum(encoded_imgs==0,axis=1), width=0.9, bins=np.arange(-0.5,10.5,1)) ax.set_xticks(np.arange(10)) plt.show() ###Output _____no_output_____ ###Markdown Another way to get a sparser encoded representation of the images is to add a *sparsity constraint* on the activity (output) of the hidden layer. Regularizers help avoid overfitting by adding a penalty on the quantities we want to control: Cost function = Loss (say, binary cross entropy) + Regularization term, i.e. Cost function = Loss + $\lambda \sum_i |a_i|$, where in our case $\lambda =$ 10e-5 $= 10^{-4}$ and the $a_i$ are the activations produced by the encoder layer (with `activity_regularizer`, the L1 penalty is applied to the layer's output rather than to its weights).
###Code l = 10e-5 sparse_encoder = km.Sequential(name="SparseEncoderModel") sparse_encoder.add(kl.Dense(n_latent, activation='relu', input_shape=(n_input,), activity_regularizer=kr.l1(l) ,name="encoder_layer")) sparse_decoder = km.Sequential(name="SparseDecoderModel") sparse_decoder.add(kl.Dense(n_input, activation='sigmoid', input_shape =(n_latent,), name = "decoded_layer" )) sparse_autoencoder = km.Sequential(name="SparseEncoderDecoder") sparse_autoencoder.add(sparse_encoder) sparse_autoencoder.add(sparse_decoder) ###Output _____no_output_____ ###Markdown you can now train the model as previously ###Code sparse_autoencoder.compile(optimizer='adam', loss='binary_crossentropy') sparse_autoencoder.fit(x_train_flatten, x_train_flatten, epochs=25, batch_size=256,validation_data=(x_test_flatten, x_test_flatten)) ###Output _____no_output_____ ###Markdown **Question** : What can you say on the loss function compare to the previous model?**Exercise** : Check that the encoded images obtained with the sparse autoencoder are indeed sparser than the ones obtain by the first autoencoder ###Code # %load solutions/compare_sparsity_encoded_imgs.py ###Output _____no_output_____ ###Markdown **Exercise** : Compare the decoded images obtain by the first and the sparse model ###Code # %load solutions/compare_sparsity_decoded_imgs.py ###Output _____no_output_____ ###Markdown Convolutional Autoencoder In the previous part, we have seen very simple autoencoder where both encoder and decoder part are composed of single layer. They both can be composed of more layers (deep autoencoder) and with differents types of layer.As seen in the previous TP, convolutional layers are the best layer to use when dealing with images. **Exercise** : Implement a convolutional Autoencoder with the folowwing architecture: conv_decoder = km.Sequential(name="ConvDecoderModel")conv_decoder.add(kl.Conv2D(8, (3, 3), activation='relu', input_shape = conv_encoder.get_output_shape_at(-1)[-3:], padding='same'))conv_decoder.add(kl.UpSampling2D((2, 2)))conv_decoder.add(kl.Conv2D(8, (3, 3), activation='relu', padding='same'))conv_decoder.add(kl.UpSampling2D((2, 2)))conv_decoder.add(kl.Conv2D(16, (3, 3), activation='relu'))conv_decoder.add(kl.UpSampling2D((2, 2)))conv_decoder.add(kl.Conv2D(1, (3, 3), activation='sigmoid', padding='same'))`Encoder`* A 2d convolutional layer, 16 filters of size 3x3* A 2Dmaxpooling layer with filters of size 2x2* A 2d convolutial layer, 8 filters of size 3x3* A 2Dmaxpooling layer with filters of size 2x2* A 2d convolutial layer, 8 filters of size 3x3* A 2Dmaxpooling layer with filters of size 2x2`Decoder`* A 2d convolutional layer, 8 filters of size 3x3* A 2Dupsampling layer with filters of size 2x2* A 2d convolutional layer, 8 filters of size 3x3* A 2Dupsampling layer with filters of size 2x2* A 2d convolutional layer, 16 filters of size 3x3* A 2Dupsampling layer with filters of size 2x2* A 2d convolutional layer, 1 filters of size 3x3, with SIGMOID activation*All padding are `SAME` padding and all convolutional activation function but last are `RELU`* ###Code # %load solutions/convolutional_autoencoder.py conv_autoencoder = km.Sequential(name="ConvAutoencoderModel") conv_autoencoder.add(conv_encoder) conv_autoencoder.add(conv_decoder) conv_autoencoder.summary() conv_autoencoder.compile(optimizer='adam', loss='binary_crossentropy') conv_autoencoder.fit(x_train_conv, x_train_conv, epochs=10, batch_size=256, validation_data=(x_test_conv, x_test_conv)) conv_autoencoder.evaluate(x_train_conv, x_train_conv) ###Output 
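###Markdown Since `solutions/convolutional_autoencoder.py` is loaded rather than shown, here is a minimal sketch of the encoder half described in the bullet list above, together with the reshaped `x_train_conv`/`x_test_conv` arrays that the next cells rely on. The 28×28×1 input shape and the `padding='same'` pooling are assumptions chosen so the decoder shown above maps a 4×4×8 code back to 28×28; the loaded solution may differ in details:
###Code
# Sketch of the convolutional encoder described above (not the course's exact solution)
conv_encoder = km.Sequential(name="ConvEncoderModel")
conv_encoder.add(kl.Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1)))
conv_encoder.add(kl.MaxPooling2D((2, 2), padding='same'))   # 28x28 -> 14x14
conv_encoder.add(kl.Conv2D(8, (3, 3), activation='relu', padding='same'))
conv_encoder.add(kl.MaxPooling2D((2, 2), padding='same'))   # 14x14 -> 7x7
conv_encoder.add(kl.Conv2D(8, (3, 3), activation='relu', padding='same'))
conv_encoder.add(kl.MaxPooling2D((2, 2), padding='same'))   # 7x7 -> 4x4

# Images reshaped to (n, 28, 28, 1) for the convolutional layers
x_train_conv = x_train.reshape((len(x_train), 28, 28, 1))
x_test_conv = x_test.reshape((len(x_test), 28, 28, 1))
###Output _____no_output_____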
_____no_output_____ ###Markdown **Question** What can you say about the loss function? ###Code encoded_imgs = conv_encoder.predict(x_test_conv) decoded_imgs = conv_autoencoder.predict(x_test_conv) n = 10 plt.figure(figsize=(20, 4)) for i in range(n): # display original ax = plt.subplot(2, n, i+1) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display reconstruction ax = plt.subplot(2, n, i + n+1) plt.imshow(decoded_imgs[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ###Output _____no_output_____ ###Markdown Application to denoising We now know how to build a convolutional autoencoder. We will now see how it can be used to solved denoising problem. We first create fake noisy data ###Code # Add random noise noise_factor = 0.5 x_train_noisy = x_train_conv + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train_conv.shape) x_test_noisy = x_test_conv + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test_conv.shape) # Value greater than 1 are set to 1 and value lower than 0 are set to zero x_train_noisy = np.clip(x_train_noisy, 0., 1.) x_test_noisy = np.clip(x_test_noisy, 0., 1.) ###Output _____no_output_____ ###Markdown Let's observe the noise we create ###Code n = 10 plt.figure(figsize=(20, 4)) for i in range(n): # display original ax = plt.subplot(2, n, i+1) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # noisy data ax = plt.subplot(2, n, i + n+1) plt.imshow(x_test_noisy[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ###Output _____no_output_____ ###Markdown **Exercise** : Now let's train the same convolutional model that we built above. But let train this model with noisy data as an input and the original data as the output. ###Code # %load solutions/train_denoise_model.py ###Output _____no_output_____ ###Markdown Now, we pass the noisy test data into the trained autoencorder in order to denoise this data. ###Code x_test_denoised = conv_autoencoder.predict(x_test_noisy) ###Output _____no_output_____ ###Markdown Here are the results of the denoised data ###Code n = 10 plt.figure(figsize=(20, 4)) for i in range(n): # display original ax = plt.subplot(3, n, i+1) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # noisy data ax = plt.subplot(3, n, i + n+1) plt.imshow(x_test_noisy[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # denoised data ax = plt.subplot(3, n, i + 1 + 2*n) plt.imshow(x_test_denoised[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ###Output _____no_output_____
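###Markdown As a closing note, the `%load solutions/train_denoise_model.py` cell above is not shown in this dump. A plausible minimal version (an assumption, not necessarily the course's exact solution) simply fits the convolutional autoencoder with noisy inputs and clean targets:
###Code
# Hypothetical content of solutions/train_denoise_model.py: noisy inputs, clean targets
conv_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
conv_autoencoder.fit(x_train_noisy, x_train_conv,
                     epochs=10, batch_size=256,
                     validation_data=(x_test_noisy, x_test_conv))
###Output _____no_output_____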
model_car.ipynb
###Markdown Read data ###Code # imports needed by the cells below import numpy as np import pandas as pd import eli5 from eli5.sklearn import PermutationImportance from sklearn.dummy import DummyRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.model_selection import cross_val_score from sklearn.metrics import mean_absolute_error as mae # assumption: mae = sklearn's mean_absolute_error cd 'drive/My Drive/Colab Notebooks/car_ml/car_ml_prise/car_ml_prise' cd 'data' df = pd.read_hdf('car.h5') df.shape df.columns df.columns.values def group_and_plot(feat_groupby, feat_agg='price_value', agg_funcs=[np.mean, np.median, np.size], feat_sort='mean', top=50, subplots=True): return( df .groupby(feat_groupby)[feat_agg] .agg(agg_funcs) .sort_values(by=feat_sort, ascending=False) .head(top) ).plot(kind='bar', figsize=(15,7), subplots=subplots) ###Output _____no_output_____ ###Markdown Dummy model ###Code df.select_dtypes(np.number).columns feats = ['car_id'] X = df[feats].values y = df['price_value'].values model = DummyRegressor() model.fit(X, y) y_pred = model.predict(X) mae(y, y_pred) [x for x in df.columns if 'price' in x] df['price_currency'].value_counts() df = df[ df['price_currency'] != 'EUR'] df.shape ###Output _____no_output_____ ###Markdown Features ###Code SUFFIX_CAT = '__cat' for feat in df.columns: if isinstance(df[feat][0], list): continue factorized_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: df[feat] = factorized_values else: df[feat + SUFFIX_CAT]= factorized_values cat_feats = [x for x in df.columns if SUFFIX_CAT in x] cat_feats = [x for x in cat_feats if 'price' not in x] len(cat_feats) X = df[cat_feats].values y = df['price_value'].values model = DecisionTreeRegressor(max_depth=5) scores = cross_val_score(model, X, y,cv=3, scoring='neg_mean_absolute_error') np.mean(scores) m = DecisionTreeRegressor(max_depth=5) m.fit(X, y) imp = PermutationImportance(m, random_state=0).fit(X, y) eli5.show_weights(imp, feature_names=cat_feats) group_and_plot('param_moc__cat') group_and_plot('param_napęd__cat') ###Output _____no_output_____
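###Markdown The `SUFFIX_CAT` loop above relies on `pandas.factorize`, which maps each distinct value to an integer code (missing values get -1). A tiny illustration on made-up data:
###Code
import pandas as pd

s = pd.Series(['diesel', 'petrol', 'diesel', None])
codes, uniques = s.factorize()
print(codes)    # [ 0  1  0 -1]
print(uniques)  # Index(['diesel', 'petrol'], dtype='object')
###Output _____no_output_____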
Particles_MultiT.ipynb
###Markdown Simulation------ ###Code from deep_boltzmann.models.particle_dimer import ParticleDimer from deep_boltzmann.sampling.metropolis import MetropolisGauss # load trajectory data trajdict = np.load(paper_dir + 'local_data/particles_tilted/trajdata_long.npz') import ast params = ast.literal_eval(str(trajdict['params'])) traj_closed_train = trajdict['traj_closed_train_hungarian'] traj_open_train = trajdict['traj_open_train_hungarian'] traj_closed_test = trajdict['traj_closed_test_hungarian'] traj_open_test = trajdict['traj_open_test_hungarian'] # create model params['grid_k'] = 0.0 model = ParticleDimer(params=params) print(model.params) plt.figure(figsize=(4, 3)) model.plot_dimer_energy(plt.gca()); plt.xlabel('Dimer distance / nm') plt.xticks([0.5, 1.0, 1.5, 2.0, 2.5]) plt.ylim(-5, 25) #plt.savefig(paper_dir + 'figs/particle_dimer_potential.pdf', bbox_inches='tight') etraj_closed_train = model.energy(traj_closed_train) etraj_open_train = model.energy(traj_open_train) etraj_closed_test = model.energy(traj_closed_test) etraj_open_test = model.energy(traj_open_test) plt.hist(etraj_closed_train, 50, histtype='stepfilled', color='blue', alpha=0.2); plt.hist(etraj_closed_train, 50, histtype='step', color='blue', linewidth=2); plt.hist(etraj_open_train, 50, histtype='stepfilled', color='red', alpha=0.2); plt.hist(etraj_open_train, 50, histtype='step', color='red', linewidth=2); plt.xlabel('Energy / kT') plt.yticks([]) plt.ylabel('Probability') plt.figure(figsize=(10, 4)) ax1 = plt.subplot2grid((1, 3), (0, 0), colspan=2) ax2 = plt.subplot2grid((1, 3), (0, 2)) ax1.plot(model.dimer_distance(traj_closed_train), color='blue', alpha=0.7) ax1.plot(model.dimer_distance(traj_open_train), color='red', alpha=0.7) ax1.set_xlim(0, 20000) ax1.set_ylim(0.5, 2.5) ax1.set_xlabel('Time / steps') ax1.set_ylabel('Dimer distance / a.u.') ax2.hist(model.dimer_distance(traj_closed_train), 30, orientation='horizontal', histtype='stepfilled', color='blue', alpha=0.2); ax2.hist(model.dimer_distance(traj_closed_train), 30, orientation='horizontal', histtype='step', color='blue', linewidth=2); ax2.hist(model.dimer_distance(traj_open_train), 30, orientation='horizontal', histtype='stepfilled', color='red', alpha=0.2); ax2.hist(model.dimer_distance(traj_open_train), 30, orientation='horizontal', histtype='step', color='red', linewidth=2); ax2.set_xticks([]) ax2.set_yticks([]) ax2.set_ylim(0.5, 2.5) ax2.set_xlabel('Probability') fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(11, 5)) model.draw_config(traj_closed_train[0], axis=axes[1], dimercolor='blue', alpha=0.8); model.draw_config(traj_open_train[2], axis=axes[0], dimercolor='red', alpha=0.8); #plt.savefig(paper_dir + 'figs/particle_MD_configurations.pdf', bbox_inches='tight') x = np.vstack([traj_closed_train, traj_open_train]) xval = np.vstack([traj_closed_test, traj_open_test]) ###Output _____no_output_____ ###Markdown Load and test------ ###Code from deep_boltzmann.networks.invertible import EnergyInvNet from deep_boltzmann.networks.plot import test_generate_x, test_xz_projection network = EnergyInvNet.load('../local_data/particles_tilted/trained/network_0.pkl', model) fig, axes = test_xz_projection(network.Txz, [traj_closed_train, traj_open_train], rctrajs=[model.dimer_distance(traj_closed_train), model.dimer_distance(traj_open_train)], subplots=np.array([True, True, False, True]), colors=['blue', 'red']); axes[1].set_xlim(-4, 4) axes[1].set_ylim(-4, 4) #plt.savefig(paper_dir + 'figs/particle_transformation.pdf', bbox_inches='tight') network.std_z(x) 
test_xz_projection(network.Txz, [traj_closed_test, traj_open_test], rctrajs=[model.dimer_distance(traj_closed_test), model.dimer_distance(traj_open_test)], colors=['blue', 'red']); network.std_z(xval) bs_z_05, bs_x_05, bs_Ez_05, bs_Ex_05, bs_w_05 = network.sample(temperature=0.5, nsample=20000) print(bs_Ex_05.min(), bs_Ex_05.max()) bs_z_10, bs_x_10, bs_Ez_10, bs_Ex_10, bs_w_10 = network.sample(temperature=1.0, nsample=20000) print(bs_Ex_10.min(), bs_Ex_10.max()) bs_z_20, bs_x_20, bs_Ez_20, bs_Ex_20, bs_w_20 = network.sample(temperature=2.0, nsample=20000) print(bs_Ex_20.min(), bs_Ex_20.max()) # round-trip time from deep_boltzmann.util import count_transitions n_transitions = count_transitions(model.dimer_distance(bs_x_10), 1.0, 2.0) round_trip_time = 20000 / n_transitions print(round_trip_time) test_generate_x(model, [traj_closed_train, traj_open_train], bs_Ex_05, max_energy=250, colors=['blue', 'red']); test_generate_x(model, [traj_closed_train, traj_open_train], bs_Ex_10, max_energy=250, colors=['blue', 'red']); I_closed_10 = np.where(model.dimer_distance(bs_x_10) < 1.3)[0] I_ts_10 = np.where(np.logical_and(model.dimer_distance(bs_x_10) > 1.3, model.dimer_distance(bs_x_10) < 1.7))[0] I_open_10 = np.where(model.dimer_distance(bs_x_10) > 1.7)[0] fig, axes = test_generate_x(model, [traj_closed_train, None, traj_open_train], [bs_Ex_10[I_closed_10], bs_Ex_10[I_ts_10], bs_Ex_10[I_open_10]], colors=['blue', 'yellow', 'red'], max_energy=150, figsize=(5, 14), layout=(3, 1), titles=False); axes[0].set_xlabel('') axes[0].set_xticks([]) axes[0].set_xlim(30, 150) axes[0].set_ylim(0, 0.08) axes[1].set_xlabel('') axes[1].set_xticks([]) axes[1].set_xlim(30, 150) axes[1].set_ylim(0, 0.08) axes[2].set_xlim(30, 150) axes[2].set_ylim(0, 0.08) #axes[2].set_ylabel('') #plt.savefig(paper_dir + 'figs/particle_zsampling_energy.pdf', bbox_inches='tight') #fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(5, 17)) model.draw_config(bs_x_10[I_closed_10, :][0], dimercolor='blue', alpha=0.8); #plt.savefig(paper_dir + 'figs/particle_zsampling_structure1.pdf', bbox_inches='tight') model.draw_config(bs_x_10[I_ts_10, :][0], dimercolor='orange', alpha=0.8); #plt.savefig(paper_dir + 'figs/particle_zsampling_structure2.pdf', bbox_inches='tight') model.draw_config(bs_x_10[I_open_10, :][0], dimercolor='red', alpha=0.8); #plt.savefig(paper_dir + 'figs/particle_zsampling_structure3.pdf', bbox_inches='tight') sample_energies = [bs_Ex_10 for i in range(2)] energies_sample_x_low = [se[np.where(se < 100)[0]] for se in sample_energies] test_generate_x(model, [traj_open_train], bs_Ex_10, max_energy=250); test_generate_x(model, [traj_closed_train, traj_open_train], bs_Ex_20, max_energy=250, colors=['blue', 'red']); plt.plot(model.dimer_distance(bs_x_05), bs_Ex_05, linewidth=0, marker='.', markersize=2) plt.ylim(30, 100) plt.plot(model.dimer_distance(bs_x_10), bs_Ex_10, linewidth=0, marker='.', markersize=2) plt.ylim(30, 100) plt.plot(model.dimer_distance(bs_x_20), bs_Ex_20, linewidth=0, marker='.', markersize=2) plt.ylim(20, 100) blind_sample_J = network.TzxJ.predict(bs_z_10)[1][:, 0] plt.plot(model.dimer_distance(bs_z_10), blind_sample_J, linewidth=0, marker='.', markersize=2) ###Output _____no_output_____ ###Markdown Sampling---- ###Code from deep_boltzmann.util import count_transitions, acf from deep_boltzmann.sampling.latent_sampling import GaussianPriorMCMC, plot_latent_sampling, eval_GaussianPriorMCMC, sample_RC from deep_boltzmann.sampling.permutation import HungarianMapper from deep_boltzmann.sampling.analysis import 
free_energy_bootstrap, mean_finite, std_finite from deep_boltzmann.util import load_obj, save_obj from deep_boltzmann.sampling.umbrella_sampling import UmbrellaSampling # Umbrella sampling - reference us05 = UmbrellaSampling.load('../local_data/particles_tilted/us_T05_F.pkl') us10 = UmbrellaSampling.load('../local_data/particles_tilted/us_T10_F.pkl') us20 = UmbrellaSampling.load('../local_data/particles_tilted/us_T20_F.pkl') umbrella_positions = us10.umbrella_positions pmf_us05 = us05.umbrella_free_energies() pmf_us10 = us10.umbrella_free_energies() pmf_us20 = us20.umbrella_free_energies() pmf_uss = [pmf_us05, pmf_us10, pmf_us20] ###Output _____no_output_____ ###Markdown Model Averaging------ ###Code # run training + analysis scripts to get this file many_sampled_distances = load_obj('../local_data/hydrocarbon_cyc9/trained/distances_sample.pkl') many_sampled_distances.keys() from deep_boltzmann.sampling.analysis import mean_finite, std_finite def mean_free_energy(Ds, Ws): E = [] ndrop=0 for D, W in zip(Ds, Ws): # sort by descending weight I = np.argsort(W)[::-1] D_sorted = D[I][ndrop:] W_sorted = W[I][ndrop:] bins = np.linspace(0.5, 2.5, 30) bin_means = 0.5*(bins[:-1] + bins[1:]) hist, _ = np.histogram(D_sorted, bins=bins, weights=np.exp(W_sorted)) e = -np.log(hist) e -= np.concatenate([e[3:10],e[-10:-3]]).mean() E.append(e) E = np.array(E) return bin_means, mean_finite(E, axis=0, min_finite=2), std_finite(E, axis=0, min_finite=2) bm, mE, sE = mean_free_energy(many_sampled_distances['D05'], many_sampled_distances['W05']) mE = cut_energy(mE, cut=35.0) plt.plot(us05.rc_discretization, us05.rc_free_energies+6.2, linewidth=4, color='#005500', alpha=0.7, label='0.5') plt.errorbar(bm, mE+20, sE, color='black', linewidth=0, marker='.', markersize=8, elinewidth=2) #plt.fill_between(bm, mE+20-1*sE, mE+20+1*sE, color='blue', alpha=0.3) bm, mE, sE = mean_free_energy(many_sampled_distances['D10'], many_sampled_distances['W10']) mE = cut_energy(mE, cut=35.0) plt.plot(us10.rc_discretization, us10.rc_free_energies+0.5, linewidth=4, color='#119900', alpha=0.7, label='1.0') plt.errorbar(bm, mE+8.5, sE, color='black', linewidth=0, marker='.', markersize=8, elinewidth=2) #plt.fill_between(bm, mE+8.5-1*sE, mE+8.5+1*sE, color='black', alpha=0.3) bm, mE, sE = mean_free_energy(many_sampled_distances['D20'], many_sampled_distances['W20']) mE = cut_energy(mE, cut=35.0) plt.plot(us20.rc_discretization, us20.rc_free_energies-5.2, linewidth=4, color='#33FF00', alpha=0.7, label='2.0') plt.errorbar(bm, mE, sE, color='black', linewidth=0, marker='.', markersize=8, elinewidth=2) #plt.fill_between(bm, mE-1*sE, mE+1*sE, color='red', alpha=0.3) plt.legend(loc=9, ncol=3, frameon=False) plt.ylim(-10, 60) plt.xlabel('Dimer distance') plt.ylabel('Free energy difference') #plt.savefig(paper_dir + 'figs/particle_free_energies_temp.pdf') ###Output /Users/noe/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:13: RuntimeWarning: divide by zero encountered in log /Users/noe/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:4: RuntimeWarning: invalid value encountered in greater
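###Markdown The RuntimeWarnings above come from empty histogram bins: `hist` contains zeros, so `-np.log(hist)` produces infinities that are then handled downstream by `mean_finite`/`std_finite`. A minimal sketch (not part of the original analysis) of how the same free-energy estimate could mark empty bins explicitly instead of triggering the warnings:
###Code
import numpy as np

def safe_neg_log(hist):
    """Return -log(hist) with empty bins set to NaN instead of raising warnings."""
    e = np.full_like(hist, np.nan, dtype=float)
    nonzero = hist > 0
    e[nonzero] = -np.log(hist[nonzero])
    return e
###Output _____no_output_____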
leetcode_medium.ipynb
###Markdown LeetCode-MEDIUM 400. Nth Digit给你一个整数 n ,请你在无限的整数序列 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ...] 中找出并返回第 n 位上的数字。 ###Code class Solution: def findNthDigit(self, n: int) -> int: digit = 1 count = 9 while n > digit * count: n -= digit * count digit += 1 count *= 10 start = 10 ** (digit - 1) num = start + (n-1) // digit return int(str(num)[(n-1)%digit]) # digitIndex = index % d #return num // 10 ** (d- digitIndex -1) % 10 ###Output _____no_output_____ ###Markdown 2070. Most Beautiful Item for Each Query给你一个二维整数数组 items ,其中 items[i] = [pricei, beautyi] 分别表示每一个物品的 价格 和 美丽值 。同时给你一个下标从 0 开始的整数数组 queries 。对于每个查询 queries[j] ,你想求出价格小于等于 queries[j] 的物品中,最大的美丽值 是多少。如果不存在符合条件的物品,那么查询的结果为 0 。请你返回一个长度与 queries 相同的数组 answer,其中 answer[j]是第 j 个查询的答案。 > 二分查找不大于目标值的最大的数 ###Code class Solution: def maximumBeauty(self, items, queries): dic = {} for price, beauty in items: dic[price] = max(dic.get(price, 0), beauty) res = [] lst_price = sorted(list(dic.keys())) max_beauty = 0 n = len(lst_price) lst_beauty = [0] * n for i in range(n): max_beauty = max(max_beauty, dic[lst_price[i]]) lst_beauty[i] = max_beauty for query in queries: left, right = 0, n-1 while left <= right: mid = (left + right) // 2 if lst_price[mid] > query: right = mid - 1 else: left = mid + 1 if right < 0: res.append(0) else: res.append(lst_beauty[left-1]) return res ###Output _____no_output_____ ###Markdown 380. Insert Delete GetRandom O(1)实现RandomizedSet 类:RandomizedSet() 初始化 RandomizedSet 对象bool insert(int val) 当元素 val 不存在时,向集合中插入该项,并返回 true ;否则,返回 false 。bool remove(int val) 当元素 val 存在时,从集合中移除该项,并返回 true ;否则,返回 false 。int getRandom() 随机返回现有集合中的一项(测试用例保证调用此方法时集合中至少存在一个元素)。每个元素应该有 相同的概率 被返回。你必须实现类的所有函数,并满足每个函数的 平均 时间复杂度为 O(1) ###Code class RandomizedSet: def __init__(self): self.lst = [] self.dic = {} def insert(self, val: int) -> bool: if val in self.dic: return False self.dic[val] = len(self.lst) self.lst.append(val) return True def remove(self, val: int) -> bool: if val not in self.dic: return False idx = self.dic[val] last = self.lst[-1] self.lst[idx] = last self.dic[last] = idx del self.dic[val] self.lst.pop() return True def getRandom(self) -> int: return random.choice(self.lst) # Your RandomizedSet object will be instantiated and called as such: # obj = RandomizedSet() # param_1 = obj.insert(val) # param_2 = obj.remove(val) # param_3 = obj.getRandom() ###Output _____no_output_____ ###Markdown 304. 
Range Sum Query 2D - Immutable给定一个二维矩阵 matrix,以下类型的多个请求:计算其子矩形范围内元素的总和,该子矩阵的 左上角 为 (row1, col1) ,右下角 为 (row2, col2) 。实现 NumMatrix 类:NumMatrix(int[][] matrix) 给定整数矩阵 matrix 进行初始化int sumRegion(int row1, int col1, int row2, int col2) 返回 左上角 (row1, col1) 、右下角 (row2, col2) 所描述的子矩阵的元素 总和 。 ###Code class NumMatrix: def __init__(self, matrix): m, n = len(matrix), len(matrix[0]) self.data = [[0] * n for _ in range(m)] for i in range(m): self.data[i] = list(accumulate(matrix[i])) for i in range(1, m): for j in range(n): self.data[i][j] += self.data[i-1][j] def sumRegion(self, row1: int, col1: int, row2: int, col2: int) -> int: top_left, top_right, bottom_left = 0, 0, 0 bottom_right = self.data[row2][col2] if row1 == 0 and col1 == 0: top_left = 0 top_right = 0 bottom_left = 0 elif row1 == 0: top_left = 0 top_right = 0 bottom_left = self.data[row2][col1-1] elif col1 == 0: top_left = 0 top_right = self.data[row1-1][col2] bottom_left = 0 else: top_left = self.data[row1-1][col1-1] top_right = self.data[row1-1][col2] bottom_left = self.data[row2][col1-1] return bottom_right - top_right - bottom_left + top_left # Your NumMatrix object will be instantiated and called as such: # obj = NumMatrix(matrix) # param_1 = obj.sumRegion(row1,col1,row2,col2) ###Output _____no_output_____ ###Markdown 33. Search in Rotated Sorted Array整数数组 nums 按升序排列,数组中的值 互不相同 。在传递给函数之前,nums 在预先未知的某个下标 k(0 <= k < nums.length)上进行了 旋转,使数组变为 [nums[k], nums[k+1], ..., nums[n-1], nums[0], nums[1], ..., nums[k-1]](下标 从 0 开始 计数)。例如, [0,1,2,4,5,6,7] 在下标 3 处经旋转后可能变为 [4,5,6,7,0,1,2] 。给你 旋转后 的数组 nums 和一个整数 target ,如果 nums 中存在这个目标值 target ,则返回它的下标,否则返回 -1。 ###Code class Solution: def search(self, nums, target): n = len(nums) left, right = 0, n-1 while left <= right: mid = (left + right) // 2 if nums[mid] == target: return mid if nums[0] <= nums[mid]: if nums[0] <= target < nums[mid]: right = mid-1 else: left = mid+1 else: if nums[mid] < target <= nums[n-1]: left = mid + 1 else: right = mid - 1 return -1 ###Output _____no_output_____ ###Markdown 1390. Four Divisors给你一个整数数组 nums,请你返回该数组中恰有四个因数的这些整数的各因数之和。如果数组中不存在满足题意的整数,则返回 0 。[https://leetcode-cn.com/problems/four-divisors/solution/si-yin-shu-by-leetcode-solution/](https://leetcode-cn.com/problems/four-divisors/solution/si-yin-shu-by-leetcode-solution/) ###Code class Solution: def sumFourDivisors(self, nums): res = 0 for num in nums: tmp_sum = 0 count = 0 i = 1 flag = True while i * i <= num: if num % i == 0: count += 1 tmp_sum += i if i * i != num: count += 1 tmp_sum += num // i if count > 4: flag = False break i += 1 if flag and count == 4: res += tmp_sum return res ###Output _____no_output_____ ###Markdown 372. Super Pow你的任务是计算 ab 对 1337 取模,a 是一个正整数,b 是一个非常大的正整数且会以数组形式给出。 ###Code class Solution: def superPow(self, a, b): MOD = 1337 res = 1 for each in b[::-1]: res = res * pow(a, each, MOD) % MOD a = pow(a, 10, MOD) return res ###Output _____no_output_____ ###Markdown 50. Pow(x, n)实现 pow(x, n) ,即计算 x 的 n 次幂函数(即,xn)。 * 递归 ###Code class Solution: def myPow(self, x: float, n: int) -> float: def help(m): if m == 0: return 1 y = help(m // 2) return y * y if m % 2==0 else y * y * x return help(n) if n >= 0 else 1 / help(-n) ###Output _____no_output_____ ###Markdown * 迭代 ###Code class Solution: def myPow(self, x: float, n: int) -> float: sign = 1 if n < 0: sign = 0 n = -n res = 1 while n: if n & 1: res *= x x = x * x n >>= 1 return res if sign else 1/res ###Output _____no_output_____ ###Markdown 1034. 
Coloring A Border给你一个大小为 m x n 的整数矩阵 grid ,表示一个网格。另给你三个整数 row、col 和 color 。网格中的每个值表示该位置处的网格块的颜色。当两个网格块的颜色相同,而且在四个方向中任意一个方向上相邻时,它们属于同一 连通分量 。连通分量的边界 是指连通分量中的所有与不在分量中的网格块相邻(四个方向上)的所有网格块,或者在网格的边界上(第一行/列或最后一行/列)的所有网格块。请你使用指定颜色 color 为所有包含网格块 grid[row][col] 的 连通分量的边界 进行着色,并返回最终的网格 grid 。 ###Code class Solution: def colorBorder(self, grid, row, col, color): visited = set() boundary = set() dq = collections.deque([(row, col)]) m, n = len(grid), len(grid[0]) lst = [0, -1, 0, 1, 0] while dq: cur_row, cur_col = dq.popleft() cur_color = grid[cur_row][cur_col] flag = False for i in range(4): to_row = cur_row + lst[i] to_col = cur_col + lst[i+1] if to_row < 0 or to_row >= m or to_col < 0 or to_col >= n: flag = True else: if (to_row, to_col) in visited: continue if grid[to_row][to_col] != cur_color: flag = True else: dq.append((to_row, to_col)) if flag: boundary.add((cur_row, cur_col)) visited.add((cur_row, cur_col)) for row, col in boundary: grid[row][col] = color return grid ###Output _____no_output_____ ###Markdown 1247. Minimum Swaps to Make Strings Equal有两个长度相同的字符串 s1 和 s2,且它们其中 只含有 字符 "x" 和 "y",你需要通过「交换字符」的方式使这两个字符串相同。每次「交换字符」的时候,你都可以在两个字符串中各选一个字符进行交换。交换只能发生在两个不同的字符串之间,绝对不能发生在同一个字符串内部。也就是说,我们可以交换 s1[i] 和 s2[j],但不能交换 s1[i] 和 s1[j]。最后,请你返回使 s1 和 s2 相同的最小交换次数,如果没有方法能够使得这两个字符串相同,则返回 -1 。 ###Code class Solution: def minimumSwap(self, s1: str, s2: str) -> int: count_1 = 0 count_2 = 0 for c1, c2 in zip(s1, s2): if c1 == 'x' and c2 == 'y': count_1 += 1 elif c1 == 'y' and c2 == 'x': count_2 += 1 if (count_1 + count_2) % 2 == 1: return -1 if count_1 % 2 == 0: return count_1 // 2 + count_2 // 2 else: return count_1 // 2 + count_2 // 2 + 2 ###Output _____no_output_____ ###Markdown 5. Longest Palindromic Substring给你一个字符串 s,找到 s 中最长的回文子串。 * method 1 ###Code class Solution: def longestPalindrome(self, s: str) -> str: n = len(s) if n == 1: return s dp = [[False] * n for _ in range(n)] for i in range(n): dp[i][i] = True max_len = 1 idx = 0 for length in range(2, n+1): for i in range(n): j = length + i - 1 if j >= n: break if s[i] != s[j]: dp[i][j] = False else: if j-i < 3: dp[i][j] = True else: dp[i][j] = dp[i+1][j-1] if dp[i][j] and length > max_len: max_len = length idx = i return s[idx : idx + max_len] ###Output _____no_output_____ ###Markdown 851. Loud and Rich有一组 n 个人作为实验对象,从 0 到 n - 1 编号,其中每个人都有不同数目的钱,以及不同程度的安静值(quietness)。为了方便起见,我们将编号为 x 的人简称为 "person x "。给你一个数组 richer ,其中 richer[i] = [ai, bi] 表示 person ai 比 person bi 更有钱。另给你一个整数数组 quiet ,其中 quiet[i] 是 person i 的安静值。richer 中所给出的数据 逻辑自恰(也就是说,在 person x 比 person y 更有钱的同时,不会出现 person y 比 person x 更有钱的情况 )。现在,返回一个整数数组 answer 作为答案,其中 answer[x] = y 的前提是,在所有拥有的钱肯定不少于 person x 的人中,person y 是最安静的人(也就是安静值 quiet[y] 最小的人)。[link](https://leetcode-cn.com/problems/loud-and-rich/) ###Code class Solution: def loudAndRich(self, richer, quiet): dic = defaultdict(list) for x, y in richer: dic[y].append(x) n = len(quiet) res = list(range(n)) for i in range(n): visited = set() queue = deque([i]) quiter = quiet[i] while queue: person = queue.pop() if quiet[person] < quiter: res[i] = person quiter = quiet[person] visited.add(person) for each in dic[person]: if each not in visited: queue.appendleft(each) return res ###Output _____no_output_____ ###Markdown 6. 
ZigZag Conversion将一个给定字符串 s 根据给定的行数 numRows ,以从上往下、从左到右进行 Z 字形排列。[https://leetcode-cn.com/problems/zigzag-conversion/](https://leetcode-cn.com/problems/zigzag-conversion/) ###Code class Solution: def convert(self, s: str, numRows: int) -> str: n = len(s) if n == 1 or numRows == 1: return s res = '' for i in range(numRows): step_1 = 2 * (numRows-i-1) step_2 = 2 * i idx = i while idx < n: res += s[idx] if step_1 == 0 or step_2 == 0: idx += step_1 + step_2 tmp_idx = -1 else: tmp_idx = idx + step_1 if tmp_idx < n: res += s[tmp_idx] idx += step_1 + step_2 return res ###Output _____no_output_____ ###Markdown 11. Container With Most Water给你 n 个非负整数 a1,a2,...,an,每个数代表坐标中的一个点 (i, ai) 。在坐标内画 n 条垂直线,垂直线 i 的两个端点分别为 (i, ai) 和 (i, 0) 。找出其中的两条线,使得它们与 x 轴共同构成的容器可以容纳最多的水。[https://leetcode-cn.com/problems/container-with-most-water/](https://leetcode-cn.com/problems/container-with-most-water/) ###Code class Solution: def maxArea(self, height): left, right = 0, len(height)-1 res = 0 while left < right: res = max(res, min(height[left], height[right]) * (right-left)) if height[left] < height[right]: left += 1 else: right -= 1 return res ###Output _____no_output_____ ###Markdown 12. Integer to Roman[https://leetcode-cn.com/problems/integer-to-roman/](https://leetcode-cn.com/problems/integer-to-roman/) ###Code class Solution: def intToRoman(self, num: int) -> str: mapper = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')] res = '' for val, symbol in mapper: while num >= val: res += symbol num -= val if num == 0: break return res ###Output _____no_output_____ ###Markdown 16. 3Sum Closest给你一个长度为 n 的整数数组 nums 和 一个目标值 target。请你从 nums 中选出三个整数,使它们的和与 target 最接近。返回这三个数的和。假定每组输入只存在恰好一个解。[https://leetcode-cn.com/problems/3sum-closest/](https://leetcode-cn.com/problems/3sum-closest/) ###Code class Solution: def threeSumClosest(self, nums, target): nums.sort() res = float('inf') n = len(nums) for i in range(n): j, k = i+1, n-1 while j < k: total = nums[i] + nums[j] + nums[k] if total == target: return target if abs(total-target) < abs(res-target): res = total if total > target: k -= 1 else: j += 1 return res ###Output _____no_output_____ ###Markdown 18. 4Sum给你一个由 n 个整数组成的数组 nums ,和一个目标值 target 。请你找出并返回满足下述全部条件且不重复的四元组 [nums[a], nums[b], nums[c], nums[d]] (若两个四元组元素一一对应,则认为两个四元组重复):* 0 <= a, b, c, d < n* a、b、c 和 d 互不相同* nums[a] + nums[b] + nums[c] + nums[d] == target你可以按 任意顺序 返回答案 。[https://leetcode-cn.com/problems/4sum/](https://leetcode-cn.com/problems/4sum/) ###Code class Solution: def fourSum(self, nums, target): n = len(nums) if n < 4: return [] nums.sort() res = [] for i in range(n): if i > 0 and nums[i] == nums[i-1]: continue for j in range(i+1, n): if j > i+1 and nums[j] == nums[j-1]: continue left, right = j+1, n-1 while left < right: total = nums[i] + nums[j] + nums[left] + nums[right] if total == target: res.append([nums[i], nums[j], nums[left], nums[right]]) while left < right and nums[left] == nums[left+1]: left += 1 while left < right and nums[right] == nums[right-1]: right -= 1 left += 1 right -= 1 elif total < target: left += 1 else: right -= 1 return res ###Output _____no_output_____ ###Markdown 475. 
Heaters冬季已经来临。 你的任务是设计一个有固定加热半径的供暖器向所有房屋供暖。在加热器的加热半径范围内的每个房屋都可以获得供暖。现在,给出位于一条水平线上的房屋 houses 和供暖器 heaters 的位置,请你找出并返回可以覆盖所有房屋的最小加热半径。说明:所有供暖器都遵循你的半径标准,加热的半径也一样。[https://leetcode-cn.com/problems/heaters/](https://leetcode-cn.com/problems/heaters/) ###Code class Solution: def findRadius(self, houses, heaters): res = 0 heaters.sort() n = len(heaters) for house in houses: pos = bisect.bisect_right(heaters, house) if pos == 0: res = max(res, abs(heaters[pos] - house)) elif 0 < pos < n: res = max(res, min(house - heaters[pos-1], heaters[pos] - house)) else: res = max(res, abs(heaters[pos-1] - house)) return res ###Output _____no_output_____ ###Markdown 1705. Maximum Number of Eaten Apples有一棵特殊的苹果树,一连 n 天,每天都可以长出若干个苹果。在第 i 天,树上会长出 apples[i] 个苹果,这些苹果将会在 days[i] 天后(也就是说,第 i + days[i] 天时)腐烂,变得无法食用。也可能有那么几天,树上不会长出新的苹果,此时用 apples[i] == 0 且 days[i] == 0 表示。你打算每天 最多 吃一个苹果来保证营养均衡。注意,你可以在这 n 天之后继续吃苹果。给你两个长度为 n 的整数数组 days 和 apples ,返回你可以吃掉的苹果的最大数目。[https://leetcode-cn.com/problems/maximum-number-of-eaten-apples/](https://leetcode-cn.com/problems/maximum-number-of-eaten-apples) ###Code class Solution: def eatenApples(self, apples, days): hp = [] res = 0 i = 0 while i < len(apples): while hp and hp[0][0] < i: heapq.heappop(hp) if apples[i]: heapq.heappush(hp, [i+days[i]-1, apples[i]]) if hp: hp[0][1] -= 1 if hp[0][1] == 0: heapq.heappop(hp) res += 1 i += 1 while hp: while hp and hp[0][0] < i: heapq.heappop(hp) if not hp: break day, apple = heapq.heappop(hp) num = min(day-i+1, apple) res += num i += num return res ###Output _____no_output_____ ###Markdown 718. Maximum Length of Repeated Subarray给两个整数数组 A 和 B ,返回两个数组中公共的、长度最长的子数组的长度。[https://leetcode-cn.com/problems/maximum-length-of-repeated-subarray/](https://leetcode-cn.com/problems/maximum-length-of-repeated-subarray/) 1. 动态规划转化成求最长公共子前缀问题,dp[i][j]表示nums1[i:]和nums[j:]的最长公共前缀 2. 滑动窗口注意如何枚举两序列所有的对其方式 3. Rabin-Karp + 二分 ###Code class Solution: def findLength(self, nums1, nums2): m, n = len(nums1), len(nums2) left = 0 right = min(m, n) base = 100 mod = 10**9 + 7 res = 0 def helper(length): h1 = 0 for i in range(length): h1 = (h1*base + nums1[i]) % mod s = {h1} r = pow(base, length-1, mod) for i in range(length, m): h1 = ((h1 - nums1[i-length] * r) * base + nums1[i]) % mod s.add(h1) h2 = 0 for i in range(length): h2 = (h2*base + nums2[i]) % mod if h2 in s: return True for i in range(length, n): h2 = ((h2 - nums2[i-length] * r) * base + nums2[i]) % mod if h2 in s: return True return False while left <= right: mid = (left+right) // 2 if helper(mid): res = mid left = mid+1 else: right = mid-1 return res ###Output _____no_output_____
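###Markdown A sketch of the dynamic-programming approach listed as option 1 for problem 718 above, where dp[i][j] is the length of the longest common subarray starting at nums1[i:] and nums2[j:] (the prefix formulation mentioned in the note). The class name `SolutionDP` is used here only to keep it distinct from the Rabin-Karp `Solution` above:
###Code
class SolutionDP:
    def findLength(self, nums1, nums2):
        m, n = len(nums1), len(nums2)
        # dp[i][j] = length of the common prefix of nums1[i:] and nums2[j:]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        res = 0
        for i in range(m - 1, -1, -1):
            for j in range(n - 1, -1, -1):
                if nums1[i] == nums2[j]:
                    dp[i][j] = dp[i + 1][j + 1] + 1
                    res = max(res, dp[i][j])
        return res

# e.g. SolutionDP().findLength([1,2,3,2,1], [3,2,1,4,7]) == 3
###Output _____no_output_____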
examples/03_Multitask_Exact_GPs/Hadamard_Multitask_GP_Regression.ipynb
###Markdown Hadamard Multitask GP Regression IntroductionThis notebook demonstrates how to perform "Hadamard" multitask regression. This differs from the [multitask gp regression example notebook](./Multitask_GP_Regression.ipynb) in one key way:- Here, we assume that we have observations for **one task per input**. For each input, we specify the task of the input that we observe. (The kernel that we learn is expressed as a Hadamard product of an input kernel and a task kernel)- In the other notebook, we assume that we observe all tasks per input. (The kernel in that notebook is the Kronecker product of an input kernel and a task kernel).Multitask regression, first introduced in [this paper](https://papers.nips.cc/paper/3189-multi-task-gaussian-process-prediction.pdf) learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusodial).Given inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by$$ k([x, i], [x', j]) = k_\text{inputs}(x, x') * k_\text{tasks}(i, j)$$where $k_\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs.$k_\text{task}$ is a special kernel - the `IndexKernel` - which is a lookup table containing inter-task covariance. ###Code import math import torch import gpytorch from matplotlib import pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown Set up training dataIn the next cell, we set up the training data for this example. For each task we'll be using 50 random points on [0,1), which we evaluate the function on and add Gaussian noise to get the training labels. Note that different inputs are used for each task.We'll have two functions - a sine function (y1) and a cosine function (y2). ###Code train_x1 = torch.rand(50) train_x2 = torch.rand(50) train_y1 = torch.sin(train_x1 * (2 * math.pi)) + torch.randn(train_x1.size()) * 0.2 train_y2 = torch.cos(train_x2 * (2 * math.pi)) + torch.randn(train_x2.size()) * 0.2 ###Output _____no_output_____ ###Markdown Set up a Hadamard multitask modelThe model should be somewhat similar to the `ExactGP` model in the [simple regression example](../01_Exact_GPs/Simple_GP_Regression.ipynb).The differences:1. The model takes two input: the inputs (x) and indices. The indices indicate which task the observation is for.2. Rather than just using a RBFKernel, we're using that in conjunction with a IndexKernel.3. We don't use a ScaleKernel, since the IndexKernel will do some scaling for us. (This way we're not overparameterizing the kernel.) 
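###Markdown Before defining the model, a small torch-only illustration of the Hadamard structure from the introduction: with a toy RBF input kernel and a made-up 2×2 task covariance matrix B (both chosen here purely for illustration, independent of GPyTorch internals), the joint covariance is the elementwise product of the two Gram matrices.
###Code
# Toy illustration (not part of the original notebook): Hadamard product of an
# input kernel matrix and a task kernel matrix.
x = torch.tensor([0.1, 0.4, 0.9])
tasks = torch.tensor([0, 1, 0])                      # which task each point belongs to
B = torch.tensor([[1.0, 0.6], [0.6, 1.0]])           # made-up task covariance matrix

# RBF input kernel with unit lengthscale
K_input = torch.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
# Task kernel: simple lookup into B
K_task = B[tasks][:, tasks]
K_joint = K_input * K_task                           # k([x,i],[x',j]) = k_inputs * k_tasks
print(K_joint)
###Output _____no_output_____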
###Code class MultitaskGPModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood): super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.RBFKernel() # We learn an IndexKernel for 2 tasks # (so we'll actually learn 2x2=4 tasks with correlations) self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=2, rank=1) def forward(self,x,i): mean_x = self.mean_module(x) # Get input-input covariance covar_x = self.covar_module(x) # Get task-task covariance covar_i = self.task_covar_module(i) # Multiply the two together to get the covariance we want covar = covar_x.mul(covar_i) return gpytorch.distributions.MultivariateNormal(mean_x, covar) likelihood = gpytorch.likelihoods.GaussianLikelihood() train_i_task1 = torch.full_like(train_x1, dtype=torch.long, fill_value=0) train_i_task2 = torch.full_like(train_x2, dtype=torch.long, fill_value=1) full_train_x = torch.cat([train_x1, train_x2]) full_train_i = torch.cat([train_i_task1, train_i_task2]) full_train_y = torch.cat([train_y1, train_y2]) # Here we have two iterms that we're passing in as train_inputs model = MultitaskGPModel((full_train_x, full_train_i), full_train_y, likelihood) ###Output _____no_output_____ ###Markdown Training the modelIn the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.See the [simple regression example](../01_Exact_GPs/Simple_GP_Regression.ipynb) for more info on this step. ###Code # this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) training_iterations = 2 if smoke_test else 50 # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam([ {'params': model.parameters()}, # Includes GaussianLikelihood parameters ], lr=0.1) # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) for i in range(training_iterations): optimizer.zero_grad() output = model(full_train_x, full_train_i) loss = -mll(output, full_train_y) loss.backward() print('Iter %d/50 - Loss: %.3f' % (i + 1, loss.item())) optimizer.step() ###Output Iter 1/50 - Loss: 0.956 Iter 2/50 - Loss: 0.916 Iter 3/50 - Loss: 0.877 Iter 4/50 - Loss: 0.839 Iter 5/50 - Loss: 0.803 Iter 6/50 - Loss: 0.767 Iter 7/50 - Loss: 0.730 Iter 8/50 - Loss: 0.692 Iter 9/50 - Loss: 0.653 Iter 10/50 - Loss: 0.613 Iter 11/50 - Loss: 0.575 Iter 12/50 - Loss: 0.539 Iter 13/50 - Loss: 0.504 Iter 14/50 - Loss: 0.470 Iter 15/50 - Loss: 0.435 Iter 16/50 - Loss: 0.399 Iter 17/50 - Loss: 0.362 Iter 18/50 - Loss: 0.325 Iter 19/50 - Loss: 0.287 Iter 20/50 - Loss: 0.250 Iter 21/50 - Loss: 0.215 Iter 22/50 - Loss: 0.182 Iter 23/50 - Loss: 0.151 Iter 24/50 - Loss: 0.122 Iter 25/50 - Loss: 0.093 Iter 26/50 - Loss: 0.066 Iter 27/50 - Loss: 0.040 Iter 28/50 - Loss: 0.016 Iter 29/50 - Loss: -0.005 Iter 30/50 - Loss: -0.022 Iter 31/50 - Loss: -0.037 Iter 32/50 - Loss: -0.049 Iter 33/50 - Loss: -0.059 Iter 34/50 - Loss: -0.066 Iter 35/50 - Loss: -0.071 Iter 36/50 - Loss: -0.074 Iter 37/50 - Loss: -0.075 Iter 38/50 - Loss: -0.074 Iter 39/50 - Loss: -0.071 Iter 40/50 - Loss: -0.068 Iter 41/50 - Loss: -0.065 Iter 42/50 - Loss: -0.062 Iter 43/50 - Loss: -0.060 Iter 44/50 - Loss: -0.058 Iter 45/50 - Loss: -0.057 Iter 46/50 - Loss: -0.056 Iter 47/50 - Loss: -0.057 Iter 48/50 - Loss: -0.058 Iter 49/50 - Loss: -0.060 Iter 50/50 - Loss: -0.062 ###Markdown Make predictions with the 
model ###Code # Set into eval mode model.eval() likelihood.eval() # Initialize plots f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3)) # Test points every 0.02 in [0,1] test_x = torch.linspace(0, 1, 51) tast_i_task1 = torch.full_like(test_x, dtype=torch.long, fill_value=0) test_i_task2 = torch.full_like(test_x, dtype=torch.long, fill_value=1) # Make predictions - one task at a time # We control the task we cae about using the indices # The gpytorch.settings.fast_pred_var flag activates LOVE (for fast variances) # See https://arxiv.org/abs/1803.06058 with torch.no_grad(), gpytorch.settings.fast_pred_var(): observed_pred_y1 = likelihood(model(test_x, tast_i_task1)) observed_pred_y2 = likelihood(model(test_x, test_i_task2)) # Define plotting function def ax_plot(ax, train_y, train_x, rand_var, title): # Get lower and upper confidence bounds lower, upper = rand_var.confidence_region() # Plot training data as black stars ax.plot(train_x.detach().numpy(), train_y.detach().numpy(), 'k*') # Predictive mean as blue line ax.plot(test_x.detach().numpy(), rand_var.mean.detach().numpy(), 'b') # Shade in confidence ax.fill_between(test_x.detach().numpy(), lower.detach().numpy(), upper.detach().numpy(), alpha=0.5) ax.set_ylim([-3, 3]) ax.legend(['Observed Data', 'Mean', 'Confidence']) ax.set_title(title) # Plot both tasks ax_plot(y1_ax, train_y1, train_x1, observed_pred_y1, 'Observed Values (Likelihood)') ax_plot(y2_ax, train_y2, train_x2, observed_pred_y2, 'Observed Values (Likelihood)') ###Output _____no_output_____ ###Markdown Hadamard Multitask GP Regression IntroductionThis notebook demonstrates how to perform "Hadamard" multitask regression. This differs from the [multitask gp regression example notebook](./Multitask_GP_Regression.ipynb) in one key way:- Here, we assume that we have observations for **one task per input**. For each input, we specify the task of the input that we observe. (The kernel that we learn is expressed as a Hadamard product of an input kernel and a task kernel)- In the other notebook, we assume that we observe all tasks per input. (The kernel in that notebook is the Kronecker product of an input kernel and a task kernel).Multitask regression, first introduced in [this paper](https://papers.nips.cc/paper/3189-multi-task-gaussian-process-prediction.pdf) learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusodial).Given inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by$$ k([x, i], [x', j]) = k_\text{inputs}(x, x') * k_\text{tasks}(i, j)$$where $k_\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs.$k_\text{task}$ is a special kernel - the `IndexKernel` - which is a lookup table containing inter-task covariance. ###Code import math import torch import gpytorch from matplotlib import pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown Set up training dataIn the next cell, we set up the training data for this example. For each task we'll be using 50 random points on [0,1), which we evaluate the function on and add Gaussian noise to get the training labels. Note that different inputs are used for each task.We'll have two functions - a sine function (y1) and a cosine function (y2). 
###Code train_x1 = torch.rand(50) train_x2 = torch.rand(50) train_y1 = torch.sin(train_x1 * (2 * math.pi)) + torch.randn(train_x1.size()) * 0.2 train_y2 = torch.cos(train_x2 * (2 * math.pi)) + torch.randn(train_x2.size()) * 0.2 ###Output _____no_output_____ ###Markdown Set up a Hadamard multitask modelThe model should be somewhat similar to the `ExactGP` model in the [simple regression example](../01_Exact_GPs/Simple_GP_Regression.ipynb).The differences:1. The model takes two input: the inputs (x) and indices. The indices indicate which task the observation is for.2. Rather than just using a RBFKernel, we're using that in conjunction with a IndexKernel.3. We don't use a ScaleKernel, since the IndexKernel will do some scaling for us. (This way we're not overparameterizing the kernel.) ###Code class MultitaskGPModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood): super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.RBFKernel() # We learn an IndexKernel for 2 tasks # (so we'll actually learn 2x2=4 tasks with correlations) self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=2, rank=1) def forward(self,x,i): mean_x = self.mean_module(x) # Get input-input covariance covar_x = self.covar_module(x) # Get task-task covariance covar_i = self.task_covar_module(i) # Multiply the two together to get the covariance we want covar = covar_x.mul(covar_i) return gpytorch.distributions.MultivariateNormal(mean_x, covar) likelihood = gpytorch.likelihoods.GaussianLikelihood() train_i_task1 = torch.full_like(train_x1, dtype=torch.long, fill_value=0) train_i_task2 = torch.full_like(train_x2, dtype=torch.long, fill_value=1) full_train_x = torch.cat([train_x1, train_x2]) full_train_i = torch.cat([train_i_task1, train_i_task2]) full_train_y = torch.cat([train_y1, train_y2]) # Here we have two iterms that we're passing in as train_inputs model = MultitaskGPModel((full_train_x, full_train_i), full_train_y, likelihood) ###Output _____no_output_____ ###Markdown Training the modelIn the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.See the [simple regression example](../01_Exact_GPs/Simple_GP_Regression.ipynb) for more info on this step. 
###Code # this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) training_iterations = 2 if smoke_test else 50 # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) for i in range(training_iterations): optimizer.zero_grad() output = model(full_train_x, full_train_i) loss = -mll(output, full_train_y) loss.backward() print('Iter %d/50 - Loss: %.3f' % (i + 1, loss.item())) optimizer.step() ###Output Iter 1/50 - Loss: 0.956 Iter 2/50 - Loss: 0.916 Iter 3/50 - Loss: 0.877 Iter 4/50 - Loss: 0.839 Iter 5/50 - Loss: 0.803 Iter 6/50 - Loss: 0.767 Iter 7/50 - Loss: 0.730 Iter 8/50 - Loss: 0.692 Iter 9/50 - Loss: 0.653 Iter 10/50 - Loss: 0.613 Iter 11/50 - Loss: 0.575 Iter 12/50 - Loss: 0.539 Iter 13/50 - Loss: 0.504 Iter 14/50 - Loss: 0.470 Iter 15/50 - Loss: 0.435 Iter 16/50 - Loss: 0.399 Iter 17/50 - Loss: 0.362 Iter 18/50 - Loss: 0.325 Iter 19/50 - Loss: 0.287 Iter 20/50 - Loss: 0.250 Iter 21/50 - Loss: 0.215 Iter 22/50 - Loss: 0.182 Iter 23/50 - Loss: 0.151 Iter 24/50 - Loss: 0.122 Iter 25/50 - Loss: 0.093 Iter 26/50 - Loss: 0.066 Iter 27/50 - Loss: 0.040 Iter 28/50 - Loss: 0.016 Iter 29/50 - Loss: -0.005 Iter 30/50 - Loss: -0.022 Iter 31/50 - Loss: -0.037 Iter 32/50 - Loss: -0.049 Iter 33/50 - Loss: -0.059 Iter 34/50 - Loss: -0.066 Iter 35/50 - Loss: -0.071 Iter 36/50 - Loss: -0.074 Iter 37/50 - Loss: -0.075 Iter 38/50 - Loss: -0.074 Iter 39/50 - Loss: -0.071 Iter 40/50 - Loss: -0.068 Iter 41/50 - Loss: -0.065 Iter 42/50 - Loss: -0.062 Iter 43/50 - Loss: -0.060 Iter 44/50 - Loss: -0.058 Iter 45/50 - Loss: -0.057 Iter 46/50 - Loss: -0.056 Iter 47/50 - Loss: -0.057 Iter 48/50 - Loss: -0.058 Iter 49/50 - Loss: -0.060 Iter 50/50 - Loss: -0.062 ###Markdown Make predictions with the model ###Code # Set into eval mode model.eval() likelihood.eval() # Initialize plots f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3)) # Test points every 0.02 in [0,1] test_x = torch.linspace(0, 1, 51) tast_i_task1 = torch.full_like(test_x, dtype=torch.long, fill_value=0) test_i_task2 = torch.full_like(test_x, dtype=torch.long, fill_value=1) # Make predictions - one task at a time # We control the task we cae about using the indices # The gpytorch.settings.fast_pred_var flag activates LOVE (for fast variances) # See https://arxiv.org/abs/1803.06058 with torch.no_grad(), gpytorch.settings.fast_pred_var(): observed_pred_y1 = likelihood(model(test_x, tast_i_task1)) observed_pred_y2 = likelihood(model(test_x, test_i_task2)) # Define plotting function def ax_plot(ax, train_y, train_x, rand_var, title): # Get lower and upper confidence bounds lower, upper = rand_var.confidence_region() # Plot training data as black stars ax.plot(train_x.detach().numpy(), train_y.detach().numpy(), 'k*') # Predictive mean as blue line ax.plot(test_x.detach().numpy(), rand_var.mean.detach().numpy(), 'b') # Shade in confidence ax.fill_between(test_x.detach().numpy(), lower.detach().numpy(), upper.detach().numpy(), alpha=0.5) ax.set_ylim([-3, 3]) ax.legend(['Observed Data', 'Mean', 'Confidence']) ax.set_title(title) # Plot both tasks ax_plot(y1_ax, train_y1, train_x1, observed_pred_y1, 'Observed Values (Likelihood)') ax_plot(y2_ax, train_y2, train_x2, observed_pred_y2, 'Observed Values (Likelihood)') ###Output _____no_output_____ ###Markdown Hadamard 
Multitask GP Regression IntroductionThis notebook demonstrates how to perform "Hadamard" multitask regression. This differs from the [multitask gp regression example notebook](./Multitask_GP_Regression.ipynb) in one key way:- Here, we assume that we have observations for **one task per input**. For each input, we specify the task of the input that we observe. (The kernel that we learn is expressed as a Hadamard product of an input kernel and a task kernel)- In the other notebook, we assume that we observe all tasks per input. (The kernel in that notebook is the Kronecker product of an input kernel and a task kernel).Multitask regression, first introduced in [this paper](https://papers.nips.cc/paper/3189-multi-task-gaussian-process-prediction.pdf) learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusodial).Given inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by$$ k([x, i], [x', j]) = k_\text{inputs}(x, x') * k_\text{tasks}(i, j)$$where $k_\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs.$k_\text{task}$ is a special kernel - the `IndexKernel` - which is a lookup table containing inter-task covariance. ###Code import math import torch import gpytorch from matplotlib import pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown Set up training dataIn the next cell, we set up the training data for this example. For each task we'll be using 50 random points on [0,1), which we evaluate the function on and add Gaussian noise to get the training labels. Note that different inputs are used for each task.We'll have two functions - a sine function (y1) and a cosine function (y2). ###Code train_x1 = torch.rand(50) train_x2 = torch.rand(50) train_y1 = torch.sin(train_x1 * (2 * math.pi)) + torch.randn(train_x1.size()) * 0.2 train_y2 = torch.cos(train_x2 * (2 * math.pi)) + torch.randn(train_x2.size()) * 0.2 ###Output _____no_output_____ ###Markdown Set up a Hadamard multitask modelThe model should be somewhat similar to the `ExactGP` model in the [simple regression example](../01_Exact_GPs/Simple_GP_Regression.ipynb).The differences:1. The model takes two input: the inputs (x) and indices. The indices indicate which task the observation is for.2. Rather than just using a RBFKernel, we're using that in conjunction with a IndexKernel.3. We don't use a ScaleKernel, since the IndexKernel will do some scaling for us. (This way we're not overparameterizing the kernel.) 
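###Markdown Before reading the GPyTorch module in the next cell, it can help to see the Hadamard construction written out by hand. The cell below is an illustrative sketch that is not part of the original notebook: it builds a toy RBF input covariance and a toy 2x2 task covariance of the form B B^T + diag(v), then multiplies them elementwise for a few labelled points. The lengthscale and the values of B and v are made-up numbers chosen only for illustration.
###Code
import torch

# Three toy inputs and the task index of each observation (tasks 0, 0, 1)
x = torch.tensor([0.1, 0.4, 0.7])
i = torch.tensor([0, 0, 1])

# Input covariance: a hand-rolled RBF kernel with an assumed lengthscale of 0.3
lengthscale = 0.3
sqdist = (x.unsqueeze(1) - x.unsqueeze(0)) ** 2
K_input = torch.exp(-0.5 * sqdist / lengthscale ** 2)

# Task covariance: an assumed 2x2 lookup table B B^T + diag(v), like an IndexKernel
B = torch.tensor([[1.0], [0.8]])
v = torch.tensor([0.1, 0.1])
K_task_table = B @ B.t() + torch.diag(v)
K_task = K_task_table[i][:, i]   # look up k_tasks(i, j) for every pair of points

# Elementwise (Hadamard) product gives the joint covariance k([x, i], [x', j])
K_joint = K_input * K_task
print(K_joint)
###Output _____no_output_____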
###Code class MultitaskGPModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood): super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.RBFKernel() # We learn an IndexKernel for 2 tasks # (so we'll actually learn 2x2=4 tasks with correlations) self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=2, rank=1) def forward(self,x,i): mean_x = self.mean_module(x) # Get input-input covariance covar_x = self.covar_module(x) # Get task-task covariance covar_i = self.task_covar_module(i) # Multiply the two together to get the covariance we want covar = covar_x.mul(covar_i) return gpytorch.distributions.MultivariateNormal(mean_x, covar) likelihood = gpytorch.likelihoods.GaussianLikelihood() train_i_task1 = torch.full((train_x1.shape[0],1), dtype=torch.long, fill_value=0) train_i_task2 = torch.full((train_x2.shape[0],1), dtype=torch.long, fill_value=1) full_train_x = torch.cat([train_x1, train_x2]) full_train_i = torch.cat([train_i_task1, train_i_task2]) full_train_y = torch.cat([train_y1, train_y2]) # Here we have two iterms that we're passing in as train_inputs model = MultitaskGPModel((full_train_x, full_train_i), full_train_y, likelihood) ###Output _____no_output_____ ###Markdown Training the modelIn the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.See the [simple regression example](../01_Exact_GPs/Simple_GP_Regression.ipynb) for more info on this step. ###Code # this is for running the notebook in our testing framework import os smoke_test = ('CI' in os.environ) training_iterations = 2 if smoke_test else 50 # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) for i in range(training_iterations): optimizer.zero_grad() output = model(full_train_x, full_train_i) loss = -mll(output, full_train_y) loss.backward() print('Iter %d/50 - Loss: %.3f' % (i + 1, loss.item())) optimizer.step() ###Output Iter 1/50 - Loss: 1.000 Iter 2/50 - Loss: 0.960 Iter 3/50 - Loss: 0.918 Iter 4/50 - Loss: 0.874 Iter 5/50 - Loss: 0.831 Iter 6/50 - Loss: 0.788 Iter 7/50 - Loss: 0.745 Iter 8/50 - Loss: 0.702 Iter 9/50 - Loss: 0.659 Iter 10/50 - Loss: 0.618 Iter 11/50 - Loss: 0.580 Iter 12/50 - Loss: 0.545 Iter 13/50 - Loss: 0.511 Iter 14/50 - Loss: 0.478 Iter 15/50 - Loss: 0.445 Iter 16/50 - Loss: 0.412 Iter 17/50 - Loss: 0.378 Iter 18/50 - Loss: 0.344 Iter 19/50 - Loss: 0.309 Iter 20/50 - Loss: 0.276 Iter 21/50 - Loss: 0.243 Iter 22/50 - Loss: 0.211 Iter 23/50 - Loss: 0.182 Iter 24/50 - Loss: 0.154 Iter 25/50 - Loss: 0.129 Iter 26/50 - Loss: 0.105 Iter 27/50 - Loss: 0.085 Iter 28/50 - Loss: 0.067 Iter 29/50 - Loss: 0.052 Iter 30/50 - Loss: 0.039 Iter 31/50 - Loss: 0.029 Iter 32/50 - Loss: 0.021 Iter 33/50 - Loss: 0.015 Iter 34/50 - Loss: 0.012 Iter 35/50 - Loss: 0.010 Iter 36/50 - Loss: 0.009 Iter 37/50 - Loss: 0.009 Iter 38/50 - Loss: 0.009 Iter 39/50 - Loss: 0.009 Iter 40/50 - Loss: 0.008 Iter 41/50 - Loss: 0.008 Iter 42/50 - Loss: 0.009 Iter 43/50 - Loss: 0.009 Iter 44/50 - Loss: 0.010 Iter 45/50 - Loss: 0.010 Iter 46/50 - Loss: 0.010 Iter 47/50 - Loss: 0.010 Iter 48/50 - Loss: 0.008 Iter 49/50 - Loss: 0.007 Iter 50/50 - Loss: 0.005 ###Markdown Make predictions with the model ###Code # Set 
into eval mode model.eval() likelihood.eval() # Initialize plots f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3)) # Test points every 0.02 in [0,1] test_x = torch.linspace(0, 1, 51) test_i_task1 = torch.full((test_x.shape[0],1), dtype=torch.long, fill_value=0) test_i_task2 = torch.full((test_x.shape[0],1), dtype=torch.long, fill_value=1) # Make predictions - one task at a time # We control the task we cae about using the indices # The gpytorch.settings.fast_pred_var flag activates LOVE (for fast variances) # See https://arxiv.org/abs/1803.06058 with torch.no_grad(), gpytorch.settings.fast_pred_var(): observed_pred_y1 = likelihood(model(test_x, test_i_task1)) observed_pred_y2 = likelihood(model(test_x, test_i_task2)) # Define plotting function def ax_plot(ax, train_y, train_x, rand_var, title): # Get lower and upper confidence bounds lower, upper = rand_var.confidence_region() # Plot training data as black stars ax.plot(train_x.detach().numpy(), train_y.detach().numpy(), 'k*') # Predictive mean as blue line ax.plot(test_x.detach().numpy(), rand_var.mean.detach().numpy(), 'b') # Shade in confidence ax.fill_between(test_x.detach().numpy(), lower.detach().numpy(), upper.detach().numpy(), alpha=0.5) ax.set_ylim([-3, 3]) ax.legend(['Observed Data', 'Mean', 'Confidence']) ax.set_title(title) # Plot both tasks ax_plot(y1_ax, train_y1, train_x1, observed_pred_y1, 'Observed Values (Likelihood)') ax_plot(y2_ax, train_y2, train_x2, observed_pred_y2, 'Observed Values (Likelihood)') ###Output _____no_output_____
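###Markdown As a follow-up, it can be interesting to look at what the `IndexKernel` actually learned about how related the two tasks are. The cell below is an illustrative sketch that is not part of the original notebook; it assumes the `IndexKernel` exposes `covar_factor` and `var` (as in recent GPyTorch releases) and rebuilds the task covariance B B^T + diag(v) by hand, then normalises it to a correlation matrix.
###Code
# Reconstruct the learned inter-task covariance from the IndexKernel parameters
# (the attribute names are an assumption about the GPyTorch version in use)
with torch.no_grad():
    B = model.task_covar_module.covar_factor   # num_tasks x rank factor
    v = model.task_covar_module.var            # per-task diagonal variance
    task_covar = B @ B.t() + torch.diag(v)

    # Normalise to a correlation matrix to read off task similarity directly
    d = task_covar.diag().sqrt()
    task_corr = task_covar / (d.unsqueeze(0) * d.unsqueeze(1))

print('Learned task covariance:\n', task_covar.numpy())
print('Learned task correlation:\n', task_corr.numpy())
###Output _____no_output_____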
notebooks/01-02 Strings.ipynb
###Markdown 01 - 02 Python Strings A Python string is usually a bit of text that you want to display or use or export out of the program that you are writing (to a file or over the network). Technically, strings are an immutable `sequence` of characters. > We will talk and learn more about sequences in the Data Structures module. Python has a built-in string class called `str` with many handy features. Python knows you want something to be a string when you enclose the text with either single quotes ( ' ) or double quotes ( " ). You must've seen this in our previous tutorials. If not, check a very basic example below: ###Code var = 'Hello World' print("Contents of var: ", var) print("Type of var: ", type(var)) ###Output Contents of var: Hello World Type of var: <class 'str'> ###Markdown A string literal can span multiple lines; to do that, there must be a backslash ( `\` ) at the end of each line to escape the newline, because by default the return key is treated as the end of the line. However, if you do not feel comfortable using backslashes, you can put your text between triple quotes ( `"""` ) or ( `'''` ). If you don't want characters prefaced by `\` to be interpreted as special characters, you can use raw strings by adding the letter `r` before the first quote. A very basic example would be something like this: ###Code print('C:\name\of\dir') # even using triple quotes won't save you! print(r'C:\name\of\dir') ###Output C:\name\of\dir ###Markdown 01 - 02.01 String Concatenation Python strings are 'immutable', which means they cannot be changed after they are created. So if we concatenate two strings, Python will take the two strings and build a new, third string with the concatenated value of the first and the second string. ###Code var1 = 'Hello' # String 1 var2 = 'World' # String 2 var3 = var1 + var2 # Concatenate the two strings as String 3 print(var3) ###Output HelloWorld ###Markdown Concatenation of strings will also happen when string literals are placed next to each other. ###Code var1 = 'Hello' 'World' print(var1) ###Output HelloWorld ###Markdown Concatenation can only be performed on variables of the same datatype, i.e. a string concatenation can only be performed on two strings or two variables that have str as their datatype. If you try to perform string concatenation on a string and an integer, Python will throw a `TypeError`. ###Code var1 = 'Hello' var2 = 1 print(var1+var2) ###Output _____no_output_____ ###Markdown 01 - 02.02 String Indexing Characters in a string can be accessed using the standard [ ] syntax. Python uses zero-based indexing, which means that the first character in a string is indexed at the `0th` location. 
So, for example if the string is '`Python`' then, its length can be obtained as ###Code var1 = 'Python' len(var1) ###Output _____no_output_____ ###Markdown and its positional values can be obtained by ###Code var1[0] var1[5] ###Output _____no_output_____ ###Markdown Now, if we try to change the positional value to something else, we will get a `TypeError` proving that strings are immutable ###Code var1[0] = 'J' ###Output _____no_output_____ ###Markdown Apart from obtaining positional values of `var1` using (*positive*) index values between `0` to `5` (or between `0` and `len(var1)-1`), we can also index it by entering negative index values ###Code var1[-6] var1[-1] ###Output _____no_output_____ ###Markdown This works because when you enter a non negative index value, it is considered as indexed from **left to right** and when you enter the negative index values (negative indexing starts from `-1`), python's interpreter is intelligent enough to understand that you meant to get the value indexed from right to left.``` +---+---+---+---+---+---+ | P | y | t | h | o | n | +---+---+---+---+---+---+ 0 1 2 3 4 5 -6 -5 -4 -3 -2 -1``` 01 - 02.03 String SlicingThe 'Slice' syntax is a handy way to refer to sub-parts of strings. The slice `var1[start : end]` is the elements beginning at start and extending up to end (**not including end**). Look at the above Python string literals representation and work on following examples: ###Code var1[0:3] var1[:-3] var1[:2]+var1[-4:] ###Output _____no_output_____ ###Markdown 01 - 02.04 String FormattingEverything that we have seen till now had a string that **cannot** be modified but what if we now want to modify a few words of the string and leave the remaining string unmodified?For people familiar with C++, Python has a printf() - like facility to put together a string using `%` operator. Python uses `%d` operator to insert an integer, `%s` for string and `%f` for floating point. Example: ###Code text = '%s World. %s %d %d %f' print(text) text %('Hello', 'Check', 1, 2, 3) ###Output _____no_output_____ ###Markdown However, with the [PEP 3101](https://www.python.org/dev/peps/pep-3101/), the `%` operator has been replaced with a string method called `format` which can take arbitrary number of arguments.According to this method, the "fields to be replaced" are surrounded by curly braces `{ }`. The curly braces and the "code" inside will be substituted with a formatted value from the arguments passed to the `format` method. Anything else, which is not contained in curly braces will be literally printed, i.e. without any changes. ###Code text = '{} World. {} {} {} {}' # you can also do # text = '{0} World. {1} {2} {3} {4}' print(text) text.format('Hello', 'Check', 1, 2, 3) ###Output _____no_output_____ ###Markdown Notice a minor difference in the output of `text` variable when we used `%`.. it is the datatype of value `3`. We want to pass the positional value as integer but want to make sure that it is formatted as a float in the final form. The way to do this is by specifying the type of value expected in the curly braces preceeded by a colon ( `:` ) ###Code text = '{} World. {} {} {} {:.2f}' # you can also do #text = '{val1} World. {val2} {val3} {val4} {val5:.2f}' print(text) text.format('Hello', 'Check', 1, 2, 3) # if you uncomment the previous cell, then use this: #text.format(val1='Hello', val2='Check', val3=1, val4=2, val5=3) ###Output _____no_output_____ ###Markdown We will learn more neat tricks about `format` method as we proceed through the modules. 
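###Markdown As a small teaser of those tricks (this cell is a supplementary sketch, not part of the original lesson), the `format` method also understands alignment, padding and named fields inside the curly braces:
###Code
print('{:>10}'.format('right'))     # right-align in a field of width 10
print('{:<10}|'.format('left'))     # left-align (the | shows where the field ends)
print('{:^10}'.format('center'))    # center within the field
print('{:08.3f}'.format(3.14159))   # zero-pad a float to width 8 with 3 decimals
print('{name} likes {lang}'.format(name='Sam', lang='Python'))  # named fields
###Output _____no_output_____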
Keep an eye out for such tricks.. 01 - 02.05 Built-in String Methods Now that we know about strings and some basic manipulation on them, let's look at some more built-in methods that can be used. .. 02.05.01 capitalize It returns a copy of the string with only its first character capitalized. > Remember, the same string is not modified, because strings are immutable. Example: ###Code var1 = 'python' var1.capitalize() ###Output _____no_output_____ ###Markdown .. 02.05.02 center Returns the *input string* centered in a string of the given width. Padding is done using the specified fill character (a space by default). Example: ###Code var1.center(10, '!') ###Output _____no_output_____ ###Markdown .. 02.05.03 count The method count() returns the number of occurrences of a substring in the range [start, end]. Example: ###Code var1.count('t', 0, len(var1)) ###Output _____no_output_____ ###Markdown .. 02.05.04 endswith Returns True if the string ends with the specified suffix, otherwise False. ###Code var1 = "Hello World" var1.endswith("World") ###Output _____no_output_____ ###Markdown .. 02.05.05 find Determines if the substring occurs in the string, optionally between beg and end. If found, it returns the index of the first occurrence; otherwise it returns -1. Example: ###Code var2 = "This is a test string" var2.find("is") ###Output _____no_output_____ ###Markdown .. 02.05.06 rfind Same as find(), but searches backwards in the string. .. 02.05.07 index Same as find(), but raises an exception if the substring is not found. .. 02.05.08 isalnum Returns True if the string has at least one character and all characters are alphanumeric, and False otherwise. Example: ###Code var3 = 'Welcome2015' var3.isalnum() ###Output _____no_output_____ ###Markdown .. 02.05.09 join Concatenates the elements of a sequence into a single string, using the string it is called on as the separator. Example: ###Code var1 = '' var2 = ('p', 'y', 't', 'h', 'o', 'n') var1.join(var2) ###Output _____no_output_____ ###Markdown .. 02.05.10 lstrip Returns a copy of the string in which the given leading characters have been stripped from the beginning (left side) of the string. Example: ###Code var = '.......python' var.lstrip('.') ###Output _____no_output_____ ###Markdown .. 02.05.11 rstrip Removes all trailing whitespace (or the given trailing characters) from a string. .. 02.05.12 max Returns the maximum alphabetical character from the string. Example: ###Code var = 'python' max(var) # max() also works on other sequences, such as lists of numbers ###Output _____no_output_____ ###Markdown .. 02.05.13 min Returns the minimum alphabetical character from the string. .. 02.05.14 replace Returns a copy of the string with occurrences of the substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced. Example: ###Code var = 'This is Python' var.replace('is', 'was', 1) ###Output _____no_output_____ ###Markdown .. 02.05.15 rjust Returns the original string right-justified in a field of the given width, padded with the specified fill character (a space by default). Example: ###Code var = 'Python' var.rjust(10,'$') ###Output _____no_output_____ ###Markdown .. 02.05.16 split Returns a `list` of all the words in the string, separated by the given separator string. > We will study lists a little later; for now, just remember that they are one of the data structures in Python. Example: ###Code var = 'This is Python' var.split(' ') ###Output _____no_output_____
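###Markdown A few of the methods above (rfind, index, rstrip and min) were described without an accompanying cell. The short cell below is a supplementary sketch in the same spirit as the other examples:
###Code
var = 'This is a test string   '
print(var.rfind('is'))     # index of the last occurrence of 'is' (searches from the right)
print(var.index('test'))   # like find(), but raises ValueError when the substring is missing
print(var.rstrip())        # copy of the string with trailing whitespace removed
print(min('python'))       # the minimum (alphabetically smallest) character
###Output _____no_output_____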
Lec6/Assignment5_CIFAR-100_with_ResNet.ipynb
###Markdown Assignment5. CIFAR-100 Classification with ResNetLab8에서는 assignment4-CIFAR-10 classification with CNN코드의 model architecture 부분을 수정하여 ResNet architecture를 통해 CIFAR-100 classification을 해보았습니다.Assignment5에서는 model capacity와 hyperparameter들을 적절히 조절하여 CIFAR-100 classification의 성능을 높여봅시다.[ResNet implementation with Pytorch](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py)를 참고하였습니다. 아래 명령을 통하여 Colab 서버 컴퓨터 내에 결과들을 저장할 results 폴더를 생성합니다. 이미 생성되어 있는 경우 `mkdir: cannot create directory 'results': File exists`와 같은 에러가 발생하나, 폴더가 이미 존재한다면 상관없으니 넘어가 줍시다. ###Code !mkdir results import torch import torchvision import torchvision.transforms as transforms import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import argparse import numpy as np import time from copy import deepcopy # Add Deepcopy for args import seaborn as sns import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Data Preparation기존 CIFAR-10 데이터 저장 코드에서 10을 100으로 바꿔주기만 하면 CIFAR-100 dataset을 사용할 수 있습니다. ###Code transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR100(root='./data', train=True, download=True, transform=transform) trainset, valset = torch.utils.data.random_split(trainset, [40000, 10000]) testset = torchvision.datasets.CIFAR100(root='./data', train=False, download=True, transform=transform) partition = {'train': trainset, 'val':valset, 'test':testset} ###Output Downloading https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz to ./data/cifar-100-python.tar.gz Files already downloaded and verified ###Markdown Model Architecture[ResNet implementation with Pytorch](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py)를 참고하여 ResNet architecture를 구현해봅시다. conv3x3 and conv1x1 functions자주 사용하게 될 1x1과 3x3 filter convolutional layer는 꼭 필요한 parameter인 `in_planes`, `out_planes`, `stride`만을 받아 convolutional layer module를 return해주는 함수를 만들어 사용합시다. 
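###Markdown As a quick sanity check on the helpers defined in the next cell, recall the convolution output-size rule H_out = floor((H + 2*padding - kernel_size) / stride) + 1. The illustrative cell below is not part of the original assignment; it just confirms that the 3x3/padding-1 setting preserves spatial size at stride 1 and halves it at stride 2, while the 1x1 convolution only changes the number of channels.
###Code
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)  # a dummy CIFAR-sized feature map

conv3_s1 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1, bias=False)
conv3_s2 = nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1, bias=False)
conv1_s1 = nn.Conv2d(16, 32, kernel_size=1, stride=1, bias=False)

print(conv3_s1(x).shape)  # torch.Size([1, 32, 32, 32]) -> spatial size preserved
print(conv3_s2(x).shape)  # torch.Size([1, 32, 16, 16]) -> spatial size halved
print(conv1_s1(x).shape)  # torch.Size([1, 32, 32, 32]) -> only the channels change
###Output _____no_output_____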
###Code def conv3x3(in_planes, out_planes, stride=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False) def conv1x1(in_planes, out_planes, stride=1): """1x1 convolution""" return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) ###Output _____no_output_____ ###Markdown BasicBlock Module2개의 3x3 convolution layer와 skip connection으로 구성된 `BasicBlock` module을 구현해봅시다.[BasicBlock Module Image](https://imgur.com/a/M9gZjWc) ###Code class BasicBlock(nn.Module): expansion = 1 def __init__(self, inplanes, planes, stride=1, downsample=None): super(BasicBlock, self).__init__() self.conv1 = conv3x3(inplanes, planes, stride) self.bn1 = nn.BatchNorm2d(planes) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes, planes) self.bn2 = nn.BatchNorm2d(planes) self.downsample = downsample self.stride = stride def forward(self, x): identity = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) if self.downsample is not None: identity = self.downsample(x) out += identity out = self.relu(out) return out ###Output _____no_output_____ ###Markdown Bottleneck Module50개 이상의 layer를 가진 ResNet architecture에서 computational efficiency를 증가시키기 위해 3x3 convolution layer 앞뒤로 1x1 convolution layer를 추가한 `Bottleneck` module을 구현해 봅시다.[Bottleneck Module Image](https://imgur.com/a/HrpmJbU) ###Code class Bottleneck(nn.Module): expansion = 4 def __init__(self, inplanes, planes, stride=1, downsample=None): super(Bottleneck, self).__init__() self.conv1 = conv1x1(inplanes, planes) self.bn1 = nn.BatchNorm2d(planes) self.conv2 = conv3x3(planes, planes, stride) self.bn2 = nn.BatchNorm2d(planes) self.conv3 = conv1x1(planes, planes * self.expansion) self.bn3 = nn.BatchNorm2d(planes * self.expansion) self.relu = nn.ReLU(inplace=True) self.downsample = downsample self.stride = stride def forward(self, x): identity = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out = self.relu(out) out = self.conv3(out) out = self.bn3(out) if self.downsample is not None: identity = self.downsample(x) out += identity out = self.relu(out) return out ###Output _____no_output_____ ###Markdown ResNet Module적절한 Block type과 layer 수, 그리고 최종적으로 분류할 class 갯수를 받아 ResNet architecture를 구현해봅시다. ###Code class ResNet(nn.Module): def __init__(self, block, layers, num_classes=1000, zero_init_residual=False): super(ResNet, self).__init__() self.inplanes = 64 self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0]) self.layer2 = self._make_layer(block, 128, layers[1], stride=2) self.layer3 = self._make_layer(block, 256, layers[2], stride=2) self.layer4 = self._make_layer(block, 512, layers[3], stride=2) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.fc = nn.Linear(512 * block.expansion, num_classes) for m in self.modules(): if isinstance(m, nn.Conv2d): nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') elif isinstance(m, nn.BatchNorm2d): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) # Zero-initialize the last BN in each residual branch, # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
# This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 if zero_init_residual: for m in self.modules(): if isinstance(m, Bottleneck): nn.init.constant_(m.bn3.weight, 0) elif isinstance(m, BasicBlock): nn.init.constant_(m.bn2.weight, 0) def _make_layer(self, block, planes, blocks, stride=1): downsample = None if stride != 1 or self.inplanes != planes * block.expansion: downsample = nn.Sequential( conv1x1(self.inplanes, planes * block.expansion, stride), nn.BatchNorm2d(planes * block.expansion), ) layers = [] layers.append(block(self.inplanes, planes, stride, downsample)) self.inplanes = planes * block.expansion for _ in range(1, blocks): layers.append(block(self.inplanes, planes)) return nn.Sequential(*layers) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = x.view(x.size(0), -1) x = self.fc(x) return x ###Output _____no_output_____ ###Markdown Train, Validate, Test and Experiment ###Code def train(net, partition, optimizer, criterion, args): trainloader = torch.utils.data.DataLoader(partition['train'], batch_size=args.train_batch_size, shuffle=True, num_workers=2) net.train() correct = 0 total = 0 train_loss = 0.0 for i, data in enumerate(trainloader, 0): optimizer.zero_grad() # [21.01.05 오류 수정] 매 Epoch 마다 .zero_grad()가 실행되는 것을 매 iteration 마다 실행되도록 수정했습니다. # get the inputs inputs, labels = data inputs = inputs.cuda() labels = labels.cuda() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() train_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() train_loss = train_loss / len(trainloader) train_acc = 100 * correct / total return net, train_loss, train_acc def validate(net, partition, criterion, args): valloader = torch.utils.data.DataLoader(partition['val'], batch_size=args.test_batch_size, shuffle=False, num_workers=2) net.eval() correct = 0 total = 0 val_loss = 0 with torch.no_grad(): for data in valloader: images, labels = data images = images.cuda() labels = labels.cuda() outputs = net(images) loss = criterion(outputs, labels) val_loss += loss.item() _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() val_loss = val_loss / len(valloader) val_acc = 100 * correct / total return val_loss, val_acc def test(net, partition, args): testloader = torch.utils.data.DataLoader(partition['test'], batch_size=args.test_batch_size, shuffle=False, num_workers=2) net.eval() correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data images = images.cuda() labels = labels.cuda() outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() test_acc = 100 * correct / total return test_acc def experiment(partition, args): net = net = CNN('VGG19', 3) net.cuda() criterion = nn.CrossEntropyLoss() if args.optim == 'SGD': optimizer = optim.SGD(net.parameters(), lr=args.lr, weight_decay=args.l2) elif args.optim == 'RMSprop': optimizer = optim.RMSprop(net.parameters(), lr=args.lr, weight_decay=args.l2) elif args.optim == 'Adam': optimizer = optim.Adam(net.parameters(), lr=args.lr, weight_decay=args.l2) else: raise ValueError('In-valid optimizer choice') train_losses = [] val_losses = [] train_accs = [] val_accs = [] for epoch in range(args.epoch): # loop over the 
dataset multiple times ts = time.time() net, train_loss, train_acc = train(net, partition, optimizer, criterion, args) val_loss, val_acc = validate(net, partition, criterion, args) te = time.time() train_losses.append(train_loss) val_losses.append(val_loss) train_accs.append(train_acc) val_accs.append(val_acc) print('Epoch {}, Acc(train/val): {:2.2f}/{:2.2f}, Loss(train/val) {:2.2f}/{:2.2f}. Took {:2.2f} sec'.format(epoch, train_acc, val_acc, train_loss, val_loss, te-ts)) test_acc = test(net, partition, args) result = {} result['train_losses'] = train_losses result['val_losses'] = val_losses result['train_accs'] = train_accs result['val_accs'] = val_accs result['train_acc'] = train_acc result['val_acc'] = val_acc result['test_acc'] = test_acc return vars(args), result ###Output _____no_output_____ ###Markdown Manage Experiment Result ###Code import hashlib import json from os import listdir from os.path import isfile, join import pandas as pd def save_exp_result(setting, result): exp_name = setting['exp_name'] del setting['epoch'] del setting['test_batch_size'] hash_key = hashlib.sha1(str(setting).encode()).hexdigest()[:6] filename = './results/{}-{}.json'.format(exp_name, hash_key) result.update(setting) with open(filename, 'w') as f: json.dump(result, f) def load_exp_result(exp_name): dir_path = './results' filenames = [f for f in listdir(dir_path) if isfile(join(dir_path, f)) if '.json' in f] list_result = [] for filename in filenames: if exp_name in filename: with open(join(dir_path, filename), 'r') as infile: results = json.load(infile) list_result.append(results) df = pd.DataFrame(list_result) # .drop(columns=[]) return df ###Output _____no_output_____ ###Markdown Visualizatin Utility ###Code def plot_acc(var1, var2, df): fig, ax = plt.subplots(1, 3) fig.set_size_inches(15, 6) sns.set_style("darkgrid", {"axes.facecolor": ".9"}) sns.barplot(x=var1, y='train_acc', hue=var2, data=df, ax=ax[0]) sns.barplot(x=var1, y='val_acc', hue=var2, data=df, ax=ax[1]) sns.barplot(x=var1, y='test_acc', hue=var2, data=df, ax=ax[2]) ax[0].set_title('Train Accuracy') ax[1].set_title('Validation Accuracy') ax[2].set_title('Test Accuracy') def plot_loss_variation(var1, var2, df, **kwargs): list_v1 = df[var1].unique() list_v2 = df[var2].unique() list_data = [] for value1 in list_v1: for value2 in list_v2: row = df.loc[df[var1]==value1] row = row.loc[df[var2]==value2] train_losses = list(row.train_losses)[0] val_losses = list(row.val_losses)[0] for epoch, train_loss in enumerate(train_losses): list_data.append({'type':'train', 'loss':train_loss, 'epoch':epoch, var1:value1, var2:value2}) for epoch, val_loss in enumerate(val_losses): list_data.append({'type':'val', 'loss':val_loss, 'epoch':epoch, var1:value1, var2:value2}) df = pd.DataFrame(list_data) g = sns.FacetGrid(df, row=var2, col=var1, hue='type', **kwargs) g = g.map(plt.plot, 'epoch', 'loss', marker='.') g.add_legend() g.fig.suptitle('Train loss vs Val loss') plt.subplots_adjust(top=0.89) # 만약 Title이 그래프랑 겹친다면 top 값을 조정해주면 됩니다! 함수 인자로 받으면 그래프마다 조절할 수 있겠죠? 
def plot_acc_variation(var1, var2, df, **kwargs): list_v1 = df[var1].unique() list_v2 = df[var2].unique() list_data = [] for value1 in list_v1: for value2 in list_v2: row = df.loc[df[var1]==value1] row = row.loc[df[var2]==value2] train_accs = list(row.train_accs)[0] val_accs = list(row.val_accs)[0] test_acc = list(row.test_acc)[0] for epoch, train_acc in enumerate(train_accs): list_data.append({'type':'train', 'Acc':train_acc, 'test_acc':test_acc, 'epoch':epoch, var1:value1, var2:value2}) for epoch, val_acc in enumerate(val_accs): list_data.append({'type':'val', 'Acc':val_acc, 'test_acc':test_acc, 'epoch':epoch, var1:value1, var2:value2}) df = pd.DataFrame(list_data) g = sns.FacetGrid(df, row=var2, col=var1, hue='type', **kwargs) g = g.map(plt.plot, 'epoch', 'Acc', marker='.') def show_acc(x, y, metric, **kwargs): plt.scatter(x, y, alpha=0.3, s=1) metric = "Test Acc: {:1.3f}".format(list(metric.values)[0]) plt.text(0.05, 0.95, metric, horizontalalignment='left', verticalalignment='center', transform=plt.gca().transAxes, bbox=dict(facecolor='yellow', alpha=0.5, boxstyle="round,pad=0.1")) g = g.map(show_acc, 'epoch', 'Acc', 'test_acc') g.add_legend() g.fig.suptitle('Train Accuracy vs Val Accuracy') plt.subplots_adjust(top=0.89) ###Output _____no_output_____
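###Markdown One thing to note before running experiments: the `experiment()` helper above instantiates `CNN('VGG19', 3)`, which is not defined anywhere in this notebook (it appears to be carried over from the previous assignment). To actually train the ResNet defined here on CIFAR-100, the model construction would presumably need to be replaced with something along the lines of the sketch below; the block/layer choices shown are the standard ResNet-18 and ResNet-50 layouts and are only illustrative.
###Code
# Illustrative sketch: constructing the ResNet defined above for 100 classes
def resnet18_cifar100():
    # BasicBlock with [2, 2, 2, 2] blocks per stage is the ResNet-18 layout
    return ResNet(BasicBlock, [2, 2, 2, 2], num_classes=100)

def resnet50_cifar100():
    # Bottleneck with [3, 4, 6, 3] blocks per stage is the ResNet-50 layout
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=100)

net = resnet18_cifar100()
print(sum(p.numel() for p in net.parameters()), 'parameters')
###Output _____no_output_____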
Learn/Python/Applied Data Science/Data Science in Marketing Customer Segmentation with Python part 2/Data Science in Marketing Customer Segmentation with Python part 2.ipynb
###Markdown [Mencari Jumlah Cluster yang Optimal](https://academy.dqlab.id/main/livecode/294/563/2815) ###Code from kmodes.kmodes import KModes from kmodes.kprototypes import KPrototypes import pandas as pd import seaborn as sns import matplotlib.pyplot as plt df_model = pd.read_csv('https://dqlab-dataset.s3-ap-southeast-1.amazonaws.com/df-customer-segmentation.csv') # Melakukan Iterasi untuk Mendapatkan nilai Cost cost = {} for k in range(2,10): kproto = KPrototypes(n_clusters = k,random_state=75) kproto.fit_predict(df_model, categorical=[0,1,2]) cost[k]= kproto.cost_ # Memvisualisasikan Elbow Plot sns.pointplot(x=list(cost.keys()), y=list(cost.values())) plt.show() ###Output _____no_output_____ ###Markdown [Membuat Model](https://academy.dqlab.id/main/livecode/294/563/2816) ###Code import pickle from kmodes.kmodes import KModes from kmodes.kprototypes import KPrototypes kproto = KPrototypes(n_clusters=5, random_state = 75) kproto = kproto.fit(df_model, categorical=[0,1,2]) #Save Model pickle.dump(kproto, open('cluster.pkl', 'wb')) ###Output _____no_output_____ ###Markdown [Menggunakan Model](https://academy.dqlab.id/main/livecode/294/563/2817) ###Code import pandas as pd df = pd.read_csv("https://dqlab-dataset.s3-ap-southeast-1.amazonaws.com/customer_segments.txt", sep="\t") # Menentukan segmen tiap pelanggan clusters = kproto.predict(df_model, categorical=[0,1,2]) print('segmen pelanggan: {}\n'.format(clusters)) # Menggabungkan data awal dan segmen pelanggan df_final = df.copy() df_final['cluster'] = clusters print(df_final.head()) ###Output segmen pelanggan: [1 2 4 4 0 3 1 4 3 3 4 4 1 1 0 3 3 4 0 2 0 4 3 0 0 4 0 3 4 4 2 1 2 0 3 0 3 1 3 2 3 0 3 0 3 0 4 1 3 1] Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi \ 0 CUST-001 Budi Anggara Pria 58 Wiraswasta 1 CUST-002 Shirley Ratuwati Wanita 14 Pelajar 2 CUST-003 Agus Cahyono Pria 48 Professional 3 CUST-004 Antonius Winarta Pria 53 Professional 4 CUST-005 Ibu Sri Wahyuni, IR Wanita 41 Wiraswasta Tipe Residen NilaiBelanjaSetahun cluster 0 Sector 9497927 1 1 Cluster 2722700 2 2 Cluster 5286429 4 3 Cluster 5204498 4 4 Cluster 10615206 0 ###Markdown [Menampilkan Cluster Tiap Pelanggan](https://academy.dqlab.id/main/livecode/294/563/2818) ###Code # Menampilkan data pelanggan berdasarkan cluster nya for i in range (0,5): print('\nPelanggan cluster: {}\n'.format(i)) print(df_final[df_final['cluster']== i]) ###Output Pelanggan cluster: 0 Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi \ 4 CUST-005 Ibu Sri Wahyuni, IR Wanita 41 Wiraswasta 14 CUST-015 Shirley Ratuwati Wanita 20 Wiraswasta 18 CUST-019 Mega Pranoto Wanita 32 Wiraswasta 20 CUST-021 Lestari Fabianto Wanita 38 Wiraswasta 23 CUST-024 Putri Ginting Wanita 39 Wiraswasta 24 CUST-025 Julia Setiawan Wanita 29 Wiraswasta 26 CUST-027 Grace Mulyati Wanita 35 Wiraswasta 33 CUST-034 Deasy Arisandi Wanita 21 Wiraswasta 35 CUST-036 Ni Made Suasti Wanita 30 Wiraswasta 41 CUST-042 Yuliana Wati Wanita 26 Wiraswasta 43 CUST-044 Anna Wanita 18 Wiraswasta 45 CUST-046 Elfira Surya Wanita 25 Wiraswasta Tipe Residen NilaiBelanjaSetahun cluster 4 Cluster 10615206 0 14 Cluster 10365668 0 18 Cluster 10884508 0 20 Cluster 9222070 0 23 Cluster 10259572 0 24 Sector 10721998 0 26 Cluster 9114159 0 33 Sector 9759822 0 35 Cluster 9678994 0 41 Cluster 9880607 0 43 Cluster 9339737 0 45 Sector 10099807 0 Pelanggan cluster: 1 Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi Tipe Residen \ 0 CUST-001 Budi Anggara Pria 58 Wiraswasta Sector 6 CUST-007 Cahyono, Agus Pria 64 Wiraswasta Sector 12 CUST-013 Cahaya 
Putri Wanita 64 Wiraswasta Cluster 13 CUST-014 Mario Setiawan Pria 60 Wiraswasta Cluster 31 CUST-032 Chintya Winarni Wanita 47 Wiraswasta Sector 37 CUST-038 Agatha Salim Wanita 46 Wiraswasta Sector 47 CUST-048 Maria Hutagalung Wanita 45 Wiraswasta Sector 49 CUST-050 Lianna Nugraha Wanita 55 Wiraswasta Sector NilaiBelanjaSetahun cluster 0 9497927 1 6 9837260 1 12 9333168 1 13 9471615 1 31 10663179 1 37 10477127 1 47 10390732 1 49 10569316 1 Pelanggan cluster: 2 Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi Tipe Residen \ 1 CUST-002 Shirley Ratuwati Wanita 14 Pelajar Cluster 19 CUST-020 Irene Novianto Wanita 16 Pelajar Sector 30 CUST-031 Eviana Handry Wanita 19 Mahasiswa Cluster 32 CUST-033 Cecilia Kusnadi Wanita 19 Mahasiswa Cluster 39 CUST-040 Irene Darmawan Wanita 14 Pelajar Sector NilaiBelanjaSetahun cluster 1 2722700 2 19 2896845 2 30 3042773 2 32 3047926 2 39 2861855 2 Pelanggan cluster: 3 Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi \ 5 CUST-006 Rosalina Kurnia Wanita 24 Professional 8 CUST-009 Elisabeth Suryadinata Wanita 29 Professional 9 CUST-010 Mario Setiawan Pria 33 Professional 15 CUST-016 Bambang Rudi Pria 35 Professional 16 CUST-017 Yuni Sari Wanita 32 Ibu Rumah Tangga 22 CUST-023 Denny Amiruddin Pria 34 Professional 27 CUST-028 Adeline Huang Wanita 40 Ibu Rumah Tangga 34 CUST-035 Ida Ayu Wanita 39 Professional 36 CUST-037 Felicia Tandiono Wanita 25 Professional 38 CUST-039 Gina Hidayat Wanita 20 Professional 40 CUST-041 Shinta Aritonang Wanita 24 Ibu Rumah Tangga 42 CUST-043 Yenna Sumadi Wanita 31 Professional 44 CUST-045 Rismawati Juni Wanita 22 Professional 48 CUST-049 Josephine Wahab Wanita 33 Ibu Rumah Tangga Tipe Residen NilaiBelanjaSetahun cluster 5 Cluster 5215541 3 8 Sector 5993218 3 9 Cluster 5257448 3 15 Cluster 5262521 3 16 Cluster 5677762 3 22 Cluster 5239290 3 27 Cluster 6631680 3 34 Sector 5962575 3 36 Sector 5972787 3 38 Cluster 5257775 3 40 Cluster 6820976 3 42 Cluster 5268410 3 44 Cluster 5211041 3 48 Sector 4992585 3 Pelanggan cluster: 4 Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi \ 2 CUST-003 Agus Cahyono Pria 48 Professional 3 CUST-004 Antonius Winarta Pria 53 Professional 7 CUST-008 Danang Santosa Pria 52 Professional 10 CUST-011 Maria Suryawan Wanita 50 Professional 11 CUST-012 Erliana Widjaja Wanita 49 Professional 17 CUST-018 Nelly Halim Wanita 63 Ibu Rumah Tangga 21 CUST-022 Novita Purba Wanita 52 Professional 25 CUST-026 Christine Winarto Wanita 55 Professional 28 CUST-029 Tia Hartanti Wanita 56 Professional 29 CUST-030 Rosita Saragih Wanita 46 Ibu Rumah Tangga 46 CUST-047 Mira Kurnia Wanita 55 Ibu Rumah Tangga Tipe Residen NilaiBelanjaSetahun cluster 2 Cluster 5286429 4 3 Cluster 5204498 4 7 Cluster 5223569 4 10 Sector 5987367 4 11 Sector 5941914 4 17 Cluster 5340690 4 21 Cluster 5298157 4 25 Cluster 5269392 4 28 Cluster 5271845 4 29 Sector 5020976 4 46 Cluster 6130724 4 ###Markdown [Visualisasi Hasil Clustering - Box Plot](https://academy.dqlab.id/main/livecode/294/563/2819) ###Code import matplotlib.pyplot as plt # Data Numerical kolom_numerik = ['Umur','NilaiBelanjaSetahun'] for i in kolom_numerik: plt.figure(figsize=(6,4)) ax = sns.boxplot(x = 'cluster',y = i, data = df_final) plt.title('\nBox Plot {}\n'.format(i), fontsize=12) plt.show() ###Output _____no_output_____ ###Markdown [Visualisasi Hasil Clustering - Count Plot](https://academy.dqlab.id/main/livecode/294/563/2820) ###Code import matplotlib.pyplot as plt # Data Kategorikal kolom_categorical = ['Jenis Kelamin','Profesi','Tipe Residen'] for i in 
kolom_categorical: plt.figure(figsize=(6,4)) ax = sns.countplot(data = df_final, x = 'cluster', hue = i ) plt.title('\nCount Plot {}\n'.format(i), fontsize=12) ax.legend(loc="upper center") for p in ax.patches: ax.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points') sns.despine(right=True,top = True, left = True) ax.axes.yaxis.set_visible(False) plt.show() ###Output _____no_output_____ ###Markdown [Menamakan Cluster](https://academy.dqlab.id/main/livecode/294/563/2821) ###Code # Mapping nama kolom df_final['segmen'] = df_final['cluster'].map({ 0: 'Diamond Young Member', 1: 'Diamond Senior Member', 2: 'Silver Member', 3: 'Gold Young Member', 4: 'Gold Senior Member' }) print(df_final.info()) print(df_final.head()) ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 50 entries, 0 to 49 Data columns (total 9 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Customer_ID 50 non-null object 1 Nama Pelanggan 50 non-null object 2 Jenis Kelamin 50 non-null object 3 Umur 50 non-null int64 4 Profesi 50 non-null object 5 Tipe Residen 50 non-null object 6 NilaiBelanjaSetahun 50 non-null int64 7 cluster 50 non-null uint16 8 segmen 50 non-null object dtypes: int64(2), object(6), uint16(1) memory usage: 3.3+ KB None Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi \ 0 CUST-001 Budi Anggara Pria 58 Wiraswasta 1 CUST-002 Shirley Ratuwati Wanita 14 Pelajar 2 CUST-003 Agus Cahyono Pria 48 Professional 3 CUST-004 Antonius Winarta Pria 53 Professional 4 CUST-005 Ibu Sri Wahyuni, IR Wanita 41 Wiraswasta Tipe Residen NilaiBelanjaSetahun cluster segmen 0 Sector 9497927 1 Diamond Senior Member 1 Cluster 2722700 2 Silver Member 2 Cluster 5286429 4 Gold Senior Member 3 Cluster 5204498 4 Gold Senior Member 4 Cluster 10615206 0 Diamond Young Member ###Markdown [Mempersiapkan Data Baru](https://academy.dqlab.id/main/livecode/294/564/2826) ###Code # Data Baru data = [{ 'Customer_ID': 'CUST-100' , 'Nama Pelanggan': 'Joko' , 'Jenis Kelamin': 'Pria', 'Umur': 45, 'Profesi': 'Wiraswasta', 'Tipe Residen': 'Cluster' , 'NilaiBelanjaSetahun': 8230000 }] # Membuat Data Frame new_df = pd.DataFrame(data) # Melihat Data print(new_df) ###Output Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi Tipe Residen \ 0 CUST-100 Joko Pria 45 Wiraswasta Cluster NilaiBelanjaSetahun 0 8230000 ###Markdown [Membuat Fungsi Data Pemrosesan](https://academy.dqlab.id/main/livecode/294/564/2827) ###Code def data_preprocess(data): # Konversi Kategorikal data kolom_kategorikal = ['Jenis Kelamin','Profesi','Tipe Residen'] df_encode = data[kolom_kategorikal].copy() ## Jenis Kelamin df_encode['Jenis Kelamin'] = df_encode['Jenis Kelamin'].map({ 'Pria': 0, 'Wanita' : 1 }) ## Profesi df_encode['Profesi'] = df_encode['Profesi'].map({ 'Ibu Rumah Tangga': 0, 'Mahasiswa' : 1, 'Pelajar': 2, 'Professional': 3, 'Wiraswasta': 4 }) ## Tipe Residen df_encode['Tipe Residen'] = df_encode['Tipe Residen'].map({ 'Cluster': 0, 'Sector' : 1 }) # Standardisasi Numerical Data kolom_numerik = ['Umur','NilaiBelanjaSetahun'] df_std = data[kolom_numerik].copy() ## Standardisasi Kolom Umur df_std['Umur'] = (df_std['Umur'] - 37.5)/14.7 ## Standardisasi Kolom Nilai Belanja Setahun df_std['NilaiBelanjaSetahun'] = (df_std['NilaiBelanjaSetahun'] - 7069874.8)/2590619.0 # Menggabungkan Kategorikal dan numerikal data df_model = df_encode.merge(df_std, left_index = True, right_index=True, how = 'left') return df_model # Menjalankan fungsi 
new_df_model = data_preprocess(new_df) print(new_df_model) ###Output Jenis Kelamin Profesi Tipe Residen Umur NilaiBelanjaSetahun 0 0 4 0 0.510204 0.447818 ###Markdown [Memanggil Model dan Melakukan Prediksi](https://academy.dqlab.id/main/livecode/294/564/2828) ###Code def modelling (data): # Memanggil Model kpoto = pickle.load(open('cluster.pkl', 'rb')) # Melakukan Prediksi clusters = kpoto.predict(data,categorical=[0,1,2]) return clusters # Menjalankan Fungsi clusters = modelling(new_df_model) print(clusters) ###Output [1] ###Markdown [Menamakan Segmen](https://academy.dqlab.id/main/livecode/294/564/2829) ###Code def menamakan_segmen (data_asli, clusters): # Menggabungkan cluster dan data asli final_df = data_asli.copy() final_df['cluster'] = clusters # Menamakan segmen final_df['segmen'] = final_df['cluster'].map({ 0: 'Diamond Young Member', 1: 'Diamond Senior Member', 2: 'Silver Students', 3: 'Gold Young Member', 4: 'Gold Senior Member' }) return final_df # Menjalankan Fungsi new_final_df = menamakan_segmen(new_df,clusters) print(new_final_df) ###Output Customer_ID Nama Pelanggan Jenis Kelamin Umur Profesi Tipe Residen \ 0 CUST-100 Joko Pria 45 Wiraswasta Cluster NilaiBelanjaSetahun cluster segmen 0 8230000 1 Diamond Senior Member
nbs/dl1/lesson3-planet-me.ipynb
###Markdown Multi-label prediction with Planet Amazon dataset ###Code %reload_ext autoreload %autoreload 2 %matplotlib inline !curl -s https://course.fast.ai/setup/colab | bash from fastai.vision import * ###Output _____no_output_____ ###Markdown Getting the data The planet dataset isn't available on the [fastai dataset page](https://course.fast.ai/datasets) due to copyright restrictions. You can download it from Kaggle however. Let's see how to do this by using the [Kaggle API](https://github.com/Kaggle/kaggle-api) as it's going to be pretty useful to you if you want to join a competition or use other Kaggle datasets later on.First, install the Kaggle API by uncommenting the following line and executing it, or by executing it in your terminal (depending on your platform you may need to modify this slightly to either add `source activate fastai` or similar, or prefix `pip` with a path. Have a look at how `conda install` is called for your platform in the appropriate *Returning to work* section of https://course.fast.ai/. (Depending on your environment, you may also need to append "--user" to the command.) ###Code ! pip install kaggle --upgrade ###Output _____no_output_____ ###Markdown Then you need to upload your credentials from Kaggle on your instance. Login to kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'.Upload this file to the directory this notebook is running in, by clicking "Upload" on your main Jupyter page, then uncomment and execute the next two commands (or run them in a terminal). ###Code from google.colab import drive drive.mount('/content/gdrive') ! mkdir -p ~/.kaggle/ ! cp '/content/gdrive/My Drive/colab/kaggle.json' ~/.kaggle/ ###Output _____no_output_____ ###Markdown You're all set to download the data from [planet competition](https://www.kaggle.com/c/planet-understanding-the-amazon-from-space). You **first need to go to its main page and accept its rules**, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a `403 forbidden` error it means you haven't accepted the competition rules yet (you have to go to the competition page, click on *Rules* tab, and then scroll to the bottom to find the *accept* button). ###Code path = Config.data_path()/'planet' path.mkdir(parents=True, exist_ok=True) path ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path} ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path} ! unzip -q -n {path}/train_v2.csv.zip -d {path} ###Output _____no_output_____ ###Markdown To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run `sudo apt install p7zip-full` in your terminal). ###Code ! conda install -y -c haasad eidl7zip ###Output /bin/bash: conda: command not found ###Markdown And now we can unpack the data (uncomment to run - this might take a few minutes to complete). ###Code ! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path} ###Output _____no_output_____ ###Markdown Multiclassification Contrary to the pets dataset studied in last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels (in 'train_v2.csv' here) we see that each 'image_name' is associated to several tags separated by spaces. 
###Code df = pd.read_csv(path/'train_v2.csv') df.head() ###Output _____no_output_____ ###Markdown To put this in a `DataBunch` while using the [data block API](https://docs.fast.ai/data_block.html), we then need to using `ImageItemList` (and not `ImageDataBunch`). This will make sure the model created has the proper loss function to deal with the multiple classes. ###Code tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) ###Output _____no_output_____ ###Markdown We use parentheses around the data block pipeline below, so that we can use a multiline statement without needing to add '\\'. ###Code np.random.seed(42) src = (ImageItemList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg') .random_split_by_pct(0.2) .label_from_df(label_delim=' ')) data = (src.transform(tfms, size=128) .databunch().normalize(imagenet_stats)) ###Output _____no_output_____ ###Markdown `show_batch` still works, and show us the different labels separated by `;`. ###Code data.show_batch(rows=3, figsize=(12,9)) ###Output _____no_output_____ ###Markdown To create a `Learner` we use the same function as in lesson 1. Our base architecture is resnet34 again, but the metrics are a little bit differeent: we use `accuracy_thresh` instead of `accuracy`. In lesson 1, we determined the predicition for a given class by picking the final activation that was the biggest, but here, each activation can be 0. or 1. `accuracy_thresh` selects the ones that are above a certain threshold (0.5 by default) and compares them to the ground truth.As for Fbeta, it's the metric that was used by Kaggle on this competition. See [here](https://en.wikipedia.org/wiki/F1_score) for more details. ###Code arch = models.resnet50 acc_02 = partial(accuracy_thresh, thresh=0.2) f_score = partial(fbeta, thresh=0.2) learn = create_cnn(data, arch, metrics=[acc_02, f_score]) ###Output Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.torch/models/resnet50-19c8e357.pth 102502400it [00:01, 55965679.90it/s] ###Markdown We use the LR Finder to pick a good learning rate. ###Code learn.lr_find() learn.recorder.plot() ###Output Min numerical gradient: 1.91E-02 ###Markdown Then we can fit the head of our network. ###Code lr = 0.02 learn.fit_one_cycle(5, slice(lr)) learn.save('stage-1-rn50') ###Output _____no_output_____ ###Markdown ...And fine-tune the whole model: ###Code learn.load('stage-1-rn50') learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(5, slice(1e-5, lr/5)) learn.save('stage-2-rn50') data = (src.transform(tfms, size=256) .databunch().normalize(imagenet_stats)) learn.data = data data.train_ds[0][0].shape learn.freeze() learn.lr_find() learn.recorder.plot() lr=1e-3 learn.fit_one_cycle(5, slice(lr)) learn.save('stage-1-256-rn50') learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(5, slice(1e-5, lr/5)) learn.recorder.plot_losses() learn.save('stage-2-256-rn50') ###Output _____no_output_____ ###Markdown You won't really know how you're going until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of `0.930`. ###Code learn.export() ###Output _____no_output_____ ###Markdown fin (This section will be covered in part 2 - please don't ask about it just yet! :) ) ###Code #! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path} #! 
7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path} #! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg-additional.tar.7z -p {path} #! 7za -bd -y -so x {path}/test-jpg-additional.tar.7z | tar xf - -C {path} test = ImageItemList.from_folder(path/'test-jpg').add(ImageItemList.from_folder(path/'test-jpg-additional')) len(test) learn = load_learner(path, test=test) preds, _ = learn.get_preds(ds_type=DatasetType.Test) thresh = 0.2 labelled_preds = [' '.join([learn.data.classes[i] for i,p in enumerate(pred) if p > thresh]) for pred in preds] labelled_preds[:5] fnames = [f.name[:-4] for f in learn.data.test_ds.items] df = pd.DataFrame({'image_name':fnames, 'tags':labelled_preds}, columns=['image_name', 'tags']) df.to_csv(path/'submission.csv', index=False) ! kaggle competitions submit planet-understanding-the-amazon-from-space -f {path/'submission.csv'} -m "My submission" ###Output Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /home/ubuntu/.kaggle/kaggle.json' 100%|██████████████████████████████████████| 2.18M/2.18M [00:02<00:00, 1.05MB/s] Successfully submitted to Planet: Understanding the Amazon from Space
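###Markdown One optional refinement not done above: rather than hard-coding `thresh = 0.2`, you can sweep candidate thresholds on validation predictions and keep the one that maximizes F2 (the metric Kaggle used). A self-contained sketch with random stand-in arrays; in practice `val_preds` and `val_targs` would come from `learn.get_preds(ds_type=DatasetType.Valid)`. ###Code
import numpy as np

def f2_score(preds, targs, thresh):
    """Micro-averaged F2 for multi-hot predictions at a given threshold."""
    p = (preds > thresh).astype(float)
    tp = (p * targs).sum()
    precision = tp / max(p.sum(), 1e-9)
    recall = tp / max(targs.sum(), 1e-9)
    beta2 = 4  # beta = 2, so recall is weighted more heavily than precision
    return (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-9)

# Random stand-ins for validation predictions/targets (100 images, 17 tags).
rng = np.random.default_rng(0)
val_preds = rng.random((100, 17))
val_targs = (rng.random((100, 17)) > 0.8).astype(float)

thresholds = np.arange(0.05, 0.95, 0.05)
best = max(thresholds, key=lambda t: f2_score(val_preds, val_targs, t))
print('best threshold:', round(float(best), 2))
###Output
_____no_output_____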
YOLO-Object-Detection.ipynb
###Markdown Make the necessary imports. ###Code import os import time import cv2 import numpy as np from model.yolo_model import YOLO ###Output Using TensorFlow backend. ###Markdown Let's define a few functions that we'll call later. ###Code def process_image(img): """Resize, reduce and expand image. # Argument: img: original image. # Returns image: ndarray(64, 64, 3), processed image. """ image = cv2.resize(img, (416, 416), interpolation=cv2.INTER_CUBIC) image = np.array(image, dtype='float32') image /= 255. image = np.expand_dims(image, axis=0) return image def get_classes(file): """Get classes name. # Argument: file: classes name for database. # Returns class_names: List, classes name. """ with open(file) as f: class_names = f.readlines() class_names = [c.strip() for c in class_names] return class_names def draw(image, boxes, scores, classes, all_classes): """Draw the boxes on the image. # Argument: image: original image. boxes: ndarray, boxes of objects. classes: ndarray, classes of objects. scores: ndarray, scores of objects. all_classes: all classes name. """ for box, score, cl in zip(boxes, scores, classes): x, y, w, h = box top = max(0, np.floor(x + 0.5).astype(int)) left = max(0, np.floor(y + 0.5).astype(int)) right = min(image.shape[1], np.floor(x + w + 0.5).astype(int)) bottom = min(image.shape[0], np.floor(y + h + 0.5).astype(int)) cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2) cv2.putText(image, '{0} {1:.2f}'.format(all_classes[cl], score), (top, left - 6), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 1, cv2.LINE_AA) print('class: {0}, score: {1:.2f}'.format(all_classes[cl], score)) print('box coordinate x,y,w,h: {0}'.format(box)) print() def detect_image(image, yolo, all_classes): """Use yolo v3 to detect images. # Argument: image: original image. yolo: YOLO, yolo model. all_classes: all classes name. # Returns: image: processed image. """ pimage = process_image(image) start = time.time() boxes, classes, scores = yolo.predict(pimage, image.shape) end = time.time() print('time: {0:.2f}s'.format(end - start)) if boxes is not None: draw(image, boxes, scores, classes, all_classes) return image def detect_video(video, yolo, all_classes): """Use yolo v3 to detect video. # Argument: video: video file. yolo: YOLO, yolo model. all_classes: all classes name. """ video_path = os.path.join("videos", "test", video) camera = cv2.VideoCapture(video_path) cv2.namedWindow("detection", cv2.WINDOW_AUTOSIZE) # Prepare for saving the detected video sz = (int(camera.get(cv2.CAP_PROP_FRAME_WIDTH)), int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT))) fourcc = cv2.VideoWriter_fourcc(*'mpeg') vout = cv2.VideoWriter() vout.open(os.path.join("videos", "res", video), fourcc, 20, sz, True) while True: res, frame = camera.read() if not res: break image = detect_image(frame, yolo, all_classes) cv2.imshow("detection", image) # Save the video frame by frame vout.write(image) if cv2.waitKey(110) & 0xff == 27: break vout.release() camera.release() ###Output _____no_output_____ ###Markdown Let's test the model on an image and save the resultant image. 
###Code yolo = YOLO(0.5, 0.5) file = 'data/coco_classes.txt' all_classes = get_classes(file) f = 'a.jpg' # path = 'C:\Users\ub226\Desktop\GitHub_Projects\YoloV3\images\test'+f image = cv2.imread('images/test/a.jpg') cv2.imshow('img', image) image = detect_image(image, yolo, all_classes) cv2.imwrite('images/res/' + f, image) ###Output time: 8.84s class: person, score: 0.60 box coordinate x,y,w,h: [2523.36752415 1482.90807486 621.1425662 1302.12979794] class: bicycle, score: 0.84 box coordinate x,y,w,h: [2877.70164013 2007.04590225 1301.56436563 721.28457355] class: bicycle, score: 0.51 box coordinate x,y,w,h: [ 816.12226367 1952.52190018 1265.25861025 818.21106863] ###Markdown Using the camera to use the yolo model though keep in mind that this particular version of yolo takes about 7s to process a frame and thus the video will be really laggy. ###Code video_capture = cv2.VideoCapture(0) while True: _, frame = video_capture.read() # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) canvas = detect_image(frame, yolo, all_classes) cv2.imshow('Video', canvas) if cv2.waitKey(1) & 0xFF == ord('q'): break video_capture.release() cv2.destroyAllWindows() ###Output time: 5.99s time: 6.47s class: person, score: 0.99 box coordinate x,y,w,h: [158.51669312 115.13115406 400.38696289 316.82581902] class: backpack, score: 0.26 box coordinate x,y,w,h: [192.01856613 339.57638741 282.45540619 74.09663916] time: 5.72s class: person, score: 0.98 box coordinate x,y,w,h: [156.29650116 111.5225172 405.8543396 314.62712288] class: backpack, score: 0.71 box coordinate x,y,w,h: [205.82166672 337.93584824 257.85541534 75.89873314] time: 5.55s class: person, score: 0.99 box coordinate x,y,w,h: [163.75883102 108.71416569 406.59908295 314.06367302] class: backpack, score: 0.86 box coordinate x,y,w,h: [239.36714172 333.13925743 250.74586868 82.40627289] time: 5.72s class: person, score: 1.00 box coordinate x,y,w,h: [162.24330902 110.2440834 401.79824829 313.42540741] time: 6.15s class: person, score: 1.00 box coordinate x,y,w,h: [171.54081345 145.37488461 319.58408356 273.00164223] class: backpack, score: 0.32 box coordinate x,y,w,h: [202.24494934 345.55131912 264.90531921 70.98035574] time: 6.44s class: person, score: 1.00 box coordinate x,y,w,h: [193.91468048 139.3510294 322.52510071 287.0506382 ] class: backpack, score: 0.20 box coordinate x,y,w,h: [207.29797363 345.68063736 254.27907944 68.18960667]
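###Markdown Because one detection takes several seconds here, a common workaround (not used above) is to run YOLO only on every Nth frame and keep showing the most recent annotated result in between, so the preview stays responsive. A sketch of that pattern, reusing the `detect_image`, `yolo` and `all_classes` objects defined earlier; `DETECT_EVERY` is an arbitrary choice. ###Code
import cv2

DETECT_EVERY = 30          # run detection on every 30th frame only
frame_idx = 0
last_annotated = None

video_capture = cv2.VideoCapture(0)
while True:
    ok, frame = video_capture.read()
    if not ok:
        break
    if frame_idx % DETECT_EVERY == 0:
        # The expensive call: reuses detect_image / yolo / all_classes from above.
        last_annotated = detect_image(frame.copy(), yolo, all_classes)
    # Show the latest annotated frame (or the raw frame until the first detection).
    cv2.imshow('Video', last_annotated if last_annotated is not None else frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
###Output
_____no_output_____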
Notebooks/brri-dataset/experimentations/zir/zero_inflated_regression(LGBMC_&_LGBMR).ipynb
###Markdown New Section **LGBMR Documentation link:** https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html**LGBMC Documentation link:** https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html**SK-Lego Documentation link:** https://scikit-lego.netlify.app/meta.html ###Code !pip install sklego import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklego.meta import ZeroInflatedRegressor #!pip install sklego from lightgbm import LGBMClassifier from lightgbm import LGBMRegressor # global random seed RAND_SEED = 42 ziReg = ZeroInflatedRegressor(classifier=LGBMClassifier(random_state=RAND_SEED), regressor=LGBMRegressor(random_state=RAND_SEED)) # initial model with only random seed and not any hyper-parametes initial_model = ziReg # hyper-parameters n_estimators = [x*5 for x in range(20, 41)] learning_rate = [0.01] param_grid = { 'classifier__learning_rate' : learning_rate, 'classifier__n_estimators': n_estimators, 'regressor__learning_rate': learning_rate, 'regressor__n_estimators': n_estimators } # variables needed for showEvalGraph_regression() function # MODEL_CLASS = ziReg # x_axis_param_name = 'regressor__C' # x_axis_vals = regressor__C ###Output _____no_output_____ ###Markdown 1. Experimentation on the Weather Daily dataset ###Code # Load the train dataset weather_daily_train_df = pd.read_csv('https://raw.githubusercontent.com/ferdouszislam/Weather-WaterLevel-Prediction-ML/main/Datasets/brri-datasets/final-dataset/train/brri-weather_train_regression.csv') # Load the test set weather_daily_test_df = pd.read_csv('https://raw.githubusercontent.com/ferdouszislam/Weather-WaterLevel-Prediction-ML/main/Datasets/brri-datasets/final-dataset/test/brri-weather_test_regression.csv') # train model model, selected_hyperparams, train_r2, train_mae, train_rmse = train_regression(initial_model, param_grid, weather_daily_train_df, cls='Rainfall (mm)') print(f'Selected hyperparameters: {selected_hyperparams}') # performance on the train set print(f'Train set performance: r2-score={train_r2}, mae={train_mae}, rmse={train_rmse}') # # r2-scores graph on the train set # # hyper-parameters selected by GridSearchCV # selected_model_params = selected_hyperparams # #selected_model_params['random_state'] = RAND_SEED # showEvalutationGraph_regression(MODEL_CLASS, weather_daily_train_df, cls='Rainfall (mm)', # x_axis_param_name=x_axis_param_name, x_axis_param_vals=x_axis_vals, # selected_model_params=selected_model_params) # test model test_r2, test_mae, test_rmse = eval_regression(model, weather_daily_test_df, cls='Rainfall (mm)') # performance on the test set print(f'Test set performance: r2-score={test_r2}, mae={test_mae}, rmse={test_rmse}') ###Output Test set performance: r2-score=0.1629, mae=5.8279, rmse=15.8206 ###Markdown 1.1 Apply Pearson Feature Selection to Daily Weather Dataset ###Code # select features from the train dataset weather_daily_fs1_train_df, cols_to_drop = pearson_correlation_fs(weather_daily_train_df, 'Rainfall (mm)') # keep only selected features on the test dataset weather_daily_fs1_test_df = weather_daily_test_df.drop(columns=cols_to_drop) # train model model, selected_hyperparams, train_r2, train_mae, train_rmse = train_regression(initial_model, param_grid, weather_daily_fs1_train_df, cls='Rainfall (mm)') print(f'Selected hyperparameters: {selected_hyperparams}') # performance on the train set print(f'Train set performance: r2-score={train_r2}, mae={train_mae}, 
rmse={train_rmse}') # # r2-scores graph on the train set # # hyper-parameters selected by GridSearchCV # selected_model_params = selected_hyperparams # #selected_model_params['random_state'] = RAND_SEED # showEvalutationGraph_regression(MODEL_CLASS, weather_daily_fs1_train_df, cls='Rainfall (mm)', # x_axis_param_name=x_axis_param_name, x_axis_param_vals=x_axis_vals, # selected_model_params=selected_model_params) # test model test_r2, test_mae, test_rmse = eval_regression(model, weather_daily_fs1_test_df, cls='Rainfall (mm)') # performance on the test set print(f'Test set performance: r2-score={test_r2}, mae={test_mae}, rmse={test_rmse}') ###Output Test set performance: r2-score=0.166, mae=5.7953, rmse=15.7905 ###Markdown 1.2 Apply SelectKBest Feature Selection to Daily Weather Dataset ###Code # select features from the train dataset weather_daily_fs2_train_df, cols_to_drop = seleckKBest_fs(weather_daily_train_df, 'Rainfall (mm)', is_regression=True) print('features dropped:', cols_to_drop) # keep only selected features on the test dataset weather_daily_fs2_test_df = weather_daily_test_df.drop(columns=cols_to_drop) # train model model, selected_hyperparams, train_r2, train_mae, train_rmse = train_regression(initial_model, param_grid, weather_daily_fs2_train_df, cls='Rainfall (mm)') print(f'Selected hyperparameters: {selected_hyperparams}') # performance on the train set print(f'Train set performance: r2-score={train_r2}, mae={train_mae}, rmse={train_rmse}') # # r2-scores graph on the train set # # hyper-parameters selected by GridSearchCV # selected_model_params = selected_hyperparams # #selected_model_params['random_state'] = RAND_SEED # showEvalutationGraph_regression(MODEL_CLASS, weather_daily_fs2_train_df, cls='Rainfall (mm)', # x_axis_param_name=x_axis_param_name, x_axis_param_vals=x_axis_vals, # selected_model_params=selected_model_params) # test model test_r2, test_mae, test_rmse = eval_regression(model, weather_daily_fs2_test_df, cls='Rainfall (mm)') # performance on the test set print(f'Test set performance: r2-score={test_r2}, mae={test_mae}, rmse={test_rmse}') ###Output Test set performance: r2-score=0.1809, mae=5.897, rmse=15.6493 ###Markdown 1.3 Apply SelectSequential Feature Selection to Daily Weather Dataset ###Code # select features from the train dataset weather_daily_fs3_train_df, cols_to_drop = selectSequential_fs(weather_daily_train_df, 'Rainfall (mm)', is_regression=True) print('features dropped:', cols_to_drop) # keep only selected features on the test dataset weather_daily_fs3_test_df = weather_daily_test_df.drop(columns=cols_to_drop) # train model model, selected_hyperparams, train_r2, train_mae, train_rmse = train_regression(initial_model, param_grid, weather_daily_fs3_train_df, cls='Rainfall (mm)') print(f'Selected hyperparameters: {selected_hyperparams}') # performance on the train set print(f'Train set performance: r2-score={train_r2}, mae={train_mae}, rmse={train_rmse}') # # r2-scores graph on the train set # # hyper-parameters selected by GridSearchCV # selected_model_params = selected_hyperparams # #selected_model_params['random_state'] = RAND_SEED # showEvalutationGraph_regression(MODEL_CLASS, weather_daily_fs3_train_df, cls='Rainfall (mm)', # x_axis_param_name=x_axis_param_name, x_axis_param_vals=x_axis_vals, # selected_model_params=selected_model_params) # # test model test_r2, test_mae, test_rmse = eval_regression(model, weather_daily_fs3_test_df, cls='Rainfall (mm)') # performance on the test set print(f'Test set performance: r2-score={test_r2}, 
mae={test_mae}, rmse={test_rmse}') ###Output Test set performance: r2-score=0.1264, mae=6.1399, rmse=16.1617
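###Markdown For reference, a minimal self-contained sketch (synthetic data, not the BRRI rainfall set) of what `ZeroInflatedRegressor` does: the classifier decides whether the target is zero, and the regressor predicts the amount only where it is not. ###Code
import numpy as np
from lightgbm import LGBMClassifier, LGBMRegressor
from sklego.meta import ZeroInflatedRegressor

# Synthetic zero-inflated target: roughly 70% exact zeros, the rest feature-dependent.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
rain_occurs = rng.random(500) < 0.3
y = np.where(rain_occurs, np.abs(5 * X[:, 0] + rng.normal(size=500)), 0.0)

zir = ZeroInflatedRegressor(
    classifier=LGBMClassifier(random_state=42),  # zero vs. non-zero
    regressor=LGBMRegressor(random_state=42),    # amount, fitted on non-zero rows only
)
zir.fit(X, y)
print('share of exact zeros predicted:', float((zir.predict(X) == 0).mean()))
###Output
_____no_output_____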
Tutorial_S2_1.ipynb
###Markdown Data Analysis using Jupyter Notebooks Part 2Benjamin J. Morgan Tutorial 1In the CH10009 &ldquo;Data Analysis using Jupyter Notebooks&rdquo; you were introduced to simple programmatic data analysis and plotting using the Python programming language and Jupyter Notebooks. This practical builds on that previous activity. By working through these notebooks, you will 1. Revisit the key ideas from the Part 1 practical, giving you more experience and practice writing your own code.2. Learn how to write your own functions, which can then be used to organise your code, and to perform more sophistocated data analyses. AssessmentThe assessment of this practical has two components:1. After completing **Exercise 3**, save your final notebook (using **File > Save and Checkpoint** in the Jupyter menu), and upload this .ipynb file to Moodle. Please make sure that you have run your notebook from a fresh start, and that it works as expected, before saving and submitting (**Kernel > Restart and Run All** from the Jupyter menu).2. A Moodle quiz. Review of Part 1The CH10009 Part 1 practical covered the following concepts:- Introduction to Jupyter notebooks - Opening Jupyter notebooks - Running code- Code versus Markdown cells- Importing modules- Mathematical functions- Variables- Numbers, Strings, and Lists- `numpy` and arrays- Plotting data with `matplotlib` - line styles - formatting points - labelling axes and adding titles - saving to a PDF- Introduction to data analysis and statistics, using `numpy`: - Some useful `numpy` functions: `min()`, `max()`, `sum()`, `mean()`, `std()` - linear regression, using `scipy.stats.linregress` In this practical you will quickly review these concepts, before building on these to develop your programming and data analysis skills. This review of the Part 1 material will be fairly compact, so if there are any parts you are unsure about, remember that you can always review your Part 1 Tutorial notebooks. These will either still be on your `H:` drive, or you can download them from the [CH10009 Moodle page](https://moodle.bath.ac.uk/course/view.php?id=538). You can also insert new code cells to run any bits of code you like, to check that you understand how the examples work. Code cells can be inserted into any notebook using **Insert > Insert Cell Above** or **Insert > Insert Cell Below** from the Jupyter menu. Using Jupyter notebooksA Jupyter notebook consists of a series of **cells** that contain text. These cells are arranged vertically, top-to-bottom in the document. Any cell can be edited by clicking on it. A cell in **edit mode** is indicated by a green border. A cell with a blue border is in **command mode**. In command mode you are not able to type into a cell, but you can still edit the notebook (reordering cells, executing code, etc.) Commands for editing notebooks can be accessed from the manu at the top of the screen, and commonly used commands have keyboard shortcuts, which will be highlighted in examples using green text. The full list of keyboard shortcuts can be found through **Help > Keyboard Shortcuts** in the menu.To edit a cell in command mode, press enter or double click on the cell. Running Code The default cell type in a Jupyter notebook is a **code** cell. If you open a new notebook it will have one, empty, code cell. And you can always create more cells by clicking in the menu on **Insert > Insert Cell Above** (a) or **Insert > Insert Cell Below** (b). 
Any code typed into a code cell can be run (or "**executed**") by pressing `Shift-Enter` or pressing the Run button in the notebook toolbar. This practical consists of an interactive tutorial (this notebook), followed by a series of exercises. Some code cells in the tutorial will already have code in them, which you can **run** by selecting them and pressing `Shift-Enter` or clicking the Run button in the toolbar: ###Code # run this cell 2 + 3 ###Output _____no_output_____
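###Markdown As a quick refresher on the Part 1 material listed above, here is a small example with made-up data showing the `numpy` summary functions and the `scipy.stats.linregress` workflow: ###Code
import numpy as np
from scipy.stats import linregress

# Made-up x/y data purely for the refresher.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 2.1, 3.9, 6.2, 7.9, 10.1])

print('mean of y:', y.mean(), ' standard deviation of y:', y.std())

fit = linregress(x, y)
print('slope = {:.3f}, intercept = {:.3f}, r^2 = {:.4f}'.format(fit.slope, fit.intercept, fit.rvalue**2))
###Output
_____no_output_____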
notebooks/A2-pcts-validate.ipynb
###Markdown PCTS Validate Counts* Parse PCTS case number, and use prefix and suffix to validate counts ###Code import numpy as np import pandas as pd import geopandas as gpd import intake import pcts_parser catalog = intake.open_catalog("../catalogs/*.yml") bucket_name = 'city-planning-entitlements' ###Output _____no_output_____ ###Markdown Sort out parent-child relationship and generate a case's entire history before dropping duplicates* Final decision: use PARNT_CASE_ID, and fill it in whenever it's missing, because those are the parent cases themselves* Parse case string for some big groups: PAR, ENV, APPEAL, ADM, ENTITLEMENT* bys parent_case: egen max for PAR, ENV, APPEAL, ADM* keep parent because it's ENTITLEMENT, but stores stuff from child cases ###Code df = catalog.pcts2.read() # Find the max of the different application types aggregated = (df.pivot_table(index=['PARENT_CASE'], values = ['env', 'pre_application_review', 'admin', 'appeal'], aggfunc = 'max') .reset_index() ) df = pd.merge(df.drop(columns = ['admin', 'appeal', 'env', 'pre_application_review']), aggregated, on = 'PARENT_CASE', how = 'left', validate = 'm:1') # Drop duplicates, so we keep the history of child cases keep = ['CASE_ID', 'CASE_YR_NBR', 'PARENT_CASE', 'admin', 'appeal', 'env', 'pre_application_review'] df = df[df.CASE_ID == df.PARENT_CASE][keep].drop_duplicates() ###Output _____no_output_____ ###Markdown Get counts for each category* 2010-2019* 2015-2019* 2017, 2018, 2019 individual years ###Code def count_cases(row): entitlement = 0 env = 0 admin = 0 appeal = 0 par = 0 cond1 = (row.admin == 0) cond2 = (row.env == 0) cond3 = (row.pre_application_review == 0) if cond1 and cond2 and cond3: entitlement = 1 if row.env == 1: env = 1 if row.admin == 1: admin = 1 if row.appeal == 1: appeal = 1 if row.pre_application_review == 1: par = 1 return pd.Series([entitlement, env, admin, appeal, par], index=['is_entitlement', 'is_env', 'is_admin', 'is_appeal', 'is_par']) counts = df.apply(count_cases, axis = 1) df2 = pd.concat([df, counts], axis = 1) df2.head() df2010 = df2[(df2.CASE_YR_NBR >= 2010) & (df2.CASE_YR_NBR <= 2019)] df2015 = df2[(df2.CASE_YR_NBR >= 2015) & (df2.CASE_YR_NBR <= 2019)] df2017 = df2[df2.CASE_YR_NBR == 2017] df2018 = df2[df2.CASE_YR_NBR == 2018] df2019 = df2[df2.CASE_YR_NBR == 2019] dataframes = {'2010': df2010, '2015': df2015, '2017': df2017, '2018': df2018, '2019': df2019} for key, value in dataframes.items(): display(key) display(value.agg({'is_entitlement':'sum', 'is_env':'sum', 'is_admin':'sum', 'is_appeal':'sum', 'is_par':'sum'}).reset_index()) # Ugh, 2017-2019 individual years don't exactly line up still ###Output _____no_output_____
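###Markdown An optional alternative to the row-wise `apply(count_cases, axis=1)`: the same flags can be built with vectorised boolean arithmetic, which scales better on large PCTS extracts. A small sketch with hypothetical values standing in for the real dataframe: ###Code
import pandas as pd

# Hypothetical stand-in for the case-level flags used above.
demo = pd.DataFrame({
    'admin': [0, 1, 0, 0],
    'appeal': [0, 0, 1, 0],
    'env': [0, 0, 1, 1],
    'pre_application_review': [0, 0, 0, 1],
})

# Same logic as count_cases, but computed column-wise instead of row by row.
counts = pd.DataFrame({
    'is_entitlement': ((demo.admin == 0) & (demo.env == 0) & (demo.pre_application_review == 0)).astype(int),
    'is_env': demo.env,
    'is_admin': demo.admin,
    'is_appeal': demo.appeal,
    'is_par': demo.pre_application_review,
})
print(counts.sum())
###Output
_____no_output_____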
2- Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization/week 3/Tensorflow Tutorial.ipynb
###Markdown TensorFlow TutorialWelcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables- Start your own session- Train algorithms - Implement a Neural NetworkPrograming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. 1 - Exploring the Tensorflow LibraryTo start, you will import the library: ###Code import math import numpy as np import h5py import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.python.framework import ops from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict %matplotlib inline np.random.seed(1) ###Output _____no_output_____ ###Markdown Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ ###Code y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36. y = tf.constant(39, name='y') # Define y. Set to 39 loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss init = tf.global_variables_initializer() # When init is run later (session.run(init)), # the loss variable will be initialized and ready to be computed with tf.Session() as session: # Create a session and print the output session.run(init) # Initializes the variables print(session.run(loss)) # Prints the loss ###Output 9 ###Markdown Writing and running programs in TensorFlow has the following steps:1. Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors.3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.Now let us look at an easy example. Run the cell below: ###Code a = tf.constant(2) b = tf.constant(10) c = tf.multiply(a,b) print(c) ###Output Tensor("Mul:0", shape=(), dtype=int32) ###Markdown As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. ###Code sess = tf.Session() print(sess.run(c)) ###Output 20 ###Markdown Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. 
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. ###Code # Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close() ###Output 6 ###Markdown When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear functionLets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly ###Code # GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- runs the session for Y = WX + b """ np.random.seed(1) ### START CODE HERE ### (4 lines of code) X = np.random.randn(3, 1) W = np.random.randn(4, 3) b = np.random.randn(4, 1) Y = tf.add(tf.matmul(W, X), b) ### END CODE HERE ### # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate ### START CODE HERE ### sess = tf.Session() result = sess.run(Y) ### END CODE HERE ### # close the session sess.close() return result print( "result = " + str(linear_function())) ###Output result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] ###Markdown *** Expected Output ***: **result**[[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise lets compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. ** Exercise **: Implement the sigmoid function below. 
You should use the following: - `tf.placeholder(tf.float32, name = "...")`- `tf.sigmoid(...)`- `sess.run(..., feed_dict = {x: z})`Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:**```pythonsess = tf.Session() Run the variables initialization (if needed), run the operationsresult = sess.run(..., feed_dict = {...})sess.close() Close the session```**Method 2:**```pythonwith tf.Session() as sess: run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) This takes care of closing the session for you :)``` ###Code # GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: results -- the sigmoid of z """ ### START CODE HERE ### ( approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.placeholder(tf.float32, name="x") # compute sigmoid(x) sigmoid = tf.sigmoid(x) # Create a session, and run it. Please use the method 2 explained above. # You should use a feed_dict to pass z's value to x. # Run session and call the output "result" sess = tf.Session() result = tf.Session().run(sigmoid, feed_dict = {x:z}) sess.close() ### END CODE HERE ### return result print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(12) = " + str(sigmoid(12))) ###Output sigmoid(0) = 0.5 sigmoid(12) = 0.999994 ###Markdown *** Expected Output ***: **sigmoid(0)**0.5 **sigmoid(12)**0.999994 **To summarize, you how know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the CostYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)})\large )\small\tag{2}$$ ###Code # GRADED FUNCTION: cost def cost(logits, labels): """     Computes the cost using the sigmoid cross entropy          Arguments:     logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)     labels -- vector of labels y (1 or 0) Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" in the TensorFlow documentation. So logits will feed into z, and labels into y.          Returns:     cost -- runs the session of the cost (formula (2)) """ ### START CODE HERE ### # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines) z = tf.placeholder(tf.float32, name="z") y = tf.placeholder(tf.float32, name="y") # Use the loss function (approx. 1 line) cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y) # Create a session (approx. 1 line). See method 1 above. sess = tf.Session() # Run the session (approx. 1 line). 
cost = sess.run(cost, feed_dict={z: logits, y: labels}) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return cost logits = sigmoid(np.array([0.2,0.4,0.7,0.9])) cost = cost(logits, np.array([0,0,1,1])) print ("cost = " + str(cost)) ###Output cost = [ 1.00538719 1.03664088 0.41385433 0.39956614] ###Markdown ** Expected Output** : **cost** [ 1.00538719 1.03664088 0.41385433 0.39956614] 1.4 - Using One Hot encodingsMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. ###Code # GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. Arguments: labels -- vector containing the labels C -- number of classes, the depth of the one hot dimension Returns: one_hot -- one hot matrix """ ### START CODE HERE ### # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line) C = tf.constant(C, name='C') # Use tf.one_hot, be careful with the axis (approx. 1 line) one_hot_matrix = tf.one_hot(indices=labels, depth=C, axis=0) # Create the session (approx. 1 line) sess = tf.Session() # Run the session (approx. 1 line) one_hot = sess.run(one_hot_matrix) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return one_hot labels = np.array([1,2,3,0,2,1]) one_hot = one_hot_matrix(labels, C = 4) print ("one_hot = " + str(one_hot)) ###Output one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] ###Markdown **Expected Output**: **one_hot** [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] 1.5 - Initialize with zeros and onesNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). - tf.ones(shape) ###Code # GRADED FUNCTION: ones# GRADED def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(...). (approx. 1 line) ones = tf.ones(shape) # Create the session (approx. 1 line) sess = tf.Session() # Run the session to compute 'ones' (approx. 1 line) ones = sess.run(ones) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return ones print ("ones = " + str(ones([3]))) ###Output ones = [ 1. 1. 1.] ###Markdown **Expected Output:** **ones** [ 1. 1. 1.] 
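###Markdown As noted above, initializing with zeros works the same way via `tf.zeros()`. A short ungraded variant of `ones()` using the same session pattern, just for practice and not part of the graded exercises: ###Code
def zeros(shape):
    """Creates and evaluates an array of zeros of dimension shape."""
    z = tf.zeros(shape)      # build the constant-zeros tensor
    sess = tf.Session()      # method 1: explicit session
    result = sess.run(z)     # evaluate the tensor
    sess.close()
    return result

print("zeros = " + str(zeros([3])))
###Output
_____no_output_____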
2 - Building your first neural network in tensorflowIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the computation graph- Run the graphLet's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS DatasetOne afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.Here are examples for each number, and how an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolutoion to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset. ###Code # Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ###Output _____no_output_____ ###Markdown Change the index below and run the cell to visualize some examples in the dataset. ###Code # Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ###Output y = 5 ###Markdown As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. ###Code # Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) ###Output number of training examples = 1080 number of test examples = 120 X_train shape: (12288, 1080) Y_train shape: (6, 1080) X_test shape: (12288, 120) Y_test shape: (6, 120) ###Markdown **Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 
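###Markdown Before moving on, an optional (ungraded) sanity check using the arrays created above: taking the argmax over the class axis of the one-hot matrix should recover the original labels. ###Code
# Y_train has one row per class (6) and one column per training example.
assert Y_train.shape == (6, X_train.shape[1])
recovered = np.argmax(Y_train, axis=0)
print("one-hot labels round-trip:", np.array_equal(recovered, np.squeeze(Y_train_orig)))
###Output
_____no_output_____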
2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow. ###Code # GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "float" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float" Tips: - You will use None because it let's us be flexible on the number of examples you will for the placeholders. In fact, the number of examples during test/train is different. """ ### START CODE HERE ### (approx. 2 lines) X = tf.placeholder(tf.float32, [n_x, None], name="X") Y = tf.placeholder(tf.float32, [n_y, None], name="Y") ### END CODE HERE ### return X, Y X, Y = create_placeholders(12288, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ###Output X = Tensor("X_6:0", shape=(12288, ?), dtype=float32) Y = Tensor("Y:0", shape=(6, ?), dtype=float32) ###Markdown **Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parametersYour second task is to initialize the parameters in tensorflow.**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```pythonW1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())```Please use `seed = 1` to make sure your results match ours. ###Code # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 
6 lines of code) W1 = tf.get_variable("W1", [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed=1)) b1 = tf.get_variable("b1", [25, 1], initializer = tf.zeros_initializer()) W2 = tf.get_variable("W2", [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed=1)) b2 = tf.get_variable("b2", [12, 1], initializer = tf.zeros_initializer()) W3 = tf.get_variable("W3", [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed=1)) b3 = tf.get_variable("b3", [6, 1], initializer = tf.zeros_initializer()) ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} return parameters tf.reset_default_graph() with tf.Session() as sess: parameters = initialize_parameters() print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ###Output W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref> b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref> W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref> b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref> ###Markdown **Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`! ###Code # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] ### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents: Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3,Z2) + b3 ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) print("Z3 = " + str(Z3)) ###Output Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32) ###Markdown **Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to brackpropagation. 
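###Markdown If you want to convince yourself of the shapes flowing through the network, here is an ungraded numpy cross-check that mirrors the commented numpy equivalents above, using random arrays with the same dimensions and an arbitrary small batch size: ###Code
m = 7                                        # any small batch size
X_check = np.random.randn(12288, m)
W1_, b1_ = np.random.randn(25, 12288), np.zeros((25, 1))
W2_, b2_ = np.random.randn(12, 25), np.zeros((12, 1))
W3_, b3_ = np.random.randn(6, 12), np.zeros((6, 1))

A1_ = np.maximum(0, W1_ @ X_check + b1_)     # relu(Z1)
A2_ = np.maximum(0, W2_ @ A1_ + b2_)         # relu(Z2)
Z3_ = W3_ @ A2_ + b3_
print(Z3_.shape)                             # expected: (6, 7), one row per class
###Output
_____no_output_____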
2.4 Compute costAs seen before, it is very easy to compute the cost using:```pythontf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))```**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.- Besides, `tf.reduce_mean` basically does the summation over the examples. ###Code # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) print("cost = " + str(cost)) ###Output cost = Tensor("Mean:0", shape=(), dtype=float32) ###Markdown **Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updatesThis is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model.After you compute the cost function. You will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.For instance, for gradient descent the optimizer would be:```pythonoptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)```To make the optimization you would do:```python_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})```This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the modelNow, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented. ###Code def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. 
Arguments: X_train -- training set, of shape (input size = 12288, number of training examples = 1080) Y_train -- test set, of shape (output size = 6, number of training examples = 1080) X_test -- training set, of shape (input size = 12288, number of training examples = 120) Y_test -- test set, of shape (output size = 6, number of test examples = 120) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost # Create Placeholders of shape (n_x, n_y) ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_x, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y). 
### START CODE HERE ### (1 line) _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### epoch_cost += minibatch_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 100 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # lets save the parameters in a variable parameters = sess.run(parameters) print("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters ###Output _____no_output_____ ###Markdown Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! ###Code parameters = model(X_train, Y_train, X_test, Y_test) ###Output Cost after epoch 0: 1.855702 Cost after epoch 100: 1.016458 Cost after epoch 200: 0.733102 Cost after epoch 300: 0.572940 Cost after epoch 400: 0.468774 Cost after epoch 500: 0.381021 Cost after epoch 600: 0.313822 Cost after epoch 700: 0.254158 Cost after epoch 800: 0.203829 Cost after epoch 900: 0.166421 Cost after epoch 1000: 0.141486 Cost after epoch 1100: 0.107580 Cost after epoch 1200: 0.086270 Cost after epoch 1300: 0.059371 Cost after epoch 1400: 0.052228 ###Markdown **Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! ###Code import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. 
fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T my_image_prediction = predict(my_image, parameters) plt.imshow(image) print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) ###Output Your algorithm predicts: y = 3
models/research/object_detection/object_detection_tutorial.ipynb
###Markdown Object Detection API Demo Run in Google Colab View source on GitHub Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. > **Important**: This tutorial is to help you through the first step towards using [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off the shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb). Setup Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab. Install ###Code !pip install -U --pre tensorflow=="2.*" ###Output _____no_output_____ ###Markdown Make sure you have `pycocotools` installed ###Code !pip install pycocotools ###Output _____no_output_____ ###Markdown Get `tensorflow/models` or `cd` to parent directory of the repository. ###Code import os import pathlib if "models" in pathlib.Path.cwd().parts: while "models" in pathlib.Path.cwd().parts: os.chdir('..') elif not pathlib.Path('models').exists(): !git clone --depth 1 https://github.com/tensorflow/models ###Output _____no_output_____ ###Markdown Compile protobufs and install the object_detection package ###Code %%bash cd models/research/ protoc object_detection/protos/*.proto --python_out=. %%bash cd models/research pip install . ###Output _____no_output_____ ###Markdown Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image from IPython.display import display ###Output _____no_output_____ ###Markdown Import the object detection module. ###Code from object_detection.utils import ops as utils_ops from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as vis_util ###Output _____no_output_____ ###Markdown Patches: ###Code # patch tf1 into `utils.ops` utils_ops.tf = tf.compat.v1 # Patch the location of gfile tf.gfile = tf.io.gfile ###Output _____no_output_____ ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path.By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
Loader ###Code def load_model(model_name): base_url = 'http://download.tensorflow.org/models/object_detection/' model_file = model_name + '.tar.gz' model_dir = tf.keras.utils.get_file( fname=model_name, origin=base_url + model_file, untar=True) model_dir = pathlib.Path(model_dir)/"saved_model" model = tf.saved_model.load(str(model_dir)) model = model.signatures['serving_default'] return model ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt' category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) ###Output _____no_output_____ ###Markdown For the sake of simplicity we will test on 2 images: ###Code # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images') TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg"))) TEST_IMAGE_PATHS ###Output _____no_output_____ ###Markdown Detection Load an object detection model: ###Code model_name = 'ssd_mobilenet_v1_coco_2017_11_17' detection_model = load_model(model_name) ###Output Downloading data from http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz 13033472/76534733 [====>.........................] - ETA: 1:43 ###Markdown Check the model's input signature; it expects a batch of 3-color images of type uint8: ###Code print(detection_model.inputs) ###Output _____no_output_____ ###Markdown And returns several outputs: ###Code detection_model.output_dtypes detection_model.output_shapes ###Output _____no_output_____ ###Markdown Add a wrapper function to call the model, and clean up the outputs: ###Code def run_inference_for_single_image(model, image): image = np.asarray(image) # The input needs to be a tensor, convert it using `tf.convert_to_tensor`. input_tensor = tf.convert_to_tensor(image) # The model expects a batch of images, so add an axis with `tf.newaxis`. input_tensor = input_tensor[tf.newaxis,...] # Run inference output_dict = model(input_tensor) # All outputs are batched tensors. # Convert to numpy arrays, and take index [0] to remove the batch dimension. # We're only interested in the first num_detections. num_detections = int(output_dict.pop('num_detections')) output_dict = {key:value[0, :num_detections].numpy() for key,value in output_dict.items()} output_dict['num_detections'] = num_detections # detection_classes should be ints. output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64) # Handle models with masks: if 'detection_masks' in output_dict: # Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( output_dict['detection_masks'], output_dict['detection_boxes'], image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5, tf.uint8) output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy() return output_dict ###Output _____no_output_____ ###Markdown Run it on each test image and show the results: ###Code def show_inference(model, image_path): # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = np.array(Image.open(image_path)) # Actual detection. output_dict = run_inference_for_single_image(model, image_np) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks_reframed', None), use_normalized_coordinates=True, line_thickness=8) display(Image.fromarray(image_np)) for image_path in TEST_IMAGE_PATHS: show_inference(detection_model, image_path) ###Output _____no_output_____ ###Markdown Instance Segmentation ###Code model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28" masking_model = load_model("mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28") ###Output _____no_output_____ ###Markdown The instance segmentation model includes a `detection_masks` output: ###Code masking_model.output_shapes for image_path in TEST_IMAGE_PATHS: show_inference(masking_model, image_path) ###Output _____no_output_____ ###Markdown Object Detection DemoWelcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if tf.__version__ < '1.4.0': raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!') ###Output C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module. ###Code from utils import label_map_util from utils import visualization_utils as vis_util ###Output C:\tensorflow1\models\research\object_detection\utils\visualization_utils.py:25: UserWarning: This call to matplotlib.use() has no effect because the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time. 
The backend was *originally* set to 'module://ipykernel.pylab.backend_inline' by the following code: File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel\__main__.py", line 3, in <module> app.launch_new_instance() File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 478, in start self.io_loop.start() File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 233, in dispatch_shell handler(stream, idents, msg) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 208, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 537, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2728, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2856, in run_ast_nodes if self.run_code(code, result): File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2910, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-0f0a85121700>", line 2, in <module> get_ipython().run_line_magic('matplotlib', 'inline') File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2095, in run_line_magic result = 
fn(*args,**kwargs) File "<decorator-gen-108>", line 2, in matplotlib File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\magic.py", line 187, in <lambda> call = lambda f, *a, **k: f(*a, **k) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\magics\pylab.py", line 99, in matplotlib gui, backend = self.shell.enable_matplotlib(args.gui) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2978, in enable_matplotlib pt.activate_matplotlib(backend) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\IPython\core\pylabtools.py", line 308, in activate_matplotlib matplotlib.pyplot.switch_backend(backend) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\matplotlib\pyplot.py", line 232, in switch_backend matplotlib.use(newbackend, warn=False, force=True) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\matplotlib\__init__.py", line 1305, in use reload(sys.modules['matplotlib.backends']) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\importlib\__init__.py", line 166, in reload _bootstrap._exec(spec, module) File "C:\Users\kappaavr\AppData\Local\Continuum\Anaconda3\lib\site-packages\matplotlib\backends\__init__.py", line 14, in <module> line for line in traceback.format_stack() import matplotlib; matplotlib.use('Agg') # pylint: disable=multiple-statements ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # What model to download. MODEL_NAME = 'ssd_inception_v2_coco_2018_01_28' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') NUM_CLASSES = 1 ###Output _____no_output_____ ###Markdown Download Model ###Code opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) ###Output _____no_output_____ ###Markdown Load a (frozen) Tensorflow model into memory. ###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) ###Output _____no_output_____ ###Markdown Helper code ###Code def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) ###Output _____no_output_____ ###Markdown Detection ###Code # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ] # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. 
image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ###Output _____no_output_____ ###Markdown Object Detection DemoWelcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if tf.__version__ < '1.4.0': raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!') ###Output _____no_output_____ ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module. ###Code from utils import label_map_util from utils import visualization_utils as vis_util ###Output _____no_output_____ ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # What model to download. MODEL_NAME = 'gun_graph' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('training', 'object-detection.pbtxt') NUM_CLASSES = 1 ###Output _____no_output_____ ###Markdown Download Model Load a (frozen) Tensorflow model into memory. ###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) ###Output _____no_output_____ ###Markdown Helper code ###Code def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) ###Output _____no_output_____ ###Markdown Detection ###Code # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(8, 10) ] # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. 
image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ###Output _____no_output_____ ###Markdown Object Detection DemoWelcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. Imports ###Code import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') ###Output _____no_output_____ ###Markdown Env setup ###Code # This is needed to display the images. %matplotlib inline ###Output _____no_output_____ ###Markdown Object detection importsHere are the imports from the object detection module. ###Code from utils import label_map_util from utils import visualization_utils as vis_util ###Output _____no_output_____ ###Markdown Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ###Code # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') ###Output _____no_output_____ ###Markdown Download Model ###Code opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) ###Output _____no_output_____ ###Markdown Load a (frozen) Tensorflow model into memory. 
###Code detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ###Output _____no_output_____ ###Markdown Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ###Code category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) ###Output _____no_output_____ ###Markdown Helper code ###Code def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) ###Output _____no_output_____ ###Markdown Detection ###Code # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ] # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. 
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ###Output _____no_output_____
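###Markdown A side note on the loop above: run_inference_for_single_image opens a fresh tf.Session and rebuilds the tensor handles for every image, which is slow. The cell below is a minimal sketch of one way to reuse a single session across all test images; it assumes the detection_graph, TEST_IMAGE_PATHS and load_image_into_numpy_array defined earlier and only fetches the box, score and class outputs. ###Code
# Sketch: share one session across the whole test set instead of one per image.
with detection_graph.as_default():
    with tf.Session() as sess:
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        tensor_dict = {key: detection_graph.get_tensor_by_name(key + ':0')
                       for key in ['num_detections', 'detection_boxes',
                                   'detection_scores', 'detection_classes']}
        for image_path in TEST_IMAGE_PATHS:
            image_np = load_image_into_numpy_array(Image.open(image_path))
            # Values in output_dict are still batched (leading dimension of 1).
            output_dict = sess.run(tensor_dict,
                                   feed_dict={image_tensor: np.expand_dims(image_np, 0)})
###Output _____no_output_____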
luna16/old_code/LUNA16_extract_patches.ipynb
###Markdown Simple script for extracting patches from LUNA16 datasetThis is a first pass. Let's keep things simple. The goal is to just extract 64x64 pixel patches around just the transverse slices with the candidates in the center. We'll have some 700k images. Only about 1,100 patches will have class 1 (true nodule). The remainder will be class 0 (non-nodule). We'll take this data and run it though a modified VGG classifier (done in a second script). If the classifier can make a good class prediction, then we know we've got data that will work with more advanced models (e.g. Faster R-CNN to both localize and classify the candidates in the full slice images) ###Code import SimpleITK as sitk import numpy as np import pandas as pd import os import ntpath # To get the data: # wget https://www.dropbox.com/sh/mtip9dx6zt9nb3z/AAAs2wbJxbNM44-uafZyoMVca/subset5.zip # The files are 7-zipped. Regular linux unzip won't work to uncompress them. Use 7za instead. # 7za e subset5.zip DATA_DIR = "/Volumes/data/tonyr/dicom/LUNA16/" cand_path = 'CSVFILES/candidates_with_annotations.csv' def extractCandidates(img_file): # Get the name of the file subjectName = ntpath.splitext(ntpath.basename(img_file))[0] # Strip off the .mhd extension # Read the list of candidate ROI dfCandidates = pd.read_csv(DATA_DIR+cand_path) numCandidates = dfCandidates[dfCandidates['seriesuid']==subjectName].shape[0] print('There are {} candidate nodules in this file.'.format(numCandidates)) numNonNodules = sum(dfCandidates[dfCandidates['seriesuid']==subjectName]['class'] == 0) numNodules = sum(dfCandidates[dfCandidates['seriesuid']==subjectName]['class'] == 1) #print('{} are true nodules (class 1) and {} are non-nodules (class 0)'.format(numNodules, numNonNodules)) # Read if the candidate ROI is a nodule (1) or non-nodule (0) candidateValues = dfCandidates[dfCandidates['seriesuid']==subjectName]['class'].values # Get the world coordinates (mm) of the candidate ROI center worldCoords = dfCandidates[dfCandidates['seriesuid']==subjectName][['coordX', 'coordY', 'coordZ']].values # Use SimpleITK to read the mhd image itkimage = sitk.ReadImage(img_file) # Get the real world origin (mm) for this image originMatrix = np.tile(itkimage.GetOrigin(), (numCandidates,1)) # Real world origin for this image (0,0) # Subtract the real world origin and scale by the real world (mm per pixel) # This should give us the X,Y,Z coordinates for the candidates candidatesPixels = (np.round(np.absolute(worldCoords - originMatrix) / itkimage.GetSpacing())).astype(int) # Replace the missing diameters with the 50th percentile diameter candidateDiameter = dfCandidates['diameter_mm'].fillna(dfCandidates['diameter_mm'].quantile(0.5)).values / itkimage.GetSpacing()[1] candidatePatches = [] imgAll = sitk.GetArrayFromImage(itkimage) # Read the image volume for candNum in range(numCandidates): #print('Extracting candidate patch #{}'.format(candNum)) candidateVoxel = candidatesPixels[candNum,:] xpos = int(candidateVoxel[0]) ypos = int(candidateVoxel[1]) zpos = int(candidateVoxel[2]) # Need to handle the candidates where the window would extend beyond the image boundaries windowSize = 32 x_lower = np.max([0, xpos - windowSize]) # Return 0 if position off image x_upper = np.min([xpos + windowSize, itkimage.GetWidth()]) # Return maxWidth if position off image y_lower = np.max([0, ypos - windowSize]) # Return 0 if position off image y_upper = np.min([ypos + windowSize, itkimage.GetHeight()]) # Return maxHeight if position off image # SimpleITK is x,y,z. Numpy is z, y, x. 
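# Take the single transverse (axial) slice at the candidate's z index and crop the window
# around (x, y); because the bounds were clipped above, patches near the image border
# can come out smaller than the nominal 64x64.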
imgPatch = imgAll[zpos, y_lower:y_upper, x_lower:x_upper] # Normalize to the Hounsfield units # TODO: I don't think we should normalize into Housefield units imgPatchNorm = imgPatch #normalizePlanes(imgPatch) candidatePatches.append(imgPatchNorm) # Append the candidate image patches to a python list return candidatePatches, candidateValues, candidateDiameter from scipy.misc import toimage # We need to save the array as an image. # This is the easiest way. Matplotlib seems to like adding a white border that is hard to kill. def SavePatches(img_file, patchesArray, valuesArray): saveDir = ntpath.dirname(img_file) + '/patches' try: os.stat(saveDir) except: os.mkdir(saveDir) subjectName = ntpath.splitext(ntpath.basename(img_file))[0] print('Saving image patches for file {}.'.format(subjectName)) for i in range(len(valuesArray)): print('\r{} of {}'.format(i+1, len(valuesArray))), im = toimage(patchesArray[i]) im.save(saveDir + '/{}_{}_{}.png'.format(subjectName, i, valuesArray[i])) print('Finished {}'.format(subjectName)) i = 0 for root, dirs, files in os.walk(DATA_DIR): for file in files: if (file.endswith('.mhd')) & ('__MACOSX' not in root): # Don't get the Macintosh directory img_file = os.path.join(root, file) patchesArray, valuesArray, noduleDiameter = extractCandidates(img_file) SavePatches(img_file, patchesArray, valuesArray) ###Output There are 1129 candidate nodules in this file. Saving image patches for file 1.3.6.1.4.1.14519.5.2.1.6279.6001.126264578931778258890371755354. 1129 of 1129 Finished 1.3.6.1.4.1.14519.5.2.1.6279.6001.126264578931778258890371755354 There are 1262 candidate nodules in this file. Saving image patches for file 1.3.6.1.4.1.14519.5.2.1.6279.6001.130438550890816550994739120843. 601 of 1262
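###Markdown The extraction code above references a normalizePlanes helper but leaves it commented out and never defines it. For reference, a sketch of the Hounsfield-unit normalization that is commonly used with LUNA16 data is shown below; the clipping window of -1000 to 400 HU is the conventional choice for lung CT, not something specified in this notebook. ###Code
def normalizePlanes(npzarray, minHU=-1000.0, maxHU=400.0):
    # Sketch: clip Hounsfield units to [minHU, maxHU] and rescale to [0, 1].
    npzarray = (npzarray - minHU) / (maxHU - minHU)
    npzarray[npzarray > 1] = 1.0
    npzarray[npzarray < 0] = 0.0
    return npzarray
###Output _____no_output_____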
Structured/ExceptionsExercises.ipynb
###Markdown Exercises - Exceptions Write code that lets the user enter an integer value, handling any **unexpected** situations that may occur. ###Code while True : try : value = int(input("Enter an integer value: ")) except ValueError : print("\tWARNING: enter a valid value") else : break ###Output _____no_output_____ ###Markdown Write code that lets the user enter a positive integer value, handling any **unexpected** situations that may occur. ###Code # ... ###Output _____no_output_____
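###Markdown One possible solution sketch for the second exercise, reusing the same try/except pattern and adding a sign check; this is an illustration, not part of the original exercise notebook. ###Code
while True :
    try :
        value = int(input("Enter a positive integer value: "))
    except ValueError :
        print("\tWARNING: enter a valid value")
    else :
        if value > 0 :
            break
        print("\tWARNING: the value must be positive")
###Output _____no_output_____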
demos/MC_vs_QMC.ipynb
###Markdown A Monte Carlo vs Quasi-Monte Carlo ComparisonMonte Carlo algorithms work on independent identically distributed (IID) points while Quasi-Monte Carlo algorithms work on low discrepancy (LD) sequences. LD generators, such as those for the lattice and Sobol' sequences, provide samples whose space filling properties can be exploited by Quasi-Monte Carlo algorithms. ###Code import pandas as pd pd.options.display.float_format = '{:.2e}'.format from matplotlib import pyplot as plt import matplotlib %matplotlib inline plt.rc('font', size=16) # controls default text sizes plt.rc('axes', titlesize=16) # fontsize of the axes title plt.rc('axes', labelsize=16) # fontsize of the x and y labels plt.rc('xtick', labelsize=16) # fontsize of the tick labels plt.rc('ytick', labelsize=16) # fontsize of the tick labels plt.rc('legend', fontsize=16) # legend fontsize plt.rc('figure', titlesize=16) # fontsize of the figure title ###Output _____no_output_____ ###Markdown Vary Absolute ToleranceTesting Parameters- relative tolerance = 0- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$- $d=3$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_abs_tol.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdUniform (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df['abs_tol'] = df['abs_tol'].round(4) df_grouped = df.groupby(['Problem']) df_abs_tols = df_grouped['abs_tol'].apply(list).reset_index(name='abs_tol') df_samples = df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['abs_tol'].isin([.01,.05,.1])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: abs_tols = df_abs_tols[df_abs_tols['Problem']==problem]['abs_tol'].tolist()[0] samples = df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(abs_tols,samples,label=problem) ax[1].plot(abs_tols,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Absolute Tolerance') ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.8,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Absolute Tolerances') plt.subplots_adjust(wspace=.15, hspace=0); ###Output _____no_output_____ ###Markdown Quasi-Monte Carlo takes less time and fewer samples to achieve the same accuracy as regular Monte CarloThe number of points for Monte Carlo algorithms is $\mathcal{O}(1/\epsilon^2)$ while Quasi-Monte Carlo algorithms can be as efficient as $\mathcal{O}(1/\epsilon)$ Vary DimensionTesting Parameters- absolute tolerance = 0- relative tolerance = .01- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_dimension.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = 
df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdUniform (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df_grouped = df.groupby(['Problem']) df_dims = df_grouped['dimension'].apply(list).reset_index(name='dimension') df_samples = df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['dimension'].isin([10,30])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: dimension = df_dims[df_dims['Problem']==problem]['dimension'].tolist()[0] samples = df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(dimension,samples,label=problem) ax[1].plot(dimension,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Dimension') ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.9,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Dimensions'); ###Output _____no_output_____ ###Markdown A Monte Carlo vs Quasi-Monte Carlo ComparisonMonte Carlo algorithms work on independent identically distributed (IID) points while Quasi-Monte Carlo algorithms work on low discrepancy (LD) sequences. LD generators, such as those for the lattice and Sobol' sequences, provide samples whose space filling properties can be exploited by Quasi-Monte Carlo algorithms. ###Code import pandas as pd pd.options.display.float_format = '{:.2e}'.format from matplotlib import pyplot as plt import matplotlib %matplotlib inline plt.rc('font', size=16) # controls default text sizes plt.rc('axes', titlesize=16) # fontsize of the axes title plt.rc('axes', labelsize=16) # fontsize of the x and y labels plt.rc('xtick', labelsize=16) # fontsize of the tick labels plt.rc('ytick', labelsize=16) # fontsize of the tick labels plt.rc('legend', fontsize=16) # legend fontsize plt.rc('figure', titlesize=16) # fontsize of the figure title ###Output _____no_output_____ ###Markdown Vary Absolute ToleranceTesting Parameters- relative tolerance = 0- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$- $d=3$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_abs_tol.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdUniform (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df['abs_tol'] = df['abs_tol'].round(4) df_grouped = df.groupby(['Problem']) df_abs_tols = df_grouped['abs_tol'].apply(list).reset_index(name='abs_tol') df_samples = df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['abs_tol'].isin([.01,.05,.1])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: abs_tols = df_abs_tols[df_abs_tols['Problem']==problem]['abs_tol'].tolist()[0] samples = 
df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(abs_tols,samples,label=problem) ax[1].plot(abs_tols,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Absolute Tolerance') ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.8,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Absolute Tolerances') plt.subplots_adjust(wspace=.15, hspace=0); ###Output _____no_output_____ ###Markdown Quasi-Monte Carlo takes less time and fewer samples to achieve the same accuracy as regular Monte CarloThe number of points for Monte Carlo algorithms is $\mathcal{O}(1/\epsilon^2)$ while Quasi-Monte Carlo algorithms can be as efficient as $\mathcal{O}(1/\epsilon)$ Vary DimensionTesting Parameters- absolute tolerance = 0- relative tolerance = .01- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_dimension.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdUniform (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df_grouped = df.groupby(['Problem']) df_dims = df_grouped['dimension'].apply(list).reset_index(name='dimension') df_samples = df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['dimension'].isin([10,30])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: dimension = df_dims[df_dims['Problem']==problem]['dimension'].tolist()[0] samples = df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(dimension,samples,label=problem) ax[1].plot(dimension,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Dimension') ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.9,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Dimensions'); ###Output _____no_output_____ ###Markdown A Monte Carlo vs Quasi-Monte Carlo ComparisonMonte Carlo algorithms work on independent identically distributed (IID) points while Quasi-Monte Carlo algorithms work on low discrepancy (LD) sequences. LD generators, such as those for the lattice and Sobol' sequences, provide samples whose space filling properties can be exploited by Quasi-Monte Carlo algorithms. 
###Code import pandas as pd pd.options.display.float_format = '{:.2e}'.format from matplotlib import pyplot as plt import matplotlib %matplotlib inline plt.rc('font', size=16) # controls default text sizes plt.rc('axes', titlesize=16) # fontsize of the axes title plt.rc('axes', labelsize=16) # fontsize of the x and y labels plt.rc('xtick', labelsize=16) # fontsize of the tick labels plt.rc('ytick', labelsize=16) # fontsize of the tick labels plt.rc('legend', fontsize=16) # legend fontsize plt.rc('figure', titlesize=16) # fontsize of the figure title ###Output _____no_output_____ ###Markdown Vary Absolute ToleranceTesting Parameters- relative tolerance = 0- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$- $d=3$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_abs_tol.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdGaussian (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df['abs_tol'] = df['abs_tol'].round(4) df_grouped = df.groupby(['Problem']) df_abs_tols = df_grouped['abs_tol'].apply(list).reset_index(name='abs_tol') df_samples = df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['abs_tol'].isin([.01,.05,.1])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: abs_tols = df_abs_tols[df_abs_tols['Problem']==problem]['abs_tol'].tolist()[0] samples = df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(abs_tols,samples,label=problem) ax[1].plot(abs_tols,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Absolute Tolerance') ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.8,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Absolute Tolerances') plt.subplots_adjust(wspace=.15, hspace=0); ###Output _____no_output_____ ###Markdown Quasi-Monte Carlo takes less time and fewer samples to achieve the same accuracy as regular Monte CarloThe number of points for Monte Carlo algorithms is $\mathcal{O}(1/\epsilon^2)$ while Quasi-Monte Carlo algorithms can be as efficient as $\mathcal{O}(1/\epsilon)$ Vary DimensionTesting Parameters- absolute tolerance = 0- relative tolerance = .01- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_dimension.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdGaussian (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df_grouped = df.groupby(['Problem']) df_dims = df_grouped['dimension'].apply(list).reset_index(name='dimension') df_samples = 
df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['dimension'].isin([10,30])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: dimension = df_dims[df_dims['Problem']==problem]['dimension'].tolist()[0] samples = df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(dimension,samples,label=problem) ax[1].plot(dimension,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Dimension') ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.9,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Dimensions'); ###Output _____no_output_____ ###Markdown A Monte Carlo vs Quasi-Monte Carlo ComparisonMonte Carlo algorithms work on independent identically distributed (IID) points while Quasi-Monte Carlo algorithms work on low discrepancy (LD) sequences. LD generators, such as those for the lattice and Sobol' sequences, provide samples whose space filling properties can be exploited by Quasi-Monte Carlo algorithms. ###Code import pandas as pd pd.options.display.float_format = '{:.2e}'.format from matplotlib import pyplot as plt import matplotlib %matplotlib inline plt.rc('font', size=16) # controls default text sizes plt.rc('axes', titlesize=16) # fontsize of the axes title plt.rc('axes', labelsize=16) # fontsize of the x and y labels plt.rc('xtick', labelsize=16) # fontsize of the tick labels plt.rc('ytick', labelsize=16) # fontsize of the tick labels plt.rc('legend', fontsize=16) # legend fontsize plt.rc('figure', titlesize=16) # fontsize of the figure title ###Output _____no_output_____ ###Markdown Vary Absolute ToleranceTesting Parameters- relative tolerance = 0- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$- $d=3$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_abs_tol.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdUniform (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df['abs_tol'] = df['abs_tol'].round(4) df_grouped = df.groupby(['Problem']) df_abs_tols = df_grouped['abs_tol'].apply(list).reset_index(name='abs_tol') df_samples = df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['abs_tol'].isin([.01,.05,.1])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: abs_tols = df_abs_tols[df_abs_tols['Problem']==problem]['abs_tol'].tolist()[0] samples = df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(abs_tols,samples,label=problem) ax[1].plot(abs_tols,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Absolute Tolerance') 
ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.8,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Absolute Tolerances') plt.subplots_adjust(wspace=.15, hspace=0); ###Output _____no_output_____ ###Markdown Quasi-Monte Carlo takes less time and fewer samples to achieve the same accuracy as regular Monte CarloThe number of points for Monte Carlo algorithms is $\mathcal{O}(1/\epsilon^2)$ while Quasi-Monte Carlo algorithms can be as efficient as $\mathcal{O}(1/\epsilon)$ Vary DimensionTesting Parameters- absolute tolerance = 0- relative tolerance = .01- Results averaged over 3 trialsKeister Integrand- $y_i = \pi^{d/2} \cos(||\boldsymbol{x}_i||_2)$Gaussian True Measure- $\mathcal{N}(\boldsymbol{0},\boldsymbol{I}/2)$ ###Code df = pd.read_csv('../workouts/mc_vs_qmc/out/vary_dimension.csv') df['Problem'] = df['Stopping Criterion'] + ' ' + df['Distribution'] + ' (' + df['MC/QMC'] + ')' df = df.drop(['Stopping Criterion','Distribution','MC/QMC'],axis=1) problems = ['CubMCCLT IIDStdUniform (MC)', 'CubMCG IIDStdUniform (MC)', 'CubQMCCLT Sobol (QMC)', 'CubQMCLatticeG Lattice (QMC)', 'CubQMCSobolG Sobol (QMC)'] df = df[df['Problem'].isin(problems)] df_grouped = df.groupby(['Problem']) df_dims = df_grouped['dimension'].apply(list).reset_index(name='dimension') df_samples = df_grouped['n_samples'].apply(list).reset_index(name='n') df_times = df.groupby(['Problem'])['time'].apply(list).reset_index(name='time') df[df['dimension'].isin([10,30])].set_index('Problem') fig,ax = plt.subplots(nrows=1, ncols=2, figsize=(18, 5)) for problem in problems: dimension = df_dims[df_dims['Problem']==problem]['dimension'].tolist()[0] samples = df_samples[df_samples['Problem']==problem]['n'].tolist()[0] times = df_times[df_times['Problem']==problem]['time'].tolist()[0] ax[0].plot(dimension,samples,label=problem) ax[1].plot(dimension,times,label=problem) for ax_i in ax: ax_i.set_xscale('log', basex=10) ax_i.set_yscale('log', basey=10) ax_i.spines['right'].set_visible(False) ax_i.spines['top'].set_visible(False) ax_i.set_xlabel('Dimension') ax[0].legend(loc='upper right', frameon=False,ncol=1,bbox_to_anchor=(2.9,1)) ax[0].set_ylabel('Total Samples') ax[1].set_ylabel('Runtime') fig.suptitle('Comparing Dimensions'); ###Output _____no_output_____
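###Markdown The cells above rely on pre-computed CSVs and QMCPy-style `Cub*` stopping criteria. As a hedged, self-contained sketch of the same comparison, the cell below estimates the d=3 Keister integral once with IID Gaussian sampling and once with a scrambled Sobol' sequence, using only NumPy and SciPy; the sample size, the seed, and the use of `scipy.stats.qmc` are choices made here and are not part of the workouts that produced the CSVs.
###Code
# Minimal sketch (assumptions: scipy.stats.qmc is available; n = 2^14; seed = 7).
import numpy as np
from scipy.stats import norm, qmc

d = 3

def keister(x):
    # Integrand pi^{d/2} * cos(||x||_2), evaluated row-wise on an (n, d) array.
    return np.pi ** (d / 2) * np.cos(np.linalg.norm(x, axis=1))

rng = np.random.default_rng(7)

# IID Monte Carlo: draw directly from N(0, I/2).
x_iid = rng.normal(scale=np.sqrt(0.5), size=(2 ** 14, d))
print("IID MC estimate:    ", keister(x_iid).mean())

# Quasi-Monte Carlo: scrambled Sobol' points mapped through the inverse normal CDF.
u = qmc.Sobol(d, scramble=True, seed=7).random_base2(m=14)
x_qmc = norm.ppf(u) * np.sqrt(0.5)
print("Sobol' QMC estimate:", keister(x_qmc).mean())
###Output _____no_output_____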
code/notebooks/NCAA Probability Exploration.ipynb
###Markdown Hello! Welcome to my notebook.First thing: Import libraries and connect to the database ###Code from db import get_db import numpy as np import pandas as pd import matplotlib.pyplot as plt db=get_db() pipeline = [ {'$match':{'OppAst':{'$ne':np.nan}}}, {'$group':{'_id':{'TmName':'$TmName','Season':'$Season'}, 'TmPF':{'$sum':'$TmPF'}}}, {'$sort':{'TmPF':-1}} ] results = db.games.aggregate(pipeline) results = list(results) [print(row) for row in results[0:5]] [print(row) for row in results[-5:]] query = {'OppAst':{'$ne':np.nan}, 'TmName':'Duke' #,'Season':2018 } fields = {'_id':0, 'TmPF':1, 'OppPF':1} results = db.games.find(query,fields) temp = list(results) df = pd.DataFrame(temp) fig, axes = plt.subplots(2, 1,sharex=True) axes[0].hist(df.TmPF,bins=np.arange(0, 130, 5).tolist(),label='Pts For') axes[1].set_xlabel('Points') axes[1].hist(df.OppPF,bins=np.arange(0, 130, 5).tolist(), label = 'Pts Against') plt.show() ###Output _____no_output_____
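###Markdown The notebook above imports a project-local `db.get_db()` helper that is not shown here. Purely as a hedged sketch, such a helper is typically a thin wrapper around `pymongo`; the connection URI and database name below are placeholders, not the project's actual settings.
###Code
# Hypothetical sketch of the db.get_db() helper used above (real module not shown).
from pymongo import MongoClient

def get_db(uri="mongodb://localhost:27017", name="ncaa"):
    """Return a database handle exposing collections such as db.games."""
    client = MongoClient(uri)
    return client[name]
###Output _____no_output_____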
files/general_overview/DataModel.ipynb
###Markdown Объектно-ориентированное программирование Объявление класса* Все обычные поля класса публичные* Все методы класса виртуальные* Методы явно принимают экземпляр класса как первый аргумент ###Code class Vector2D: x = 0 y = 0 def norm(self): return (self.x**2 + self.y**2)**0.5 def norm(self): return (self.x**2 + self.y**2)**0.5 vec = Vector2D() vec.x = 5 vec.y = 5 vec.norm() norm(vec) ###Output _____no_output_____ ###Markdown Больше о полях и методах Data attributess ###Code vec.z = 10 vec.z del vec.z vec.z ###Output _____no_output_____ ###Markdown Все в Python это объекты, классы это тоже объекты ###Code type(Vector2D) # Vector2D это экземпляр класса type isinstance(Vector2D, object) Vector2D.x, Vector2D.y # x и y это атрибутты класса Vector2D.norm(vec) ###Output _____no_output_____ ###Markdown Мутабельные объекты как аттрибуты класса это плохая идея ###Code class Foo: bar = [] foo1 = Foo() foo2 = Foo() foo1.bar.append("spam") foo2.bar ###Output _____no_output_____ ###Markdown Методы объекта отличаются от полей-функций объекта ###Code vec.func = lambda self : print(self.x) vec.func() # self не передается в поля-функции type(vec.norm) type(vec.func) ###Output _____no_output_____ ###Markdown Но метод остается методом ###Code func_norm = vec.norm type(func_norm) func_norm() ###Output _____no_output_____ ###Markdown Методы это функции класса ###Code type(vec.norm) type(Vector2D.norm) vec.norm() == Vector2D.norm(vec) def another_norm(self): return abs(self.x + self.y) Vector2D.another_norm = another_norm type(Vector2D.another_norm) vec = Vector2D() vec.another_norm() type(vec.another_norm) ###Output _____no_output_____ ###Markdown Приватные поля* `_spam` --- поля чьё имя начинается с подчеркивания подразумеваются как не публичные ###Code class Foo: spam = "public" _spam = "типа приватное" foo = Foo() ###Output _____no_output_____ ###Markdown Магические методы ###Code class Foo: pass dir(Foo) def fun(): pass dir(fun) ###Output _____no_output_____ ###Markdown * Управляют внутренней работой объектов* Вызываются при использовании синтаксических конструкций* Вызываются встроенными (builtins) функциями* Рефлексия и метапрограммирование ###Code class TenItemList: def __len__(self): return 10 ten_item_list = TenItemList() len(ten_item_list) ###Output _____no_output_____ ###Markdown Всё в Python является объектом, а все синтаксические конструкции сводятся к вызовам магическим методов ###Code vec = Vector2D() vec.__getattribute__("x") vec.x = 5 vec.__getattribute__("x") vec.__setattr__("x", 10) vec.x ###Output _____no_output_____ ###Markdown На самом деле все объекты реализованы как словари хранящие атрибуты объекта (однако есть возможности для оптимизаций) ###Code vec = Vector2D() vec.__dict__ Vector2D.__dict__ vec.x = 5 vec.__dict__ class Foo: def __set ###Output _____no_output_____
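###Markdown The last cell above is cut off at `def __set`. A minimal sketch of where it was likely heading, an override of the `__setattr__` magic method, is given below; the class name is reused from the notebook, but the logging behaviour is purely illustrative.
###Code
# Sketch of a __setattr__ override; the printed message is an illustrative choice.
class Foo:
    def __setattr__(self, name, value):
        print(f"setting {name} = {value!r}")
        # Delegate to the default machinery so the attribute actually lands in
        # the instance __dict__ instead of recursing through __setattr__ forever.
        super().__setattr__(name, value)

foo = Foo()
foo.spam = 42        # prints: setting spam = 42
print(foo.__dict__)  # {'spam': 42}
###Output _____no_output_____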
notebooks/Solute Comparison.ipynb
###Markdown Load solute parameters ###Code def load_parameters(solutes, nclusters, linkage='ward'): params = {i : {} for i in solutes} for n, s in zip(nclusters, solutes): if s != 'GCL': ihmm = file_rw.load_object('saved_parameters/2000iter_%s_unseeded.pl' % s)['ihmm'] else: ihmm = file_rw.load_object('saved_parameters/2000iter_%s.pl' % s)['ihmm'] A, sigma, mu, T, mu_weights = organize_parameters(ihmm) # arrange parameters into arrays suitable for clustering params[s]['z'] = file_rw.load_object('clusters_%s_%s.pl' %(s, linkage))[0][n]['z'] params[s]['final_parameters'] = file_rw.load_object('clusters_%s_%s.pl' %(s, linkage))[0][n] params[s]['unclustered_params'] = {'A': A, 'sigma': sigma, 'mu': mu, 'T': T, 'mu_weights': mu_weights} params[s]['density'] = get_density(s) params[s]['coord'] = get_coordination(s) params[s]['hbonded'], params[s]['monomer_hbonds'], params[s]['definitions'] = get_hbonds(s) params[s]['unclustered_realizations'] = file_rw.load_object('unclustered_%s.pl' % s) return params solutes = ['MET', 'GCL', 'URE', 'ACH'] nclusters = [10, 10, 10, 10] params = load_parameters(solutes, nclusters) ###Output _____no_output_____ ###Markdown Plot time spent hbonding and associated with sodium ###Code def hbonded_associated(residues, params, bar_width=0.75, bar_cmap = plt.cm.cool, savename=None): fig, haax = plt.subplots(figsize=(6, 4.5)) names = {'MET': 'methanol', 'URE': 'urea', 'GCL': 'ethylene\n glycol', 'ACH': 'acetic\n acid'} for i, r in enumerate(residues): z = params[r]['z'] hbonded = params[r]['hbonded'] coordinated = params[r]['coord'] nsolute = hbonded.shape[0] nclusters = np.unique(z).size interactions = np.zeros([nclusters, 3]) # [hbonded, both, associated] both = 0 only_hbonded = 0 only_coord = 0 for t in range(nsolute): zipped = np.array([hbonded[t, :], coordinated[t, :]]).astype(bool) both += len(np.where(zipped.sum(axis=0) == 2)[0]) not_both_ndx = np.where(zipped.sum(axis=0) < 2)[0] only_hbonded += len(np.where(zipped[0, not_both_ndx])[0]) only_coord += len(np.where(zipped[1, not_both_ndx])[0]) for s in range(np.unique(z).size): ndx = np.where(z[t, :] == s)[0] data = zipped[:, ndx] not_both = np.where(data.sum(axis=0) < 2)[0] interactions[s, 1] += len(np.where(data.sum(axis=0) == 2)[0]) interactions[s, 0] += len(np.where(data[0, not_both])[0]) interactions[s, 2] += len(np.where(data[1, not_both])[0]) bar_colors = np.array([bar_cmap(j) for j in np.linspace(50, 250, 4).astype(int)]) tot = z.size both /= tot only_hbonded /= tot only_coord /= tot #msd = file_rw.load_object('trajectories/%s_msd.pl' % res) haax.bar(i, only_hbonded, bar_width, color=bar_colors[0], edgecolor='white', linewidth=1) haax.bar(i, both, bar_width, bottom=only_hbonded, color=bar_colors[1], edgecolor='white', linewidth=1) haax.bar(i, only_coord, bar_width, bottom=only_hbonded + both, color=bar_colors[2], edgecolor='white', linewidth=1) #msdax.bar(i + bar_width/2, msd.MSD.mean(axis=1)[2000], bar_width, color=bar_colors[3], edgecolor='white', lw=2) haax.tick_params(labelsize=14) haax.set_xticks([0, 1, 2, 3]) haax.set_xticklabels([names[res] for res in residues]) haax.set_ylabel('Fraction of time spent interacting', fontsize=14) haax.set_ylim(0, 1) # msdax.tick_params(labelsize=14) # msdax.set_ylim(0, 4.5) # msdax.set_ylabel('Mean Squared Displacement', fontsize=14) hatches = [] labels = ['Hbonded', 'Both', 'Associated'] for i in range(3): hatches.append(mpatches.Patch(facecolor=bar_colors[i], label=labels[i], edgecolor='white')) fig.legend(handles=hatches, fontsize=14, ncol=3, loc='upper left', 
bbox_to_anchor=(0.15, 0.94), columnspacing=0.9, frameon=False) fig.tight_layout() if savename is not None: fig.savefig(savename) save = True if save: savename = '/home/ben/github/LLC_Membranes/Ben_Manuscripts/hdphmm/figures/hbonds_assoc_summary.pdf' else: savename = None hbonded_associated(solutes, params, bar_width=0.75, bar_cmap=plt.cm.cool, savename=savename) ###Output _____no_output_____ ###Markdown Plot radial distribution functions based on clustered parameters ###Code def rdf(residues, params, cmap=plt.cm.jet, savename=None): colors = np.array([cmap(i) for i in np.linspace(50, 225, len(residues)).astype(int)]) colors = ['xkcd:red', 'xkcd:green', 'xkcd:blue', 'xkcd:orange'] names = {'MET': 'methanol', 'URE': 'urea', 'GCL': 'ethylene glycol', 'ACH': 'acetic acid'} fig, rdfax = plt.subplots(len(residues), 1, sharex=True, gridspec_kw = {'wspace':0, 'hspace':0}, figsize=(6, 4.5)) for i, r in enumerate(residues): z = params[r]['z'] mu = params[r]['unclustered_params']['mu'] mu_weights = params[r]['unclustered_params']['mu_weights'] mur = np.linalg.norm(mu[:2, :], axis=0) rdf = [] for j, m in enumerate(mur): rdf += [m] * mu_weights[j] nbins = 15 n, edges = np.histogram(rdf, range=(0, 3), bins=15, density=True) x = np.zeros([2 * nbins + 2]) y = np.zeros_like(x) x[1::2] = edges x[2::2] = edges[:-1] x += edges[1] - edges[0] y[:-2:2] = n y[1:-1:2] = n rdfax[i].plot(x, y, color=colors[i], lw=2) rdfax[i].hist(rdf, bins=15, range=(0, 3), color=colors[i], density=True, alpha=0.7) rdfax[i].set_ylim(0, 2) rdfax[i].set_yticks([1, 2]) rdfax[i].tick_params(labelsize=14) hatch = mpatches.Patch(facecolor='xkcd:white', label=names[r], edgecolor='white') rdfax[i].legend(handles = [hatch], loc='upper right', frameon=False, fontsize=14) rdfax[2].set_ylabel(' Cluster frequency', fontsize=14) rdfax[3].set_xlabel('Radial distance from nearest pore center (nm)', fontsize=14) fig.tight_layout() if savename is not None: fig.savefig(savename) save = True if save: savename = '/home/ben/github/LLC_Membranes/Ben_Manuscripts/hdphmm/figures/rdf_summary.pdf' else: savename = None rdf(solutes, params, savename=savename, cmap=plt.cm.prism) ###Output _____no_output_____ ###Markdown Compare longest dwell times of prevelant states ###Code def dwell_time_comparison(residues, params, msds, cmap=plt.cm.jet, savename=None, dt=0.5, bar_width=0.6): colors = np.array([cmap(i) for i in np.linspace(50, 225, len(residues)).astype(int)]) colors = ['xkcd:red', 'xkcd:green', 'xkcd:blue', 'xkcd:orange'] names = {'MET': 'methanol', 'URE': 'urea', 'GCL': 'ethylene glycol', 'ACH': 'acetic acid'} fig, ax = plt.subplots(figsize=(6, 4.5)) #ax2 = ax.twinx() ax_color = 'xkcd:azure' ax2_color = 'xkcd:aquamarine' for i, r in enumerate(residues): # T = params[r]['final_parameters']['T'] z = params[r]['z'] dwells = [] dwells_perparticle = [] for t in range(24): sp = ts.switch_points(z[t, :]) dwells += (sp[1:] - sp[:-1]).tolist() dwells_perparticle.append((sp[1:] - sp[:-1]).tolist()) #prevelant_states, state_counts = prevalent_states(z, percent=50) # states that are at least half of trajectories #max_selfT_ndx = prevelant_states[np.argmax(np.diag(T.mean(axis=0))[prevelant_states])] #max_selfT = T[:, max_selfT_ndx, max_selfT_ndx] dwells = np.array(dwells) dwell_boot = [] for b in range(200): ndx = np.random.choice(len(dwells), size=len(dwells), replace=True) dwell_boot.append(np.percentile(dwells[ndx], 95)) # dwells = dt / (1 - max_selfT) # yerr = np.abs(np.array([[np.percentile(dwells, 16), np.percentile(dwells, 84)]]).T - dwells.mean()) # ax.bar(i - 
bar_width/2, dwells.mean(), bar_width, color=ax_color, yerr=yerr) # ax.bar(i - bar_width/2, dt*np.mean(dwell_boot), bar_width, color=ax_color, yerr=dt*np.std(dwell_boot)) # ax2.bar(i + bar_width/2, msds[r][0], bar_width, yerr=msds[r][1], color=ax2_color) #ax.scatter(msds[r][0], np.mean(dwell_boot), color=colors[i]) #ax.scatter(np.mean(dwell_boot), msds[r][0], color=colors[i]) ax.errorbar(dt*np.mean(dwell_boot), msds[r][0], xerr=dt*np.std(dwell_boot), yerr=msds[r][1], color=colors[i], marker='o', capsize=5, capthick=2) # ax.set_xticks([0, 1, 2, 3]) # ax.set_xticklabels([names[r] for r in residues], fontsize=14) #ax.invert_yaxis() ax.set_xticks([50, 100, 150, 200, 250]) ax.set_ylabel('MSD after 1000 ns (nm$^2$)', fontsize=14) ax.set_xlabel('95$^{th}$ Percentile Dwell Time (ns)', fontsize=14) ax.tick_params(labelsize=14)#, axis='y', color=ax_color) handles = [] for i, r in enumerate(residues): handles.append(mlines.Line2D([], [], color=colors[i], marker='D', label=names[r])) labels = [h.get_label() for h in handles] ax.legend(handles, labels, fontsize=14) #fig.legend(handles=[hatch1, hatch2], fontsize=14, loc='lower left', bbox_to_anchor=(0.38, 0.15)) # ax2.tick_params(labelsize=14, axis='y', colors=ax2_color) # ax2.set_ylabel('MSD after 1000 ns (nm$^2$)', fontsize=14) # ax.spines['left'].set_color(ax_color) # ax.spines['right'].set_color(ax2_color) fig.tight_layout() if savename is not None: fig.savefig(savename) plt.show() z = params['ACH']['z'] dwells = [] dwells_perparticle = [] for t in range(24): sp = ts.switch_points(z[t, :]) dwells += (sp[1:] - sp[:-1]).tolist() dwells_perparticle.append((sp[1:] - sp[:-1]).tolist()) dwells = np.array(dwells) dwell_boot = [] for i in range(200): ndx = np.random.choice(len(dwells), size=len(dwells), replace=True) dwell_boot.append(np.percentile(dwells[ndx], 95)) dwells_perparticle = np.array(dwells_perparticle) dwells_perparticle_boot = [] for i in range(200): ndx = np.random.choice(z.shape[0], size=z.shape[0], replace=True) boot = [] for n in ndx: boot += dwells_perparticle[n] dwells_perparticle_boot.append(np.percentile(boot, 95)) print(np.mean(dwell_boot), np.std(dwell_boot)) print(np.mean(dwells_perparticle_boot), np.std(dwells_perparticle_boot)) #np.percentile(dwells, 95) msds = dict() timelag = 2000 for s in solutes: msd = file_rw.load_object('trajectories/%s_msd.pl' % s) msds[s] = (msd.MSD_average[timelag], msd.limits[:, [timelag]]) save = True if save: savename = '/home/ben/github/LLC_Membranes/Ben_Manuscripts/hdphmm/figures/dwell_time_summary.pdf' else: savename = None dwell_time_comparison(solutes, params, msds, savename=savename, cmap=plt.cm.prism, bar_width=0.4) ###Output _____no_output_____ ###Markdown Plot local density versus expected dwell time ###Code def density_comparison(residues, params, percent=0, savename=None, cmap=plt.cm.prism, ylims=(0, 100)): colors = np.array([cmap(i) for i in np.linspace(50, 225, len(residues)).astype(int)]) colors = ['xkcd:red', 'xkcd:green', 'xkcd:blue', 'xkcd:orange'] names = {'MET': 'methanol', 'URE': 'urea', 'GCL': 'ethylene glycol', 'ACH': 'acetic acid'} fig, ax = plt.subplots(figsize=(6, 4.5)) for c, r in enumerate(residues): z = params[r]['z'] density = params[r]['density'] T = params[r]['final_parameters']['T'] dominant_states, dominant_state_counts = prevalent_states(z, percent=percent) nT = density.shape[0] for i, s in enumerate(dominant_states): dens = [] for t in range(density.shape[1]): ndx = np.where(z[t, :] == s)[0] if len(ndx) > 0: dens += density[ndx, t].tolist() dwell = 1 / (1 - T[:, s, 
s]) ax.errorbar(np.mean(dens), 0.5*dwell.mean(), color=colors[c], marker='D', yerr=dwell.std()) ax.set_ylim(ylims) ax.tick_params(labelsize=14) ax.set_xlabel('Number density (heavy atoms / nm$^3$)', fontsize=14) ax.set_ylabel('Expected Dwell Time (ns)', fontsize=14) handles = [] for i, r in enumerate(residues): handles.append(mlines.Line2D([], [], color=colors[i], marker='D', label=names[r])) labels = [h.get_label() for h in handles] ax.legend(handles, labels, fontsize=14) fig.tight_layout() if savename is not None: fig.savefig(savename) plt.show() save = True if save: savename = '/home/ben/github/LLC_Membranes/Ben_Manuscripts/hdphmm/figures/density_comparison.pdf' else: savename = None #params['ACH']['final_parameters']['T'] = params['ACH']['final_parameters']['T'][1:, ...] #print(params['ACH']['final_parameters']['T'].shape) density_comparison(solutes, params, savename=savename, cmap=plt.cm.prism, percent=50, ylims=(0, 150)) ###Output _____no_output_____ ###Markdown Calculate selectivities ###Code def selectivity(residues, params, ratios, savename=None, cmap=plt.cm.prism, dt=0.5, nboot=5, startfit=0, endfit=1000, show_fits=False): colors = np.array([cmap(i) for i in np.linspace(50, 225, ratios.shape[0]).astype(int)]) names = {'MET': 'methanol', 'URE': 'urea', 'GCL': 'ethylene\n glycol', 'ACH': 'acetic\n acid'} fig, ax = plt.subplots(figsize=(8, 4.5)) D = np.zeros([len(residues), nboot]) for i, r in enumerate(residues): realizations = np.moveaxis(params[r]['unclustered_realizations'][2], 0, 1) nchoose = realizations.shape[2] time = np.arange(realizations.shape[0]) * dt for b in tqdm.tqdm(range(nboot)): ndx = np.random.choice(nchoose, size=nchoose, replace=True) msd = np.zeros([realizations.shape[0], nchoose]) for n, nd in enumerate(ndx): msd[:, n] = ts.msd(realizations[:, :, nd, :], 2, progress=False).mean(axis=1) msd_boot = msd.mean(axis=1) # weight by number of observations of each time lag. 
1st time lag should have nsteps - 1 observations w = np.arange(realizations.shape[0] - 1)[::-1][startfit:endfit] m, b_ = np.polyfit(time[startfit:endfit], msd_boot[startfit:endfit], 1, w=w) D[i, b] = m / 2 # <msd> = 2nDt; m = 2nD; D = m / 2n ; n = 1 if show_fits: plt.plot(time[0:endfit], msd_boot[0:endfit]) plt.plot(time[startfit:endfit], time[startfit:endfit] * m + b_, '--', lw=2, color='black') plt.show() ax.set_xticks(np.arange(ratios.shape[0])) labels = [] for i in range(ratios.shape[0]): a, b = ratios[i, :] s = D[a, :] / D[b, :] ax.bar(i, s.mean(), yerr=s.std(), color=colors[i], edgecolor='black') labels.append('%s:\n %s' %(names[residues[a]], names[residues[b]])) ax.set_xticklabels(labels, fontsize=14) #ax.invert_yaxis() ax.set_ylabel('Selectivity', fontsize=14) ax.tick_params(labelsize=14) #fig.legend(handles=[hatch1, hatch2], fontsize=14, loc='lower left', bbox_to_anchor=(0.38, 0.15)) fig.tight_layout() if savename is not None: fig.savefig(savename) return fig save = True if save: savename = '/home/ben/github/LLC_Membranes/Ben_Manuscripts/hdphmm/figures/selectivity2.pdf' else: savename = None startfit = 10 endfit = 50 ratios = np.array([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]) ratios = np.array([[1, 3], [0, 3], [1, 2], [0, 2], [2, 3], [1, 0]]) # in order from highest to lowest selectivity solutes = ['MET', 'GCL', 'URE', 'ACH'] # solutes = ['GCL'] fig = selectivity(solutes, params, ratios, savename=savename, cmap=plt.cm.jet, startfit=startfit, endfit=endfit, show_fits=False, nboot=200) savename = '/home/ben/github/LLC_Membranes/Ben_Manuscripts/hdphmm/figures/selectivity2.pdf' fig.savefig(savename) ###Output _____no_output_____
notebooks/AxisScaling_Part3.ipynb
###Markdown Part 3 of Axis Scaling: Adjusting Scale for Regions

This page is primarily based on the following page at the Circos documentation site:

- [3. Adjusting Scale for Regions](????????????)

That page is found as part number 4 of the ??? part ['Axis Scaling' section](http://circos.ca/documentation/tutorials/quick_start/) of [the larger set of Circos tutorials](http://circos.ca/documentation/tutorials/).

Go back to Part 2 by clicking [here ←](AxisScaling_Part2.ipynb).

----

7 --- Axis Scaling
==================

3. Adjusting Scale for Regions
------------------------------

Before I get into local scale adjustment in the next tutorial, I want to cover a simple way to adjust the scale in a region of an ideogram.

In this example, I have drawn the 0-60 Mb regions of chromosomes 1 and 2, by breaking them up into three regions each. The resulting image has 6 ideograms, 3 per chromosome. Using chromosomes_scale, the scale for the ideograms is adjusted as in the previous example.

```ini
chromosomes       = hs1[a]:0-20;hs2[b]:0-20;hs1[c]:20-40;hs2[d]:20-40;hs1[e]:40-60;hs2[f]:40-60
chromosomes_scale = a:0.5;b:0.5;e:5;f:5
```

Note that ideogram tags (a, b, c, ...) are required because we have multiple ideograms per chromosome and it is not sufficient to use the chromosome name to uniquely specify an ideogram.

For example, if you would like to expand (or contract) the scale on a specific region of an ideogram using global scale adjustment, break the ideogram into multiple regions that demarcate your region of interest and apply a new scale to the region. For example, here's a way to zoom in on the 50-60 Mb region of chr1 by 10x.

```ini
chromosomes       = hs1[a]:0-50;hs1[b]:50-60;hs1[c]:60-)
chromosomes_scale = b:10
```

Note the 60-) syntax in the chromosome range field. This means from 60 to the end of the chromosome, and saves you remembering the exact size of the chromosome. The size of each chromosome is defined in the karyotype file.

----

Generating the plot produced by this example code

The following two cells will generate the plot. The first cell adjusts the current working directory.
###Code %cd ../circos-tutorials-0.67/tutorials/7/3/ %%bash ../../../../circos-0.69-6/bin/circos -conf circos.conf ###Output debuggroup summary 0.32s welcome to circos v0.69-6 31 July 2017 on Perl 5.022000 debuggroup summary 0.33s current working directory /home/jovyan/circos-tutorials-0.67/tutorials/7/3 debuggroup summary 0.33s command ../../../../circos-0.69-6/bin/circos -conf circos.conf debuggroup summary 0.33s loading configuration from file circos.conf debuggroup summary 0.33s found conf file circos.conf debuggroup summary 0.49s debug will appear for these features: output,summary debuggroup summary 0.49s bitmap output image ./circos.png debuggroup summary 0.49s SVG output image ./circos.svg debuggroup summary 0.49s parsing karyotype and organizing ideograms debuggroup summary 0.59s karyotype has 24 chromosomes of total size 3,095,677,436 debuggroup summary 0.59s applying global and local scaling debuggroup summary 0.60s allocating image, colors and brushes debuggroup summary 2.57s drawing 6 ideograms of total size 120,000,000 debuggroup summary 2.57s drawing highlights and ideograms debuggroup output 2.85s generating output debuggroup output 3.56s created PNG image ./circos.png (176 kb) debuggroup output 3.57s created SVG image ./circos.svg (53 kb) ###Markdown View the plot in this page using the following cell. ###Code from IPython.display import Image Image("circos.png") ###Output _____no_output_____
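###Markdown The semicolon-separated `chromosomes` and `chromosomes_scale` strings in circos.conf are easy to get wrong by hand. Purely as an illustration (this helper is not part of Circos or of the tutorial files), the cell below builds both strings from a list of region tuples; the example regions reproduce the 10x zoom shown in the lesson text above.
###Code
# Illustrative helper, not part of the Circos distribution or its tutorials.
regions = [
    ("hs1", "a", 0, 50, 1),
    ("hs1", "b", 50, 60, 10),   # 10x zoom on the 50-60 Mb region
    ("hs1", "c", 60, ")", 1),   # ")" means "to the end of the chromosome"
]

chromosomes = ";".join(f"{chrom}[{tag}]:{start}-{end}"
                       for chrom, tag, start, end, _ in regions)
scales = ";".join(f"{tag}:{scale}"
                  for _, tag, _, _, scale in regions if scale != 1)

print("chromosomes       =", chromosomes)
print("chromosomes_scale =", scales)
###Output _____no_output_____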
notebook/01_naive_execution.ipynb
###Markdown Run pipeline with bashYou will need to install Salmon, fastQC and multiQC before running the commands below. Otherwise, you can create a Conda environment that include the above-mentioned tools, activate it as explained [here](./installation_help.ipynbHow-to-create-a-Conda-environment) and then run the commands on this notebook. If you don't have Conda installed in your system follow [these](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) instructions.> **_Note:_** The instructions below assume you launched the notebook from `your_path/BovReg-Reproducibility/notebook/` Set pipeline folder ###Code # Command to change to the base directory of the repository import os, shutil pwd = os.getcwd() pipeline_folder = pwd + "/../" + "rnaseq-nf" os.chdir(pipeline_folder) ###Output _____no_output_____ ###Markdown Move fastq files and call bash script to rename input files ###Code fq_folder = pipeline_folder + "/data_chicken" if not os.path.exists(fq_folder): os.makedirs(fq_folder) !cp data/ggal/ggal_gut*.fq ./data_chicken os.chdir(fq_folder) ## Calling bash script to rename files !../bin/rename_files.sh ###Output _____no_output_____ ###Markdown Create Salmon index ###Code os.chdir(pipeline_folder) !salmon index --threads 1 -t ./data/ggal/ggal_1_48850000_49020000.Ggal71.500bpflank.fa -i index ###Output _____no_output_____ ###Markdown Create folders to dump results ###Code ## Set results folders r_fastqc = pipeline_folder + "/results_fastqc" r_salmon = pipeline_folder + "/results_salmon" d_multiqc = pipeline_folder + "/multiqc_in" r_multiqc = pipeline_folder + "/results_multiqc" try: shutil.rmtree(r_fastqc) except OSError: pass try: shutil.rmtree(r_salmon) except OSError: pass try: shutil.rmtree(d_multiqc) except OSError: pass try: shutil.rmtree(r_multiqc) except OSError: pass os.makedirs(r_fastqc) os.makedirs(d_multiqc) ###Output _____no_output_____ ###Markdown Run Salmon and remove index ###Code !salmon quant --threads 1 --libType=U -i index -1 ./data_chicken/chicken_gut_1.fq -2 ./data_chicken/chicken_gut_2.fq -o results_salmon !rm -rf index ###Output _____no_output_____ ###Markdown Create fastQC results folder and run it ###Code !fastqc -o results_fastqc -f fastq -q ./data_chicken/*.fq ###Output _____no_output_____ ###Markdown Create folders to move input and results for multiQC ###Code !cp results_fastqc/* multiqc_in !cp -rf results_salmon/* multiqc_in !cp multiqc/* multiqc_in !echo "custom_logo: $PWD/multiqc_in/logo.png" >> multiqc_in/multiqc_config.yaml os.chdir(d_multiqc) ###Output _____no_output_____ ###Markdown Run multiQC ###Code !multiqc -v . --outdir ../results_multiqc/ ###Output _____no_output_____ ###Markdown Remove input data for multiQC and calling firefox to display multiQC report ###Code os.chdir(r_multiqc) shutil.rmtree(d_multiqc) ### Uncomment to show the report on firefox, otherwise go to ### /your_path/BovReg-Reproducibility/rnaseq-nf/results_multiqc and open the file multiqc_report.html ### with your favourite browser # !firefox multiqc_report.html ###Output _____no_output_____
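###Markdown The steps above are driven through `!` shell magics, which keep running even if a command fails. As an alternative sketch only, the cell below wraps one of the same commands in `subprocess.run(..., check=True)` so a non-zero exit status raises immediately; the paths assume the same layout created by the cells above.
###Code
# Sketch: run the Salmon quantification step with explicit error checking.
import subprocess

cmd = [
    "salmon", "quant", "--threads", "1", "--libType=U",
    "-i", "index",
    "-1", "./data_chicken/chicken_gut_1.fq",
    "-2", "./data_chicken/chicken_gut_2.fq",
    "-o", "results_salmon",
]
subprocess.run(cmd, check=True)   # raises CalledProcessError on failure
###Output _____no_output_____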
notebooks/nn_large/vgg-augmentation-long.ipynb
###Markdown Momentum ###Code optimizer = AcceleratedSGD(model.parameters(), 1e-2, k=10, momentum=0.9, weight_decay=1e-5, lambda_=1e-8) loss_fn = nn.NLLLoss() epochs = 90 for epoch in range(epochs): print("Epoch", epoch+1) loss_log = [] train_epoch(loss_log) print(f"Training loss: {np.mean(loss_log):.4f}") optimizer.finish_epoch() val_acc, val_loss = validation(model, valid_loader) print(f"Validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}") print("Epoch", epoch+1, f"Training loss: {np.mean(loss_log):.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}", file=log_file, flush=True ) train_score = validation(model, train_loader) valid_score = validation(model, valid_loader) print("Train:", train_score) print("Valid:", valid_score) print("Train:", train_score, flush=True, file=log_file) print("Valid:", valid_score, flush=True, file=log_file) model_acc = deepcopy(model) optimizer.accelerate() optimizer.store_parameters([model_acc.parameters()]) model_acc.to(device) train_score = validation(model_acc, train_loader) valid_score = validation(model_acc, valid_loader) print("Train:", train_score) print("Valid:", valid_score) print("Train:", train_score, flush=True, file=log_file) print("Valid:", valid_score, flush=True, file=log_file) optimizer.param_groups[0]["lr"] = 1e-3 epochs = 90 for epoch in range(epochs): print("Epoch", epoch+1) loss_log = [] train_epoch(loss_log) print(f"Training loss: {np.mean(loss_log):.4f}") optimizer.finish_epoch() val_acc, val_loss = validation(model, valid_loader) print(f"Validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}") print("Epoch", epoch+1, f"Training loss: {np.mean(loss_log):.4f}, validation accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}", file=log_file, flush=True ) train_score = validation(model, train_loader) valid_score = validation(model, valid_loader) print("Train:", train_score) print("Valid:", valid_score) print("Train:", train_score, flush=True, file=log_file) print("Valid:", valid_score, flush=True, file=log_file) model_acc = deepcopy(model) optimizer.accelerate() optimizer.store_parameters([model_acc.parameters()]) model_acc.to(device) train_score = validation(model_acc, train_loader) valid_score = validation(model_acc, valid_loader) print("Train:", train_score) print("Valid:", valid_score) print("Train:", train_score, flush=True, file=log_file) print("Valid:", valid_score, flush=True, file=log_file) exit ###Output _____no_output_____
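###Markdown The loops above call `train_epoch` and `validation`, which are defined earlier in the notebook and not shown in this excerpt. As a hedged sketch of the shape those calls imply, a `validation(model, loader)` helper returning (accuracy, mean NLL loss) could look like the cell below; the device handling and the assumption that the model outputs log-probabilities are guesses based on the `nn.NLLLoss` used above.
###Code
# Hedged sketch of a validation(model, loader) helper; the real definition in
# the notebook may differ.
import torch

@torch.no_grad()
def validation(model, loader, device="cuda"):
    model.eval()
    loss_fn = torch.nn.NLLLoss(reduction="sum")
    correct, total, loss_sum = 0, 0, 0.0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        out = model(x)                      # assumed to be log-probabilities
        loss_sum += loss_fn(out, y).item()
        correct += (out.argmax(dim=1) == y).sum().item()
        total += y.size(0)
    model.train()
    return correct / total, loss_sum / total
###Output _____no_output_____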
02-python-201/labs/LAB_matplotlib.ipynb
###Markdown Ejemplo 1: representar la función cosenoVamos con el primer ejemplo en el que representaremos dos _arrays_, uno frente a otro, en los ejes *x* e *y* respectivamente. **Notad que para que los gráficos se muestren en el mismo Notebook debemos añadir la directiva especial:** *%matplotlib inline*. ###Code %matplotlib inline # Importamos las librerías import matplotlib import numpy as np import matplotlib.pyplot as plt # Calculamos un array de -2*PI a 2*PI con un paso de 0.1 x = np.arange(-2*np.pi, 2*np.pi, 0.1) # Representamos el array x frente al valor cos(x) plt.plot(x, np.cos(x)) # Añadimos las etiquetas y título...leyenda.etc etc etc etc plt.xlabel('x') plt.ylabel('cos(x)') # Mostramos el resultado final plt.show() ###Output _____no_output_____ ###Markdown Ejemplo 2: Representar las funciones coseno y seno a la vezEn este ejemplo, calcularemos los valores de las funciones seno y coseno para el mismo rango de valores y las representaremos en el mismo gráfico. ###Code %matplotlib inline # Importamos las librerías import matplotlib import numpy as np import matplotlib.pyplot as plt # Calculamos un array de -2*PI a 2*PI con un paso de 0.1 x = np.arange(-2*np.pi, 2*np.pi, 0.1) # Mostramos el resultado con sin y cos plt.plot(x, np.cos(x), 'r+', x, np.sin(x), 'bo') plt.xlabel('x = cos(x)') plt.ylabel('y = sin(x)') # Mostramos el resultado final plt.show() ###Output _____no_output_____ ###Markdown Ejemplo 3: HistogramasMatplotlib dispone de muchos tipos de gráficos implementados, entre ellos los histogramas. En este ejemplo representamos una [función gaussiana](https://es.wikipedia.org/wiki/Funci%C3%B3n_gaussiana). ###Code %matplotlib inline # Importamos las librerías import matplotlib import numpy as np import matplotlib.pyplot as plt # Parámetros de la función gaussiana mu, sigma = 50, 7 # Generamos un array x = mu + sigma * np.random.randn(10000) # La función 'hist' nos calcula la frecuencia y el número de barras. El argumento normed=1 normaliza los valores de # probabilidad ([0,1]), facecolor controla el color del gráfico y alpha el valor de la transparencia de las barras. n, bins, patches = plt.hist(x, 30, density=1, facecolor='dodgerblue', alpha=0.5) plt.xlabel('x') plt.ylabel('Probabilidad') # Situamos el texto con los valores de mu y sigma en el gráfico. plt.text(20, .025, r'$\mu=50,\ \sigma=7$') # Controlamos manualmente el tamaño de los ejes. Los dos primeros valores se corresponden con xmin y xmax y los # siguientes con ymin e ymax: plt.axis([10, 90, 0, 0.07]) # Mostramos una rejilla. plt.grid(True) plt.show() ###Output _____no_output_____ ###Markdown Ejemplo 4: Representación del conjunto de MandrelbrotEl conjunto de Mandelbrot es uno de los conjuntos fractales más estudiados y conocidos. Podéis encontrar más información en línea sobre [el conjunto y los fractales en general](https://es.wikipedia.org/wiki/Conjunto_de_Mandelbrot).El siguiente ejemplo es una adaptación de este [código original](https://scipy-lectures.github.io/intro/numpy/exercises.htmlmandelbrot-set). ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from numpy import newaxis # La función que calculará el conjunto de Mandelbrot. def mandelbrot(N_max, threshold, nx, ny): # Creamos un array con nx elementos entre los valores -2 y 1. x = np.linspace(-2, 1, nx) # Lo mismo, pero en este caso entre -1.5 y 1.5, de ny elementos. y = np.linspace(-1.5, 1.5, ny) # Creamos el plano de números complejos necesario para calcular el conjunto. 
c = x[:,newaxis] + 1j*y[newaxis,:] # Iteración para calcular el valor de un elemento en la sucesión. z = c for j in range(N_max): z = z**2 + c # Finalmente, calculamos si un elemento pertenece o no al conjunto poniendo un límite 'threshold'. conjunto = (abs(z) < threshold) return conjunto conjunto_mandelbrot = mandelbrot(50, 50., 601, 401) # Transponemos los ejes del conjunto de Mandelbrot calculado utilizando la función de numpy 'T' # Utilizamos la función imshow para representar una matriz como una imagen. El argumento cmap significa # 'color map' y es la escala de colores en la que representaremos nuestra imagen. Podéis encontrar muchos # otros mapas en la documentación oficial: http://matplotlib.org/examples/color/colormaps_reference.html plt.imshow(conjunto_mandelbrot.T, cmap='Blues') plt.show() ###Output <ipython-input-16-53c1f8852eea>:20: RuntimeWarning: overflow encountered in square z = z**2 + c <ipython-input-16-53c1f8852eea>:20: RuntimeWarning: invalid value encountered in square z = z**2 + c ###Markdown Ejemplo 5. Manipulación de las imágenesUna imagen puede asimilarse a una matriz multidimensional donde para valores de píxeles (x,y) tenemos calores de color. Matplotlib nos permite leer imágenes, manipularlas y aplicarles distintos mapas de colores a la hora de representarlas. En el siguiente ejemplo, cargaremos una fotografía en formato PNG de Carl Sagan.Créditos de la foto: NASA JPL ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg # Cargamos la imagen de Carl carl = mpimg.imread("Carl_Sagan_Planetary_Society.JPG") # Mostramos la imagen plt.imshow(carl) plt.show() # Podemos modificar la imagen. Imread, importa la imagen # nosotros podemos manipular la imagen basada en los valores numéricos utilizando la escala de grises grises = np.mean(carl, 2) grises plt.imshow(grises, cmap='Spectral') plt.show() plt.imshow(grises, cmap='jet') plt.show() %pylab inline from pylab import imshow imshow(array('nombre-imagen',dtype=uint16),interpolation='nearest') ###Output Populating the interactive namespace from numpy and matplotlib ###Markdown Ejemplo 1: representar la función cosenoVamos con el primer ejemplo en el que representaremos dos _arrays_, uno frente a otro, en los ejes *x* e *y* respectivamente. **Notad que para que los gráficos se muestren en el mismo Notebook debemos añadir la directiva especial:** *%matplotlib inline*. ###Code %matplotlib inline # Importamos las librerías import matplotlib import numpy as np import matplotlib.pyplot as plt # Calculamos un array de -2*PI a 2*PI con un paso de 0.1 x = np.arange(-2*np.pi, 2*np.pi, 0.1) # Representamos el array x frente al valor cos(x) plt.plot(x, np.cos(x)) # Añadimos las etiquetas y título...leyenda.etc etc etc etc plt.xlabel('x') plt.ylabel('cos(x)') # Mostramos el resultado final plt.show() ###Output _____no_output_____ ###Markdown Ejemplo 2: Representar las funciones coseno y seno a la vezEn este ejemplo, calcularemos los valores de las funciones seno y coseno para el mismo rango de valores y las representaremos en el mismo gráfico. 
###Code %matplotlib inline # Importamos las librerías import matplotlib import numpy as np import matplotlib.pyplot as plt # Calculamos un array de -2*PI a 2*PI con un paso de 0.1 x = np.arange(-2*np.pi, 2*np.pi, 0.1) # Mostramos el resultado con sin y cos plt.plot(x, np.cos(x), 'r+', x, np.sin(x), 'bo') plt.xlabel('x = cos(x)') plt.ylabel('y = sin(x)') # Mostramos el resultado final plt.show() ###Output _____no_output_____ ###Markdown Ejemplo 3: HistogramasMatplotlib dispone de muchos tipos de gráficos implementados, entre ellos los histogramas. En este ejemplo representamos una [función gaussiana](https://es.wikipedia.org/wiki/Funci%C3%B3n_gaussiana). ###Code %matplotlib inline # Importamos las librerías import matplotlib import numpy as np import matplotlib.pyplot as plt # Parámetros de la función gaussiana mu, sigma = 50, 7 # Generamos un array x = mu + sigma * np.random.randn(10000) # La función 'hist' nos calcula la frecuencia y el número de barras. El argumento normed=1 normaliza los valores de # probabilidad ([0,1]), facecolor controla el color del gráfico y alpha el valor de la transparencia de las barras. n, bins, patches = plt.hist(x, 30, density=1, facecolor='dodgerblue', alpha=0.5) plt.xlabel('x') plt.ylabel('Probabilidad') # Situamos el texto con los valores de mu y sigma en el gráfico. plt.text(20, .025, r'$\mu=50,\ \sigma=7$') # Controlamos manualmente el tamaño de los ejes. Los dos primeros valores se corresponden con xmin y xmax y los # siguientes con ymin e ymax: plt.axis([10, 90, 0, 0.07]) # Mostramos una rejilla. plt.grid(True) plt.show() ###Output _____no_output_____ ###Markdown Ejemplo 4: Representación del conjunto de MandrelbrotEl conjunto de Mandelbrot es uno de los conjuntos fractales más estudiados y conocidos. Podéis encontrar más información en línea sobre [el conjunto y los fractales en general](https://es.wikipedia.org/wiki/Conjunto_de_Mandelbrot).El siguiente ejemplo es una adaptación de este [código original](https://scipy-lectures.github.io/intro/numpy/exercises.htmlmandelbrot-set). ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from numpy import newaxis # La función que calculará el conjunto de Mandelbrot. def mandelbrot(N_max, threshold, nx, ny): # Creamos un array con nx elementos entre los valores -2 y 1. x = np.linspace(-2, 1, nx) # Lo mismo, pero en este caso entre -1.5 y 1.5, de ny elementos. y = np.linspace(-1.5, 1.5, ny) # Creamos el plano de números complejos necesario para calcular el conjunto. c = x[:,newaxis] + 1j*y[newaxis,:] # Iteración para calcular el valor de un elemento en la sucesión. z = c for j in range(N_max): z = z**2 + c # Finalmente, calculamos si un elemento pertenece o no al conjunto poniendo un límite 'threshold'. conjunto = (abs(z) < threshold) return conjunto conjunto_mandelbrot = mandelbrot(50, 50., 601, 401) # Transponemos los ejes del conjunto de Mandelbrot calculado utilizando la función de numpy 'T' # Utilizamos la función imshow para representar una matriz como una imagen. El argumento cmap significa # 'color map' y es la escala de colores en la que representaremos nuestra imagen. 
Podéis encontrar muchos # otros mapas en la documentación oficial: http://matplotlib.org/examples/color/colormaps_reference.html plt.imshow(conjunto_mandelbrot.T, cmap='Blues') plt.show() ###Output <ipython-input-16-53c1f8852eea>:20: RuntimeWarning: overflow encountered in square z = z**2 + c <ipython-input-16-53c1f8852eea>:20: RuntimeWarning: invalid value encountered in square z = z**2 + c ###Markdown Ejemplo 5. Manipulación de las imágenesUna imagen puede asimilarse a una matriz multidimensional donde para valores de píxeles (x,y) tenemos calores de color. Matplotlib nos permite leer imágenes, manipularlas y aplicarles distintos mapas de colores a la hora de representarlas. En el siguiente ejemplo, cargaremos una fotografía en formato PNG de Carl Sagan.Créditos de la foto: NASA JPL ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg # Cargamos la imagen de Carl carl = mpimg.imread("Carl_Sagan_Planetary_Society.JPG") # Mostramos la imagen plt.imshow(carl) plt.show() # Podemos modificar la imagen. Imread, importa la imagen # nosotros podemos manipular la imagen basada en los valores numéricos utilizando la escala de grises grises = np.mean(carl, 2) grises plt.imshow(grises, cmap='Spectral') plt.show() plt.imshow(grises, cmap='jet') plt.show() %pylab inline from pylab import imshow imshow(array('nombre-imagen',dtype=uint16),interpolation='nearest') ###Output Populating the interactive namespace from numpy and matplotlib
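###Markdown The final cell above passes a placeholder file name directly to `array(...)`, which builds an array from the string rather than reading an image, so nothing useful is displayed. A corrected sketch is shown below; it reuses the Carl Sagan photograph already loaded in this lab, and the choice of that file (rather than a 16-bit image) is only for illustration.
###Code
# Corrected sketch: read the image from disk before displaying it.
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

img = mpimg.imread("Carl_Sagan_Planetary_Society.JPG")
plt.imshow(img, interpolation="nearest")
plt.axis("off")
plt.show()
###Output _____no_output_____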
assets/resources/machine-learning/CNN-CIFAR10/cifar10 dropout test notebook-s.ipynb
###Markdown CIFAR-10 Dropout test using CNN Install Theano ###Code pip install -U git+https://github.com/Theano/Theano.git#egg=Theano ###Output Collecting Theano from git+https://github.com/Theano/Theano.git#egg=Theano Cloning https://github.com/Theano/Theano.git to /tmp/pip-install-orl7vwm3/Theano Running command git clone -q https://github.com/Theano/Theano.git /tmp/pip-install-orl7vwm3/Theano Requirement already satisfied, skipping upgrade: numpy>=1.9.1 in /usr/local/lib/python3.6/dist-packages (from Theano) (1.16.4) Requirement already satisfied, skipping upgrade: scipy>=0.14 in /usr/local/lib/python3.6/dist-packages (from Theano) (1.3.1) Requirement already satisfied, skipping upgrade: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from Theano) (1.12.0) Building wheels for collected packages: Theano Building wheel for Theano (setup.py) ... [?25l[?25hdone Created wheel for Theano: filename=Theano-1.0.4+12.g93e8180bf-cp36-none-any.whl size=2667476 sha256=36434006d2fc766e6cfec28a2fc6a4b6f4fc22d461b2f156815566cf63e7bdba Stored in directory: /tmp/pip-ephem-wheel-cache-sb5lk15w/wheels/14/72/17/35fc1366380e8e05fc8ed5d44e24a2da28ef975aa4be6aaa17 Successfully built Theano Installing collected packages: Theano Found existing installation: Theano 1.0.4 Uninstalling Theano-1.0.4: Successfully uninstalled Theano-1.0.4 Successfully installed Theano-1.0.4+12.g93e8180bf ###Markdown Import Package ###Code import numpy as np import os os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=cuda*,floatX=float32" import theano import keras from keras.datasets import cifar10 from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.regularizers import l2 from keras.utils import np_utils from keras.preprocessing.image import ImageDataGenerator import matplotlib.pyplot as plt %matplotlib inline from pylab import rcParams rcParams['figure.figsize'] = 20, 20 ###Output _____no_output_____ ###Markdown Train on CIFAR-10 dataset Load CIFAR 10 dataset.CIFAR-10은 총 10개의 레이블로 이루어진 6만장의 이미지를 가지고 있으며 5만장은 트레이닝, 1만장은 테스트 용도로 쓰입니다. 해당 데이터셋은 http://www.cs.toronto.edu/~kriz/cifar.html 에서 다운로드 받으실 수 있습니다. ###Code (X_train, y_train), (X_test, y_test) = cifar10.load_data() print ("Training data:") print ("Number of examples: ", X_train.shape[0]) print ("Number of channels:",X_train.shape[3]) print ("Image size:", X_train.shape[1], X_train.shape[2]) print print ("Test data:") print ("Number of examples:", X_test.shape[0]) print ("Number of channels:", X_test.shape[3]) print ("Image size:", X_test.shape[1], X_test.shape[2]) print(X_train.shape, X_train.dtype) ###Output Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz 170500096/170498071 [==============================] - 2s 0us/step Training data: Number of examples: 50000 Number of channels: 3 Image size: 32 32 Test data: Number of examples: 10000 Number of channels: 3 Image size: 32 32 (50000, 32, 32, 3) uint8 ###Markdown Visualize some images from CIFAR-10 dataset. CIFAR-10 데이터셋은 다음의 10가지 클래스를 담고 있습니다. 
airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck ###Code plt.subplot(141) plt.imshow(X_train[0], interpolation="bicubic") plt.grid(False) plt.subplot(142) plt.imshow(X_train[4], interpolation="bicubic") plt.grid(False) plt.subplot(143) plt.imshow(X_train[8], interpolation="bicubic") plt.grid(False) plt.subplot(144) plt.imshow(X_train[12], interpolation="bicubic") plt.grid(False) plt.show() ###Output _____no_output_____ ###Markdown Normalize the data. ###Code print ("mean before normalization:", np.mean(X_train)) print ("std before normalization:", np.std(X_train)) mean=[0,0,0] std=[0,0,0] newX_train = np.ones(X_train.shape) newX_test = np.ones(X_test.shape) for i in range(3): mean[i] = np.mean(X_train[:,:,:,i]) std[i] = np.std(X_train[:,:,:,i]) for i in range(3): newX_train[:,:,:,i] = X_train[:,:,:,i] - mean[i] newX_train[:,:,:,i] = newX_train[:,:,:,i] / std[i] newX_test[:,:,:,i] = X_test[:,:,:,i] - mean[i] newX_test[:,:,:,i] = newX_test[:,:,:,i] / std[i] X_train = newX_train X_test = newX_test print ("mean after normalization:", np.mean(X_train)) print ("std after normalization:", np.std(X_train)) print(X_train.max()) ###Output mean before normalization: 120.70756512369792 std before normalization: 64.1500758911213 mean after normalization: 4.91799193961621e-17 std after normalization: 0.9999999999999996 2.126789409516928 ###Markdown Specify Training Parameters ###Code batchSize = 512 #-- Training Batch Size num_classes = 10 #-- Number of classes in CIFAR-10 dataset num_epochs = 50 #-- Number of epochs for training learningRate= 0.001 #-- Learning rate for the network lr_weight_decay = 0.95 #-- Learning weight decay. Reduce the learn rate by 0.95 after epoch img_rows = 32 #-- input image dimensions img_cols = 32 Y_train = np_utils.to_categorical(y_train, num_classes) Y_test = np_utils.to_categorical(y_test, num_classes) ###Output _____no_output_____ ###Markdown VGGnet-10 ###Code from keras import initializers import copy result = {} y = {} loss = [] acc = [] dropouts = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] for dropout in dropouts: print ("Dropout: ", (dropout)) model = Sequential() #-- layer 1 model.add(Conv2D(64, 3, 3, border_mode='same', input_shape=(img_rows, img_cols,3))) model.add(Dropout(dropout)) model.add(Conv2D(64, 3, 3, activation='relu',border_mode='same')) model.add(Dropout(dropout)) model.add(MaxPooling2D(pool_size=(2, 2))) ##--layer 2 model.add(Conv2D(128, 3, 3, activation='relu',border_mode='same')) model.add(Dropout(dropout)) model.add(MaxPooling2D(pool_size=(2, 2))) ##--layer 3 model.add(Conv2D(256, 3, 3, activation='relu',border_mode='same')) model.add(Dropout(dropout)) model.add(MaxPooling2D(pool_size=(2, 2))) ##-- layer 4 model.add(Flatten()) model.add(Dense(512, activation='relu')) #-- layer 5 model.add(Dense(512, activation='relu')) #-- layer 6 model.add(Dense(num_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) model_cce = model.fit(X_train, Y_train, batch_size=batchSize, nb_epoch=num_epochs, verbose=1, shuffle=True, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) y[dropout] = model.predict(X_test) print('Test score:', score[0]) print('Test accuracy:', score[1]) result[dropout] = copy.deepcopy(model_cce.history) loss.append(score[0]) acc.append(score[1]) ###Output Dropout: 0.0 ###Markdown example Plotting Results. 
###Code import numpy as np import matplotlib.pyplot as plt width = 0.1 plt.bar(dropouts, acc, width, align='center') plt.tick_params(axis='both', which='major', labelsize=35) plt.tick_params(axis='both', which='minor', labelsize=35) plt.ylabel('Accuracy',size = 30) plt.xlabel('Dropout', size = 30) plt.show() import numpy as np import matplotlib.pyplot as plt width = 0.1 plt.bar(dropouts, loss, width, align='center',color = 'green') plt.tick_params(axis='both', which='major', labelsize=35) plt.tick_params(axis='both', which='minor', labelsize=35) plt.ylabel('Loss',size = 30) plt.xlabel('Dropout', size = 30) plt.show() ###Output _____no_output_____
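###Markdown The per-channel normalization earlier in this notebook is written out with explicit loops over the three channels. As a small illustrative refactor (not a change to the experiment), the cell below wraps the same standardization in a function so the training-set statistics can be reused on any other split.
###Code
# Illustrative refactor of the per-channel normalization used above.
import numpy as np

def normalize_channels(train, *others):
    """Standardize each RGB channel using statistics computed on `train` only."""
    mean = train.mean(axis=(0, 1, 2))
    std = train.std(axis=(0, 1, 2))
    norm = lambda x: (x - mean) / std
    return (norm(train),) + tuple(norm(x) for x in others)

# Example usage (after loading the raw uint8 arrays):
# X_train, X_test = normalize_channels(X_train.astype("float32"),
#                                      X_test.astype("float32"))
###Output _____no_output_____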
SVD_intro.ipynb
###Markdown About SVD* If the dimensions of A are m x n: * U is an m x m matrix of Left Singular Vectors * S is an m x n rectangular diagonal matrix of Singular Values arranged in decreasing order * Think of singular values as feature importance values * V is an n x n matrix of Right Singular Vectors * decomposition: `A = U*S*VT` * The decomposition allows us to express our original matrix as a linear combination of low-rank matrices. * SVD in Dimensional Reduction * Using SVD, we are able to represent our large matrix A by 3 smaller matrices U, S and V * This is helpful in large computations. We can obtain a k-rank approximation of A, by selecting the first k singular values and truncate the 3 matrices accordingly * The Rank of Matrix - The number of INDEPENDENT columns in a matrix, and none of them can be expressed as a linear function of one or more of other columns. * The rank of a matrix can be thought of as a representative of the amount of unique information represented stored in the matrix. Higher the rank, higher the information.* The code below is showing 3 types of python SVD* Reference: https://www.analyticsvidhya.com/blog/2019/08/5-applications-singular-value-decomposition-svd-data-science/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+AnalyticsVidhya+%28Analytics+Vidhya%29 Type 1 Python SVD* This method allows you to get complete S, U, V ###Code import numpy as np from numpy.linalg import svd # this matrix has rank=2, since col3 = col1+co2, ## but col1 and col2 are independent from each other A = np.array([[1,2,3], [4,5,6], [5,7,9]]) U, S, VT = svd(A) print("Left Singular Vectors:") print(U) print("Singular Values:") print(np.diag(S)) print("Right Singular Vectors:") print(VT) # Return the original matrix A # @ is used for matrix multiplication in Py3, use np.matmul with Py2 print(U @ np.diag(S) @ VT) ###Output [[1. 2. 3.] [4. 5. 6.] [5. 7. 9.]] ###Markdown Type 2 Pyhton SVD* Sklearn Truncated SVD - This is used for dimensional reduction directly ###Code import numpy as np from sklearn.decomposition import TruncatedSVD A = np.array([[1,2,3], [4,5,6], [5,7,9]]) print("Original Matrix:") A svd = TruncatedSVD(n_components = 2) # reduce to 2 features A_transf = svd.fit_transform(A) print("Singular values:") print(svd.singular_values_) print() print("Transformed Matrix after reducing to 2 features:") print(A_transf) ###Output Singular values: [15.66332312 0.81259398] Transformed Matrix after reducing to 2 features: [[ 3.68732795 0.6353051 ] [ 8.76164389 -0.48331806] [12.44897184 0.15198704]] ###Markdown Type 3 Python SVD* Randomized SVD * It returns S, U, V too * It returns the same results as Truncatsed SVD, but faster * Truncated SVD uses an exact solver ARPACK, Randomized SVD uses approximation techniques. * ARPACK, the ARnoldi PACKage, is a numericalsoftware library written in FORTRAN 77 for solving large scale eigenvalue problems in the matrix-free fashion ###Code import numpy as np from sklearn.utils.extmath import randomized_svd A = np.array([[1,2,3], [4,5,6], [5,7,9]]) u, s, vt = randomized_svd(A, n_components = 2) # reduce to 2 features print("Left Singular Vectors:") print(u) print("Singular Values:") print(np.diag(s)) print("Right Singular Vectors:") print(vt) # Return the reduced matrix # @ is used for matrix multiplication in Py3, use np.matmul with Py2 print('Reduced matrix:') print(u @ np.diag(s) @ vt) ###Output Reduced matrix: [[1. 2. 3.] [4. 5. 6.] [5. 7. 9.]]
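###Markdown The k-rank approximation described above (keep the first k singular values and truncate U, S and V accordingly) can be checked directly on the same 3x3 matrix used in this notebook; since its rank is 2, the rank-2 approximation reconstructs it almost exactly. The cell below is a sketch of that check.
###Code
# Rank-k approximation of the rank-2 example matrix used above.
import numpy as np
from numpy.linalg import svd

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [5, 7, 9]], dtype=float)

U, S, VT = svd(A)
k = 2
A_k = U[:, :k] @ np.diag(S[:k]) @ VT[:k, :]

print("rank-2 reconstruction:")
print(A_k.round(6))
print("max abs error:", np.abs(A - A_k).max())
###Output _____no_output_____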
notebooks/qu_img_prep.ipynb
###Markdown ----Qu implementation second half====The steps here are somewhat straightforward:- We take the output of a network, get the control points from it- We generate a voronoi pattern from this, separating the image into sections where we are positive there is only one nucleus.- We use k-means clustering for all pixels, using distance data from the nucleus and using color data to arrive at a segmentation that envelops the nucleus. ###Code import os import cv2 import numpy as np from tqdm import tqdm from glob import glob from skimage.filters.rank import entropy from skimage.morphology import disk from scipy.ndimage import binary_fill_holes from mask_prediction import start_over as qu from mask_prediction import unet_semantics as model_setup print('Imports Succesful') def watershed(img, dist_thresh_scale=.4): """ This algorithm performs a form of watershed operation. After some morphological operations, a distance transform with a threshold will separate cleanly all blobs that would have been too close to do a contour analysis. :param img: The image to be watershedded. :param dist_thresh_scale: Ratio of where to put the threshold of the watershedding. :return: A sure foreground image, alongside an unsure image. The highlighted pixels in unsure could belong to the foreground or the background. """ kernel = np.ones((3,3), np.uint8) opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel, iterations=1) sure_bg = cv2.dilate(opening, kernel, iterations=1) dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5) _, sure_fg = cv2.threshold(dist_transform, dist_thresh_scale*dist_transform.max(), 255, 0) sure_fg = np.uint8(sure_fg) unknown = cv2.subtract(sure_bg, sure_fg) return sure_fg, unknown def gen_genfolder(address): os.makedirs(os.path.dirname(address + '\\Generated_set\\Input\\1\\'), exist_ok=True) os.makedirs(os.path.dirname(address + '\\Generated_set\\Output\\'), exist_ok=True) os.makedirs(os.path.dirname(address + '\\Generated_set\\EM_overlay\\'), exist_ok=True) os.makedirs(os.path.dirname(address + '\\Generated_set\\Mask_overlaps\\'), exist_ok=True) os.makedirs(os.path.dirname(address + '\\Generated_set\\Masks\\'), exist_ok=True) os.makedirs(os.path.dirname(address + '\\Generated_backups\\'), exist_ok=True) return True em_folder = 'X:\\BEP_data\\Data_External\\RL012\\EM\\Collected' #Folder containing the EM datasets ho_folder = 'X:\\BEP_data\\Data_External\\RL012\\Hoechst\\Collected_raw' #Folder containing the Hoechst datasets mask_folder = 'X:\\BEP_data\\Data_External\\RL012\\Manual_Masks' #Folder containing masks that will be compared to in the mask_overlap folder input_folder = 'X:\\BEP_data\\Data_Internal\\Qu_Iteration\\Predict_set\\Output' #Input folder that houses images to get nuclei positions from. gen_folder = 'X:\\BEP_data\\Data_Internal\\Gen_Masks' #Folder that will be populated with the results assert gen_genfolder(gen_folder) IMG_HEIGHT = 1024 IMG_WIDTH = 1024 nucl_rad = 130 run_name = 'pancreas_130_last' backup_path = gen_folder + '\\Generated_backups' #File containing data structure export_folder = gen_folder + '\\Generated_set' train_folder = [] #Because this notebook does not use Machine Learning, the training and testing folders are not populated. 
test_folder = [] nr_clusters = 4 fill_holes = True data_paths = (train_folder, test_folder, em_folder, ho_folder, mask_folder) ###Output _____no_output_____ ###Markdown The parameters are set, time to import the masks, and get a list of nuclei positions: ###Code mask_list = glob(input_folder + '\\*.png') str_list = [x.split('\\')[-1] for x in mask_list] nuclei_dict = {} """ For every mask in the mask list, the mask is thresholded, watershedded and its countours are analyzed to get at an accurate list of nuclei positions. TODO: The watershedding might be overkill. """ for mask in mask_list: img = cv2.imread(mask, cv2.IMREAD_GRAYSCALE) img_str = mask.split('\\')[-1] _, img_thresh = cv2.threshold(img, int(255*.7), 255, cv2.THRESH_BINARY) img_wet, unknown = watershed(img_thresh) cnts, _ = cv2.findContours(img_wet, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) ncls_pts = [] for cnt in cnts: if cv2.contourArea(cnt) >= 1: M = cv2.moments(cnt) coords = [int(M['m10']/M['m00']), int(M['m01']/M['m00'])] ncls_pts.append(coords) nuclei_dict[img_str] = ncls_pts ###Output _____no_output_____ ###Markdown So now we have a list for every image that we are processing of where the network thought the nuclei are. Now comes the real work, using the positions to generate a voronoi pattern.This pattern will break up the image into segments that contain exactly one nucleus. In the next piece of code, the pattern is calculated,and the data that will be used in k-means clustering is sandwiched. This data will consist of the EM data, the Hoechst data and a distance map based on the average diameter of the nuclei. ###Code """ In order to facilitate the distance threshold later, a distance limit map is made, filled with the square of the diameter of an average nucleus. Mesh grids are also generated to aid many other operations down the line. One prominent is the rescaling of 2d arrays into 1d arrays. Having a mesh grid in that rescaling will keep track of where the pixel belongs in the original image. """ num_range = np.arange(0, 1024, 1, dtype=np.int32) dist_limit = nucl_rad * nucl_rad dist_limit_map = np.ones((IMG_WIDTH, IMG_HEIGHT), np.int32) * dist_limit x_meshgrid, y_meshgrid = np.meshgrid(num_range, num_range) for image in os.listdir(export_folder + '\\Output'): os.remove(export_folder + '\\Output\\' + image) for key in nuclei_dict: print('Currently doing {}'.format(key)) """ This block of code generates the lines for a Voronoi pattern. The partitions of which will be used later. """ div2d = cv2.Subdiv2D() div2d.initDelaunay((0,0,IMG_WIDTH, IMG_HEIGHT)) div2d.insert(nuclei_dict[key]) vor_list, pnts = div2d.getVoronoiFacetList([]) """ This block of code generates a distance map for the current image. This is done here, because there are not any freely avaliable distance map algorithms to take advantage of. """ dist_map_sq_tot = dist_limit_map for pnt in nuclei_dict[key]: x_meshgrid_s = np.abs(x_meshgrid - pnt[0]) y_meshgrid_s = np.abs(y_meshgrid - pnt[1]) x_meshgrid_s = np.square(x_meshgrid_s) y_meshgrid_s = np.square(y_meshgrid_s) dist_map_sq = x_meshgrid_s + y_meshgrid_s dist_map_sq = np.minimum(dist_map_sq, dist_limit_map) dist_map_sq_tot = np.minimum(dist_map_sq, dist_map_sq_tot) dist_map = np.sqrt(dist_map_sq_tot)/nucl_rad dist_map_uint = np.array(dist_map*255, np.uint8) """ The next block of code generates various filters from the EM data, and imports the HO data as well. 
""" em_img = cv2.imread(em_folder + '\\' + key, cv2.IMREAD_GRAYSCALE) em_bil_img = cv2.bilateralFilter(em_img, 7 , 75, 75) em_gauss_img = cv2.GaussianBlur(em_img, (3,3), 3) ho_img = cv2.imread(ho_folder + '\\' + key, cv2.IMREAD_GRAYSCALE) lap_img = cv2.Laplacian(em_gauss_img, cv2.CV_8U) lap_img = cv2.normalize(lap_img,None, 255, 0, cv2.NORM_MINMAX) em_entr_img = entropy(em_gauss_img, disk(7)) em_entr_img = (em_entr_img*255).astype(np.uint8) em_entr_img = cv2.normalize(em_entr_img, None, 255, 0, cv2.NORM_MINMAX) em_sobel_x_img = cv2.Sobel(em_gauss_img, cv2.CV_16S, 1, 0, ksize=3, scale=1, delta=0, borderType=cv2.BORDER_DEFAULT) em_sobel_y_img = cv2.Sobel(em_gauss_img, cv2.CV_16S, 0, 1, ksize=3, scale=1, delta=0, borderType=cv2.BORDER_DEFAULT) em_sobel_x_abs_img = cv2.convertScaleAbs(em_sobel_x_img) em_sobel_y_abs_img = cv2.convertScaleAbs(em_sobel_y_img) em_sobel_img = cv2.addWeighted(em_sobel_x_abs_img, .5, em_sobel_y_abs_img, .5, 0) """ Now that all the filters have been generated for the current image, they are assembled into a giant sandwich. The contents of the sandwich can be changed from run to run, and it is made as accessible as possible. Just make sure to adjust the second number in the reshape function to the amount of filters that are put into the sandwich. The x_meshgrid and the y_meshgrid do need to stay in the sandwich, they allow for selection between different pixels belonging to different voronoi partitions. """ sandwich = np.dstack((y_meshgrid, x_meshgrid, dist_map_uint, em_gauss_img, em_sobel_img, ho_img)) sandwich_r = np.reshape(sandwich, (-1, 6)) label_map = np.zeros((IMG_WIDTH, IMG_HEIGHT), dtype=np.uint8) em_show = em_img for facet in tqdm(vor_list): facet_uint = np.array(facet, np.int32) mask = np.zeros((IMG_WIDTH, IMG_HEIGHT), dtype=np.uint8) mask = cv2.drawContours(mask, [facet_uint], -1, 255, -1, cv2.LINE_8) em_show = cv2.drawContours(em_show, [facet_uint], -1, 255, 2) mask_bool = mask == 255 mask_bool_r = np.reshape(mask_bool, -1) flist = sandwich_r[mask_bool_r] labels = qu.color_k_means(flist, cluster_nr=nr_clusters)*(int(255/nr_clusters)) label_map += labels label_floodfill = qu.get_floodfill(label_map, nuclei_dict[key], margin=2) img_EM_clustered_floodfill = np.where(label_floodfill == 0, 255, 0) img_EM_clustered_floodfill = (img_EM_clustered_floodfill).astype(np.uint8) if fill_holes: img_EM_clustered_floodfill = binary_fill_holes(img_EM_clustered_floodfill/255) img_EM_clustered_floodfill = img_EM_clustered_floodfill.astype(np.uint8)*255 cv2.imwrite(export_folder + '\\Output\\' + key, img_EM_clustered_floodfill) model_setup.backup_data(data_paths, '*.png', run_name, export_folder, backup_path, img_strs=str_list) print('All done!') ###Output Currently doing 10_3_1_3.png
doc/jupyter_execute/examples/cicd/sig-mlops-jenkins-classic/servers/torchserver/test/sklearn_iris.ipynb
###Markdown Scikit-Learn Server ###Code import os import joblib import numpy as np from sklearn import datasets from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline def main(): clf = LogisticRegression() p = Pipeline([("clf", clf)]) print("Training model...") p.fit(X, y) print("Model trained!") filename_p = "model.joblib" print("Saving model in %s" % filename_p) joblib.dump(p, filename_p) print("Model saved!") if __name__ == "__main__": print("Loading iris data set...") iris = datasets.load_iris() X, y = iris.data, iris.target print("Dataset loaded!") main() ###Output Loading iris data set... Dataset loaded! Training model... Model trained! Saving model in model.joblib Model saved! ###Markdown Wrap model using s2i REST test ###Code !cd .. && make build_rest !docker run --rm -d --name "sklearnserver" -p 5000:5000 -e PREDICTIVE_UNIT_PARAMETERS='[{"type":"STRING","name":"model_uri","value":"file:///model"}]' -v ${PWD}:/model seldonio/sklearnserver_rest:0.1 ###Output 85ebfc6c41ef145b578077809af81a23ecb6c7ffe261645b098466d6fcda6ecb ###Markdown Send some random features that conform to the contract ###Code !seldon-core-tester contract.json 0.0.0.0 5000 -p !docker rm sklearnserver --force !docker run --rm -d --name "sklearnserver" -p 5000:5000 -e PREDICTIVE_UNIT_PARAMETERS='[{"type":"STRING","name":"method","value":"predict"},{"type":"STRING","name":"model_uri","value":"file:///model"}]' -v ${PWD}:/model seldonio/sklearnserver_rest:0.1 !seldon-core-tester contract.json 0.0.0.0 5000 -p !docker rm sklearnserver --force ###Output sklearnserver ###Markdown grpc test ###Code !cd .. && make build_grpc !docker run --rm -d --name "sklearnserver" -p 5000:5000 -e PREDICTIVE_UNIT_PARAMETERS='[{"type":"STRING","name":"model_uri","value":"file:///model"}]' -v ${PWD}:/model seldonio/sklearnserver_grpc:0.1 ###Output 9d0218b348e186596717736035bf67fc75f91ec0bdf8152b9d1ad9734d842d54 ###Markdown Test using NDArray payload ###Code !seldon-core-tester contract.json 0.0.0.0 5000 -p --grpc ###Output ---------------------------------------- SENDING NEW REQUEST: [[6.538 4.217 6.519 0.217]] RECEIVED RESPONSE: meta { } data { names: "t:0" names: "t:1" names: "t:2" ndarray { values { list_value { values { number_value: 0.003966041860793068 } values { number_value: 0.8586797745038719 } values { number_value: 0.13735418363533516 } } } } } ###Markdown Test using Tensor payload ###Code !seldon-core-tester contract.json 0.0.0.0 5000 -p --grpc --tensor !docker rm sklearnserver --force def x(a=None, b=2): print(a, b) x(b=3, a=1) ###Output 1 3
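###Markdown Independently of the s2i/Docker wrapping above, the persisted `model.joblib` pipeline can be loaded back and queried locally as a quick sanity check. This is only a sketch and assumes the file sits in the current working directory, as written by the training cell. ###Code
import joblib
import numpy as np
from sklearn import datasets

# Reload the persisted pipeline and score a couple of iris rows.
p = joblib.load("model.joblib")
X, y = datasets.load_iris(return_X_y=True)

print(p.predict(X[:2]))
print(np.round(p.predict_proba(X[:2]), 3))
###Output _____no_output_____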
docs/nusa-info/es/modeler.ipynb
###Markdown Modeler Ejemplo 1. Placa plana ###Code from nusa.mesh import Modeler m = Modeler() m.add_rectangle((0,0),(1,1), esize=0.05) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Ejemplo 2. Placa con agujero ###Code m = Modeler() a = m.add_rectangle((0,0),(1,1), esize=0.1) b = m.add_circle((0.5,0.5), 0.125, esize=0.02) m.substract_surfaces(a,b) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Ejemplo 3. Placa con grieta ###Code m = Modeler() points = [ (0,0), (1,0), (1,0.45), (0.9,0.5), (1,0.55), (1,1), (0,1) ] a = m.add_poly(*points, esize=0.08) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Placa con muesca ###Code m = Modeler() g = m.geom # Para acceder a la clase SimpleGMSH p1 = g.add_point((0,0)) p2 = g.add_point((2,0)) p3 = g.add_point((2,1)) p4 = g.add_point((1.3,1), esize=0.03) p5 = g.add_point((1,1)) p6 = g.add_point((0.7,1), esize=0.03) p7 = g.add_point((0,1)) L1 = g.add_line(p1,p2) L2 = g.add_line(p2,p3) L3 = g.add_line(p3,p4) L4 = g.add_circle(p5,p6,p4) L5 = g.add_line(p6,p7) L6 = g.add_line(p7,p1) loop1 = g.add_line_loop(L1,L2,L3,"-"+L4,L5,L6) g.add_plane_surface(loop1) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Placa compuesta ###Code m = Modeler() g = m.geom # Para acceder a la clase SimpleGMSH p1 = g.add_point((0,0)) p2 = g.add_point((1,0)) p3 = g.add_point((2,0)) p4 = g.add_point((2,1)) p5 = g.add_point((3,1)) p6 = g.add_point((3,2)) p7 = g.add_point((0,2)) p8 = g.add_point((0.7,1.4)) p9 = g.add_point((0.7,1.7), esize=0.05) L1 = g.add_line(p1,p2) L2 = g.add_circle(p3,p2,p4) L3 = g.add_line(p4,p5) L4 = g.add_line(p5,p6) L5 = g.add_line(p6,p7) L6 = g.add_line(p7,p1) L7 = g.add_circle(p8,p9) loop1 = g.add_line_loop(L1,L2,L3,L4,L5,L6) # boundary loop2 = g.add_line_loop(L7)# hole g.add_plane_surface(loop1,loop2) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Modeler Ejemplo 1. Placa plana ###Code %matplotlib inline from nusa.mesh import Modeler m = Modeler() m.add_rectangle((0,0),(1,1), esize=0.1) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Ejemplo 2. Placa con agujero ###Code m = Modeler() a = m.add_rectangle((0,0),(1,1), esize=0.1) b = m.add_circle((0.5,0.5), 0.125, esize=0.02) m.substract_surfaces(a,b) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Ejemplo 3. 
Placa con grieta ###Code m = Modeler() points = [ (0,0), (1,0), (1,0.45), (0.9,0.5), (1,0.55), (1,1), (0,1) ] a = m.add_poly(*points, esize=0.035) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Placa con muesca ###Code m = Modeler() g = m.geom # Para acceder a la clase SimpleGMSH p1 = g.add_point((0,0)) p2 = g.add_point((2,0)) p3 = g.add_point((2,1)) p4 = g.add_point((1.3,1), esize=0.01) p5 = g.add_point((1,1)) p6 = g.add_point((0.7,1), esize=0.01) p7 = g.add_point((0,1)) L1 = g.add_line(p1,p2) L2 = g.add_line(p2,p3) L3 = g.add_line(p3,p4) L4 = g.add_circle(p5,p6,p4) L5 = g.add_line(p6,p7) L6 = g.add_line(p7,p1) loop1 = g.add_line_loop(L1,L2,L3,"-"+L4,L5,L6) g.add_plane_surface(loop1) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____ ###Markdown Placa compuesta ###Code m = Modeler() g = m.geom # Para acceder a la clase SimpleGMSH p1 = g.add_point((0,0)) p2 = g.add_point((1,0)) p3 = g.add_point((2,0)) p4 = g.add_point((2,1)) p5 = g.add_point((3,1)) p6 = g.add_point((3,2)) p7 = g.add_point((0,2)) p8 = g.add_point((0.7,1.4)) p9 = g.add_point((0.7,1.7), esize=0.05) L1 = g.add_line(p1,p2) L2 = g.add_circle(p3,p2,p4) L3 = g.add_line(p4,p5) L4 = g.add_line(p5,p6) L5 = g.add_line(p6,p7) L6 = g.add_line(p7,p1) L7 = g.add_circle(p8,p9) loop1 = g.add_line_loop(L1,L2,L3,L4,L5,L6) # boundary loop2 = g.add_line_loop(L7)# hole g.add_plane_surface(loop1,loop2) m.generate_mesh() m.plot_mesh() ###Output _____no_output_____
Accuracy_and_kappa_scores.ipynb
###Markdown Making pipelines for EEG-TCNet ###Code stand = [True,True,True,True,True,True,True,True,True] for i in range(9): if not(os.path.exists('models/EEG-TCNet/S{:}/pipeline_fixed.h5'.format(i+1))): print('Making Pipeline for Subject {:}'.format(i+1)) path_for_model = 'models/EEG-TCNet/S{:}/model_fixed.h5'.format(i+1) clf = KerasClassifier(build_fn = build_model, path = path_for_model) if(stand[i]): pipe = make_pipeline(Scaler(),clf) else: pipe = make_pipeline(clf) data_path = 'data/' path = data_path+'s{:}/'.format(i+1) X_train,_,y_train_onehot,X_test,_,y_test_onehot = prepare_features(path,i,False) pipe.fit(X_train,y_train_onehot) X_train,_,y_train_onehot,X_test,_,y_test_onehot = prepare_features(path,i,False) y_pred = pipe.predict(X_test) dump(pipe, 'models/EEG-TCNet/S{:}/pipeline_fixed.h5'.format(i+1)) else: print('Pipeline already exists for Subject {:}'.format(i+1)) print('Done!') ###Output Pipeline already exists for Subject 1 Pipeline already exists for Subject 2 Pipeline already exists for Subject 3 Pipeline already exists for Subject 4 Pipeline already exists for Subject 5 Pipeline already exists for Subject 6 Pipeline already exists for Subject 7 Pipeline already exists for Subject 8 Pipeline already exists for Subject 9 Done! ###Markdown Making pipelines for Variable EEG-TCNet ###Code stand = [True,False,True,True,True,True,True,True,True] for i in range(9): if not(os.path.exists('models/EEG-TCNet/S{:}/pipeline.h5'.format(i+1))): print('Making Pipeline for Subject {:}'.format(i+1)) path_for_model = 'models/EEG-TCNet/S{:}/model.h5'.format(i+1) clf = KerasClassifier(build_fn = build_model,path=path_for_model) if(stand[i]): pipe = make_pipeline(Scaler(),clf) else: pipe = make_pipeline(clf) data_path = 'data/' path = data_path+'s{:}/'.format(i+1) X_train,_,y_train_onehot,X_test,_,y_test_onehot = prepare_features(path,i,False) pipe.fit(X_train,y_train_onehot) X_train,_,y_train_onehot,X_test,_,y_test_onehot = prepare_features(path,i,False) y_pred = pipe.predict(X_test) dump(pipe, 'models/EEG-TCNet/S{:}/pipeline.h5'.format(i+1)) else: print('Pipeline already exists for Subject {:}'.format(i+1)) print('Done!') ###Output Pipeline already exists for Subject 1 Pipeline already exists for Subject 2 Pipeline already exists for Subject 3 Pipeline already exists for Subject 4 Pipeline already exists for Subject 5 Pipeline already exists for Subject 6 Pipeline already exists for Subject 7 Pipeline already exists for Subject 8 Pipeline already exists for Subject 9 Done! ###Markdown Accuracy and Kappa score calculation for EEG-TCNet ###Code for i in range(9): clf = load('models/EEG-TCNet/S{:}/pipeline_fixed.h5'.format(i+1)) data_path = 'data/' path = data_path+'s{:}/'.format(i+1) X_train,_,y_train_onehot,X_test,_,y_test_onehot = prepare_features(path,i,False) y_pred = clf.predict(X_test) acc_score = accuracy_score(y_pred,np.argmax(y_test_onehot,axis=1)) kappa_score = cohen_kappa_score(y_pred,np.argmax(y_test_onehot,axis=1)) print('For Subject: {:}, Accuracy: {:}, Kappa: {:}.'.format(i+1,acc_score*100, kappa_score)) ###Output WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead. 
WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:245: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead. WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead. WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead. WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead. WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3980: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead. WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. WARNING:tensorflow:From /home/thoriri/miniconda3/envs/EEG-TCNet/lib/python3.6/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead. For Subject: 1, Accuracy: 85.76512455516014, Kappa: 0.8101928467695633. For Subject: 2, Accuracy: 65.01766784452296, Kappa: 0.5339432753888381. For Subject: 3, Accuracy: 94.5054945054945, Kappa: 0.926729767932867. For Subject: 4, Accuracy: 64.91228070175438, Kappa: 0.5318755774561132. For Subject: 5, Accuracy: 75.36231884057972, Kappa: 0.671779087459121. For Subject: 6, Accuracy: 61.395348837209305, Kappa: 0.4850076476869354. For Subject: 7, Accuracy: 87.36462093862815, Kappa: 0.8317628889235948. For Subject: 8, Accuracy: 83.76383763837639, Kappa: 0.7835188177411448. For Subject: 9, Accuracy: 78.03030303030303, Kappa: 0.7066441872940455. ###Markdown Accuracy and Kappa score calculation for Variable EEG-TCNet ###Code for i in range(9): clf = load('models/EEG-TCNet/S{:}/pipeline.h5'.format(i+1)) data_path = 'data/' path = data_path+'s{:}/'.format(i+1) X_train,_,y_train_onehot,X_test,_,y_test_onehot = prepare_features(path,i,False) y_pred = clf.predict(X_test) acc_score = accuracy_score(y_pred,np.argmax(y_test_onehot,axis=1)) kappa_score = cohen_kappa_score(y_pred,np.argmax(y_test_onehot,axis=1)) print('For Subject: {:}, Accuracy: {:}, Kappa: {:}.'.format(i+1,acc_score*100, kappa_score)) ###Output For Subject: 1, Accuracy: 89.32384341637011, Kappa: 0.8576302100925488. For Subject: 2, Accuracy: 72.43816254416961, Kappa: 0.6325715331990611. For Subject: 3, Accuracy: 97.43589743589743, Kappa: 0.9658072250353379. For Subject: 4, Accuracy: 75.87719298245614, Kappa: 0.6782800554158757. For Subject: 5, Accuracy: 83.69565217391305, Kappa: 0.7826885727783319. For Subject: 6, Accuracy: 70.69767441860465, Kappa: 0.6094853683148335. For Subject: 7, Accuracy: 93.14079422382672, Kappa: 0.9085903848825899. For Subject: 8, Accuracy: 86.71586715867159, Kappa: 0.822837219437786. 
For Subject: 9, Accuracy: 85.22727272727273, Kappa: 0.8029247377689304.
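###Markdown To compare the two variants at a glance, the per-subject scores can also be collected and summarised rather than only printed. The sketch below reuses `load`, `prepare_features`, `accuracy_score` and `cohen_kappa_score` exactly as in the loops above, so it assumes the same `data/` layout and saved pipeline files. ###Code
import numpy as np

for tag, fname in [("fixed", "pipeline_fixed.h5"), ("variable", "pipeline.h5")]:
    accs, kappas = [], []
    for i in range(9):
        clf = load('models/EEG-TCNet/S{:}/{:}'.format(i + 1, fname))
        path = 'data/' + 's{:}/'.format(i + 1)
        X_train, _, y_train_onehot, X_test, _, y_test_onehot = prepare_features(path, i, False)
        y_pred = clf.predict(X_test)
        y_true = np.argmax(y_test_onehot, axis=1)
        accs.append(accuracy_score(y_pred, y_true))
        kappas.append(cohen_kappa_score(y_pred, y_true))
    print('{:}: acc {:.2f}% +/- {:.2f}, kappa {:.3f} +/- {:.3f}'.format(
        tag, 100 * np.mean(accs), 100 * np.std(accs), np.mean(kappas), np.std(kappas)))
###Output _____no_output_____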
train/volodymyr/classification.ipynb
###Markdown Model 1 simple Dense Layers ###Code embedding_dim = 64 model1 = keras.models.Sequential([ keras.layers.Embedding(input_dim=max_features, output_dim=embedding_dim, input_length=max_len), keras.layers.Flatten(), keras.layers.Dense(2000, activation='relu'), keras.layers.Dropout(0.3), keras.layers.Dense(500, activation='relu'), keras.layers.Dropout(0.3), keras.layers.Dense(100, activation='relu'), keras.layers.Dense(len(Y[0]), activation='sigmoid') ]) model1.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['categorical_accuracy']) model1.summary() epochs = 5 checkpoint = keras.callbacks.ModelCheckpoint('val_classification_model.h5', monitor='val_categorical_accuracy', mode='max', save_best_only=True) model1.fit(np.array(X_train), np.array(Y_train), #batch_size=128, validation_data=(np.array(X_test),np.array(Y_test)), epochs=epochs, callbacks=[checkpoint]) score = model1.evaluate(np.array(X_test), np.array(Y_test)) print("Test Score:", score[0]) print("Test Accuracy:", score[1]) model1.save('classification_model.h5') ###Output _____no_output_____ ###Markdown Model 2 LSTM ###Code embedding_dim =100 model2 = keras.Sequential([ layers.Embedding(max_features, embedding_dim, input_length=max_len), layers.SpatialDropout1D(0.2), layers.LSTM(100, dropout=0.2, recurrent_dropout=0.2), layers.Dense(len(Y[0]), activation='sigmoid') ]) model2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy']) model2.summary() max_features = 10000 # maximum number of words in vocabulari 5000 max_len = 150 # max length of string batch_size = 128 epochs = 5 early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001) history = model2.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,validation_split=0.1,callbacks=[early_stopping_callback]) accr = model2.evaluate(X_test,Y_test) print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1])) output_dim = 100 inputs = keras.Input(shape=(None,), dtype="int64") # Next, we add a layer to map those vocab indices into a space of dimensionality # 'embedding_dim'. x = layers.Embedding(max_features, output_dim)(inputs) x = layers.Dropout(0.5)(x) # Conv1D + global max pooling x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x) x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x) x = layers.GlobalMaxPooling1D()(x) # We add a vanilla hidden layer: x = layers.Dense(128, activation="relu")(x) x = layers.Dropout(0.5)(x) predictions = layers.Dense(len(Y[0]), activation='softmax', name="predictions")(x) model3 = keras.Model(inputs, predictions) # Compile the model with binary crossentropy loss and an adam optimizer. 
model3.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"]) model3.summary() batch_size = 64 epochs = 5 model3.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,validation_split=0.1) accr = model3.evaluate(X_test,Y_test) print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1])) # one hot encoding for authors def encode_authors(author_code): qty = df.author.max() result = [0] * (qty + 1) result[author_code] = 1 return result df.author = df.author.apply(encode_authors) # one hot encoding for years year_codes ={} year_count = df.year.nunique() def encode_year(year): result = [0] * year_count if year in year_codes.keys(): result[year_codes[year]] = 1 else: result[len(year_codes)] = 1 year_codes[year] = len(year_codes) return result df['years_encoded'] = df.year.apply(encode_year) df['joined_text'] = df['text'] + df['title'] df['X2'] = df['author'] + df['years_encoded'] train_df, test_df = model_selection.train_test_split(df, test_size=0.1, random_state=42) X1_train = keras.preprocessing.sequence.pad_sequences(train_df['joined_text'].to_list(), maxlen=max_len, padding='post') X2_train = np.stack(train_df['X2']) Y2_train = np.stack(train_df['themes']) X1_test = keras.preprocessing.sequence.pad_sequences(list(test_df['joined_text']), maxlen=max_len, padding='post') X2_test = np.stack(test_df['X2']) Y2_test = np.stack(test_df['themes']) text_input = keras.Input(shape=(max_len,)) categorical_input = keras.Input(shape=(len(X2_train[0]),)) text_embedding = layers.Embedding(max_features, 64)(text_input) categorical_embedding = layers.Embedding(2, 8)(categorical_input) flat_text = layers.Flatten()(text_embedding) flat_categories = layers.Flatten()(categorical_embedding) concatenated = keras.layers.Concatenate()([flat_text, flat_categories]) dense1 = keras.layers.Dense(2000, activation='relu', )(concatenated) dense2 = keras.layers.Dense(500, activation='relu', )(dense1) dense3 = keras.layers.Dense(100, activation='relu', )(dense2) out = keras.layers.Dense(len(Y2_train[0]), activation='sigmoid')(dense3) united_model = keras.Model(inputs=[text_input, categorical_input], outputs=out) united_model.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['categorical_accuracy']) united_model.summary() united_model.fit([X1_train, X2_train], Y2_train, epochs=3, validation_split=0.1) score3 = united_model.evaluate([np.array(X1_test), np.array(X2_test)], np.array(Y2_test)) print("Test Score:", score3[0]) print("Test Accuracy:", score3[1]) df ###Output _____no_output_____
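###Markdown For completeness, a short illustration of querying the trained multi-input model for a single held-out article: predict the theme scores and list the three highest-scoring theme indices. Only `united_model`, `X1_test` and `X2_test` from the cells above are used; nothing else is assumed. ###Code
import numpy as np

# Theme scores for the first held-out article (text tokens + author/year one-hots).
scores = united_model.predict([X1_test[:1], X2_test[:1]])[0]

# Indices of the three highest-scoring themes in the multi-hot target vector.
top3 = np.argsort(scores)[::-1][:3]
print(top3, np.round(scores[top3], 3))
###Output _____no_output_____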
sms-demo-1.ipynb
###Markdown How to manage a simple SMS campaign using the OVHcloud SMS API- We are using routes described in https://api.ovh.com/console//sms- SMS documentation: https://docs.ovh.com/gb/en/sms/ 1. Loading the credentials- Visit https://api.ovh.com/createToken/?GET=/sms/*&POST=/sms/*&PUT=/sms/*&DELETE=/sms/*&GET=/sms to get your credentials. - You can restrict the patterns according your needs ###Code # -*- encoding: utf-8 -*- import yaml # application_key: xxxx # application_secret: xxxx # consumer_key: xxxx # with open("credentials.yml", 'r') as file: credentials = yaml.safe_load(file) ###Output _____no_output_____ ###Markdown 2. Authenticating to the API- We are using the SDK: https://github.com/ovh/python-ovh ###Code import ovh client = ovh.Client( endpoint='ovh-eu', application_key=credentials["application_key"], application_secret=credentials["application_secret"], consumer_key=credentials["consumer_key"], ) ###Output _____no_output_____ ###Markdown 3. Getting your accounthttps://api.ovh.com/console//smsGET ###Code # List your SMS accounts service_names = client.get('/sms') # Use the first SMS account service_name = service_names[0] # e.g. sms-ab987654-1 ###Output _____no_output_____ ###Markdown 4. Create the campaignhttps://api.ovh.com/console//sms/%7BserviceName%7D/batchesPOST- `from` is the sender linked to your account - It can be a E.164 virtual number or an alphanumeric name - You can go to your control panel https://www.ovh.com/auth/ to manage your virtual numbers and senders - https://docs.ovh.com/gb/en/sms/launch_first_sms_campaign/step-2-create-a-sender_1- `to` is a list of E.164 number recipents- `message` is the message sent to each receivers using `GSM-7`/`GSM 03.38` charset- `noStop` tells if each sent SMS should NOT have a stop clause - The clause is appended to the message by our backend - The stop clause is **mandatory** for marketing campaigns - e.g. `STOP 36184` preceded by a newline character ###Code import json campaign_name = "SMS BlackFriday promotions" params = { "name": campaign_name, "from": "+33700000002", "to": ["+33600000001"], "message": "SMS promotions, -13% on each packs", "noStop": False, #optional } batch = client.post('/sms/%s/batches' % (service_name), **params) # Display the created batch print(json.dumps(batch, indent=2, sort_keys=True)) ###Output _____no_output_____ ###Markdown 5. Waiting for batch completionhttps://api.ovh.com/console//sms/%7BserviceName%7D/batches/%7Bid%7DGET ###Code import time batch_id = batch["id"] last_status = batch["status"] while True: batch = client.get('/sms/%s/batches/%s' % (service_name, batch_id)) if batch["status"] == "COMPLETED": print("The batch is completed") break if batch["status"] == "FAILED": print("The batch is failed") break if batch["status"] != last_status: print("Status: %s" % batch["status"]) last_status = batch["status"] print(".", end="") time.sleep(10) # wait 10s ###Output _____no_output_____ ###Markdown 6. Show completed batch ###Code if len(batch["errors"]) == 0: print("No errors on receivers during the processing of the batch.\n") else: print("The following receivers have not been processed:") for error in batch["errors"]: print("%s: %s" % (error["receiver"], error["message"])) print(json.dumps(batch, indent=2, sort_keys=True)) ###Output _____no_output_____ ###Markdown 7. 
Get campaign statisticshttps://api.ovh.com/console//sms/%7BserviceName%7D/batches/%7Bid%7D/statisticsGET ###Code statistics = client.get('/sms/%s/batches/%s/statistics' % (service_name, batch_id)) print("Estimated credits: %s" % statistics["estimatedCredits"]) print("Used credits: %s" % statistics["credits"]) print("Number of SMS discarded during the processing: %d" % len(batch["errors"])) print("Number of pending SMS: %d" % statistics["pending"]) print("Number of sent SMS: %d" % statistics["sent"]) print("Number of delivered SMS to receivers: %d" % statistics["delivered"]) print("Number of not delivered SMS to receivers: %d" % statistics["failed"]) print("Number of SMS that received a STOP from receivers: %d" % statistics["stoplisted"]) ###Output _____no_output_____
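###Markdown The create/poll/inspect steps above can be folded into a single helper when several campaigns are sent. This is only a sketch built from the same `/sms/{serviceName}/batches` routes shown in sections 4-7; the function name and the polling interval are arbitrary choices. ###Code
import time

def send_batch_and_wait(client, service_name, name, sender, receivers, message, poll_s=10):
    """Create a batch, poll until it reaches a final state, then return it with its statistics."""
    params = {
        "name": name,
        "from": sender,
        "to": receivers,
        "message": message,
        "noStop": False,
    }
    batch = client.post('/sms/%s/batches' % service_name, **params)
    while batch["status"] not in ("COMPLETED", "FAILED"):
        time.sleep(poll_s)
        batch = client.get('/sms/%s/batches/%s' % (service_name, batch["id"]))
    stats = client.get('/sms/%s/batches/%s/statistics' % (service_name, batch["id"]))
    return batch, stats

# Example call, reusing the client and service_name created above:
# batch, stats = send_batch_and_wait(client, service_name, "SMS BlackFriday promotions",
#                                    "+33700000002", ["+33600000001"], "SMS promotions, -13% on each packs")
###Output _____no_output_____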
data/train_model.ipynb
###Markdown Train model. ###Code clf = RandomForestClassifier(n_estimators=10) clf.fit(X, y) ###Output _____no_output_____ ###Markdown Persist model to disk. ###Code joblib.dump(clf, './clf_iris.model') ###Output _____no_output_____
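###Markdown A minimal companion cell for loading the persisted classifier back and using it. Since `X` and `y` are prepared outside the cells shown here, the sketch assumes they are the scikit-learn iris features and labels, which the file name `clf_iris.model` suggests. ###Code
import joblib
from sklearn import datasets

clf = joblib.load('./clf_iris.model')

X, y = datasets.load_iris(return_X_y=True)
print("accuracy on the full iris set: %.3f" % clf.score(X, y))
print("example prediction:", clf.predict(X[:1]))
###Output _____no_output_____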
breast cancer prediction ( logistic regression).ipynb
###Markdown `cancer` is the object holding the imported breast-cancer dataset. ###Code
df = pd.DataFrame(cancer.data,columns = cancer.feature_names)
###Output _____no_output_____ ###Markdown The cell above builds a DataFrame from the data, using the dataset's feature names as column labels. ###Code
df.head()
###Output _____no_output_____ ###Markdown The `df.head()` function shows the first 5 rows. ###Code
df.shape
###Output _____no_output_____ ###Markdown `df.shape` returns the dimensions as a `(rows, columns)` tuple. ###Code
data = pd.DataFrame(cancer.feature_names)
data.head(30)
###Output _____no_output_____ ###Markdown The features are the measured parameters used for the analysis. ###Code
type(data.head())
###Output _____no_output_____ ###Markdown Train/test split ###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(cancer.data, cancer.target,test_size = 0.2, random_state = 42)
###Output _____no_output_____ ###Markdown `cancer.target` holds the class labels, and `test_size = 0.2` reserves 20% of the samples for testing; the `df` variables are pandas DataFrame objects. ###Code
df1 = pd.DataFrame(cancer.target)
df1.head()
x_train.shape
x_test.shape
114/569
from sklearn.linear_model import LogisticRegression
for i in range(1,21):
    log_reg = LogisticRegression(max_iter = i)
    log_reg.fit(x_train,y_train)
    print(log_reg.score(x_train,y_train))
log_reg.score(x_train,y_train)
log_reg.score(x_test, y_test)
###Output _____no_output_____
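###Markdown Beyond the plain accuracy score, a slightly fuller evaluation on the same split is easy to add: a confusion matrix and per-class precision/recall. This sketch reuses `log_reg`, `x_test` and `y_test` from the cells above and only standard scikit-learn utilities. ###Code
from sklearn.metrics import confusion_matrix, classification_report

y_pred = log_reg.predict(x_test)

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=cancer.target_names))
###Output _____no_output_____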
LabFiles/Module 2/Ex2.4 Thompson Beta.ipynb
###Markdown DAT257x: Reinforcement Learning Explained Lab 2: Bandits Exercise 2.4 Thompson Beta ###Code
import numpy as np
import sys

if "../" not in sys.path:
    sys.path.append("../") 

from lib.envs.bandit import BanditEnv
from lib.simulation import Experiment

#Policy interface
class Policy:
    #num_actions: (int) Number of arms [indexed by 0 ... num_actions-1]
    def __init__(self, num_actions):
        self.num_actions = num_actions
    
    def act(self):
        pass
        
    def feedback(self, action, reward):
        pass
###Output _____no_output_____ ###Markdown Now let's implement a Thompson Beta algorithm. ###Code
#Thompson Beta policy
class ThompsonBeta(Policy):
    def __init__(self, num_actions):
        Policy.__init__(self, num_actions)
        #PRIOR Hyper-params: successes = 1; failures = 1
        self.total_counts = np.zeros(num_actions, dtype = np.longdouble)
        self.name = "Thompson Beta"
        
        #For each arm, maintain success and failures
        self.successes = np.ones(num_actions, dtype = np.int)
        self.failures = np.ones(num_actions, dtype = np.int)
    
    def act(self):
        """Sample beta distribution from success and failures"""
        """Play the max of the sampled values"""
        current_action = 0
        return current_action
    
    def feedback(self, action, reward):
        if reward > 0:
            self.successes[action] += 1
        else:
            self.failures[action] += 1
        self.total_counts[action] += 1
###Output _____no_output_____ ###Markdown Now let's prepare the simulation. ###Code
evaluation_seed = 1239
num_actions = 10
trials = 10000
distribution = "bernoulli"
###Output _____no_output_____ ###Markdown What do you think the regret graph would look like? ###Code
env = BanditEnv(num_actions, distribution, evaluation_seed)
agent = ThompsonBeta(num_actions)
experiment = Experiment(env, agent)
experiment.run_bandit(trials)
###Output _____no_output_____
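###Markdown The `act` method above is left as a stub that always plays arm 0. One possible completion, following its own docstrings (draw one Beta(successes, failures) sample per arm and play the argmax), is sketched below; the subclass is illustrative and is not part of the original lab code. ###Code
class ThompsonBetaSampled(ThompsonBeta):
    def act(self):
        #One Beta draw per arm from its current success/failure counts
        sampled_means = np.random.beta(self.successes, self.failures)
        #Play the arm with the largest sampled value
        return int(np.argmax(sampled_means))

#agent = ThompsonBetaSampled(num_actions)  # drop-in replacement for the stub above
###Output _____no_output_____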
Modulo1/Clase3/Homework3_MarkovNetworks.ipynb
###Markdown Markov Networks In the third homework you will review some concepts about Markov Networks that we saw in class, and you will also learn more about some additional topics. For the theoretical exercises, please be as explicit and clear as possible. Furthermore, use the $\LaTeX$ math mode that notebooks offer. If further questions arise, please use the Slack channel, or write to me at [email protected]. Image retrieved from: https://upload.wikimedia.org/wikipedia/en/7/7b/A_simple_Markov_network.png.___ 1. Logistic regression as CRFs Consider a CRF:- Over the binary RVs $\bar{X}=\{X_1, \dots, X_n\}$ and $\bar{Y} = \{Y\}$.- There are pairwise edges between $Y$ and each $X_i$.- The factors are defined as: $$\phi_i(Y, X_i)=\exp(w_i \boldsymbol{1}\{X_i=1, Y=1\}),$$ where $w_i\in\mathbb{R}$ and $\boldsymbol{1}$ stands for the indicator function. - Moreover, there is a single-node factor $\phi_0(Y)=\exp(w_0 \boldsymbol{1}\{Y=1\})$. Show that the conditional probability distribution this CRF encodes corresponds to the logistic regression distribution:$$P(Y=1 | \bar{x}) = \frac{\exp\left(w_0 + \sum_{i=1}^{n} w_i x_i\right)}{1 + \exp\left(w_0 + \sum_{i=1}^{n} w_i x_i\right)}$$ ###Code
from IPython.display import Image
Image("figures/LogisticCRF.png")
###Output _____no_output_____
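###Markdown A quick numerical sanity check of the claim (illustrative only, not a substitute for the requested derivation): multiply the factors for random weights and a random binary $\bar{x}$, normalize over $Y$, and compare with the logistic expression. ###Code
import numpy as np

rng = np.random.default_rng(0)
n = 5
w0 = rng.normal()
w = rng.normal(size=n)
x = rng.integers(0, 2, size=n)

def unnormalized(y):
    #phi_0(Y) * prod_i phi_i(Y, X_i), with the indicator-based factors defined above
    val = np.exp(w0 * (y == 1))
    val *= np.exp(np.sum(w * ((x == 1) & (y == 1))))
    return val

p_y1 = unnormalized(1) / (unnormalized(0) + unnormalized(1))
logistic = np.exp(w0 + w @ x) / (1 + np.exp(w0 + w @ x))
print(np.isclose(p_y1, logistic))  # should print True
###Output _____no_output_____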
scratchpad/voids_paper/notebooks/scratch/smart_vis_devel.ipynb
###Markdown Segment a sparse 3D image with a single material component The goal of this notebook is to develop a 3D segmentation algorithm that improves segmentation where features are detected.**Data:** AM parts from Xuan Zhang. ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns import cupy as cp from tomo_encoders.misc import viewer from tomo_encoders import DataFile, Grid from skimage.filters import threshold_otsu from tomo_encoders.reconstruction.recon import recon_binning import h5py import sys import time pixel_res = 1.17 # micrometer per pixel b = 4 b_K = 4 wd = 32 def transform_ax2(img): img = np.fliplr(img) img = np.rot90(img) return img hf = h5py.File('/data02/MyArchive/aisteer_3Dencoders/tmp_data/projs_2k.hdf5', 'r') projs = np.asarray(hf["data"][:]) theta = np.asarray(hf['theta'][:]) center = float(np.asarray(hf["center"])) hf.close() sys.path.append('/home/atekawade/TomoEncoders/scratchpad/voids_paper') from surface_determination import Voids size_threshs = np.linspace(3, 15, 6, endpoint = True)/(b*pixel_res) fig, ax = plt.subplots(2, 3, figsize = (12,6)) for iplot, size_thresh in enumerate(size_threshs): st_chkpt = cp.cuda.Event(); end_chkpt = cp.cuda.Event(); st_chkpt.record() voids_b = Voids() voids_b.guess_voids(projs, theta, center, b, b_K) voids_b.select_by_size(size_thresh) p_sel, r_fac = voids_b.export_grid(wd) p_sel = p_sel.rescale(b) print(f'\tSTAT: size thres: {size_thresh:.2f} pixel length') end_chkpt.record(); end_chkpt.synchronize(); t_chkpt = cp.cuda.get_elapsed_time(st_chkpt,end_chkpt) print(f"time checkpoint {t_chkpt/1000.0:.2f} secs") cp.fft.config.clear_plan_cache() ax.flat[iplot].hist(np.cbrt(voids_b["sizes"])*pixel_res*b, bins = 50) ax.flat[iplot].set_xlabel("micrometers") ax.flat[iplot].set_xlim([0,100]) p_sel.vol_shape ###Output _____no_output_____
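###Markdown A small arithmetic note on the unit bookkeeping above (added here for clarity): the size thresholds are given in micrometers and divided by `b*pixel_res` to express them as edge lengths in binned voxels, which is what the "pixel length" printout reports; the histogram converts void sizes back the other way with `np.cbrt(sizes)*pixel_res*b`. ###Code
# Micrometers -> binned-voxel edge length and back, with b = 4 and pixel_res = 1.17 um/voxel.
for s_um in [3.0, 7.8, 15.0]:
    s_vox = s_um / (b * pixel_res)
    print("%5.1f um -> %.2f binned voxels -> back: %.1f um" % (s_um, s_vox, s_vox * b * pixel_res))
###Output _____no_output_____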
ces/.ipynb_checkpoints/fit2-checkpoint.ipynb
###Markdown $f(t)=1/{(1+e^{-\alpha t- \beta})}$ ###Code def logist(p,t): return [1/(1+np.exp(-p[0]*i-p[1])) for i in np.array(t)-p[2]] def errfunc(p,t,y): return y-logist(p,t) from scipy import optimize ###Output _____no_output_____ ###Markdown $f(t)=1/{(1+e^{-\alpha \cdot {e^{-\gamma t}} - \beta})}$ ###Code def logist2(p,t): return [1/(1+np.exp(-p[0]*(np.exp(-p[3]*i))-p[1])) for i in np.array(t)-p[2]] def errfunc2(p,t,y): return y-logist2(p,t) from scipy import optimize #run once c1/=100.0 c1b/=100.0 c2/=100.0 c2b/=100.0 def plotter(ax,x,y,c,l,z=2,zz=2,step=2,w=-50,w2=30): yrs=range(x[0]-40,x[len(x)-1]+10) maxi=[0,0] maxv=-100 #try a few initial values for maximum rsquared i=0 for k in range(1,5): p0 = [1., 1., x[len(x)*k/5]] fit2 = optimize.leastsq(errfunc,p0,args=(x,y),full_output=True) ss_err=(fit2[2]['fvec']**2).sum() ss_tot=((y-y.mean())**2).sum() rsquared=1-(ss_err/ss_tot) if rsquared>maxv: maxi=[i,k] maxv=rsquared i=maxi[0] k=maxi[1] p0 = [1., 1., x[len(x)*k/5], -1+i*0.5] fit2 = optimize.leastsq(errfunc,p0,args=(x,y),full_output=True) ss_err=(fit2[2]['fvec']**2).sum() ss_tot=((y-y.mean())**2).sum() rsquared=1-(ss_err/ss_tot) ax.scatter(x[::step],y[::step],lw*3,color=c) #ax.plot(yrs,logist(fit2[0],yrs),color="#006d2c",lw=lw) ax.plot(yrs,logist(fit2[0],yrs),color="#444444",lw=lw) #ax.plot(yrs,logist(fit2[0],yrs),color=c,lw=1) yk=logist([fit2[0][0],fit2[0][1],fit2[0][2],fit2[0][3]],range(3000)) mint=0 maxt=3000 perc=0.1 for i in range(3000): if yk[i]<perc: mint=i if yk[i]<1-perc: maxt=i if z>-1: coord=len(x)*z/5 ax.annotate('$R^2 = '+str(np.round(rsquared,2))+'$\n'+\ '$\\alpha = '+str(np.round(fit2[0][0],2))+'$\n'+\ '$\\beta = '+str(np.round(fit2[0][1],2))+'$\n'+\ '$\\Delta t = '+str(int(maxt-mint))+'$', xy=(yrs[coord], logist(fit2[0],yrs)[coord]),\ xycoords='data', xytext=(w, w2), textcoords='offset points', color="#444444", arrowprops=dict(arrowstyle="->",color='#444444')) coord=len(x)*zz/5 ax.annotate(l, xy=(yrs[coord], logist(fit2[0],yrs)[coord]),\ xycoords='data', xytext=(w, w2), textcoords='offset points', arrowprops=dict(arrowstyle="->")) fig, ax = plt.subplots(1,1,subplot_kw=dict(axisbg='#EEEEEE',axisbelow=True),figsize=(10,5)) lw=2 colors=["#756bb1","#d95f0e"] ax.scatter([-10],[-10],color=colors[0],label="Coal") ax.scatter([-10],[-10],color=colors[1],label="Liquids") m=c1.argmax() yes=True i=0 while yes: if c1b[i]>0.005: yes=False else: i+=1 x=years1[:m] y=c1[:m] plotter(ax,x,y,colors[0],'UK',2,3,3) x=years1[i:] y=c1b[i:] plotter(ax,x,y,colors[1],'UK$^1$',-1,5,2,20,-20) m=cc1.argmax() x=yearss1[:m] y=cc1[:m] plotter(ax,x,y,colors[0],'Germany',-1,2,2,-80) m=cc22.argmax() x=yearss3[:m] y=cc22[:m] plotter(ax,x,y,colors[1],'US',-1,5,2,-35,15) m=cc4.argmax() x=yearss4[:m] y=cc4[:m] plotter(ax,x,y,colors[0],'US',-1,5,2,20,-20) m=cc5.argmax() x=yearss5[:m] y=cc5[:m] plotter(ax,x,y,colors[1],'World',-1,5,2,35,-35) m=cc6.argmax() x=yearss6[:m] y=cc6[:m] plotter(ax,x,y,colors[0],'World',-1,5,2,-70) ax.grid(color='white', linestyle='solid') ax.set_xlabel('Years') ax.set_xlim([1500,2010]) ax.set_ylim([0,1]) ax.legend(loc=2,framealpha=0.5,fontsize=11) ax.set_title('Share in primary energy use',size=12) #ax[1].set_title('Share in energy use for all transport',size=12) ax.text(0.01,0.84,'$^1$includes nuclear $(<5\%)$', horizontalalignment='left', verticalalignment='top', transform=ax.transAxes,fontsize=7) plt.suptitle(u'Historical energy transitions of fossil fuels \n $f(t)=1/{\{1+exp[-\\alpha (t-t_0) - \\beta}]\}$',fontsize=13,y=1.06) plt.savefig('ces8.png',bbox_inches = 'tight', 
pad_inches = 0.1, dpi=150) plt.show() ###Output _____no_output_____
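###Markdown A worked consequence of the fitted form (a side note, assuming $\alpha>0$): with $f(t)=1/\{1+\exp[-\alpha (t-t_0)-\beta]\}$, solving $f(t)=p$ gives $t=t_0-\left(\beta+\ln\frac{1-p}{p}\right)/\alpha$, so the $10\%$-to-$90\%$ transition time reported as $\Delta t$ above is $$\Delta t = t_{0.9}-t_{0.1}=\frac{2\ln 9}{\alpha}\approx\frac{4.39}{\alpha},$$ which is what the `mint`/`maxt` scan in `plotter` estimates numerically.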
notebooks/exploratory/208_afox_papermill.ipynb
###Markdown Prepare papermill for schulung3.geomar.de Make sure you have activated the correct kernel Install kernel manually ###Code !python -m ipykernel install --user --name parcels-container_2021.09.29-09ab0ce !jupyter kernelspec list ###Output Available kernels: parcels-container_2021.03.17-6c459b7 /home/jupyter-workshop007/.local/share/jupyter/kernels/parcels-container_2021.03.17-6c459b7 parcels-container_2021.09.29-09ab0ce /home/jupyter-workshop007/.local/share/jupyter/kernels/parcels-container_2021.09.29-09ab0ce py3_lagrange_v2.2.2 /home/jupyter-workshop007/.local/share/jupyter/kernels/py3_lagrange_v2.2.2 python3 /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/share/jupyter/kernels/python3 ###Markdown Run papermill on schulung3.geomar.de ###Code %%bash for year in {1990..2019}; do papermill 208_afox_volume_budget_sigma_lab_sea.ipynb \ ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_${year}.ipynb \ -p year $year \ -p mean_period "1m" \ -k parcels-container_2021.09.29-09ab0ce done ###Output Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1990.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpg4ngnew5' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmplkwco2bm' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:17<00:00, 1.59s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1991.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpjgonb4kd' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmprzmo5avb' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.70s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1992.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to 
/home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpme8714s8' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpg1ojlh_p' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.70s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1993.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpt43jkxem' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmppwvtamkw' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:22<00:00, 1.69s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1994.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp8sq_q0z8' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpiwhvrqfy' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.72s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1995.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpy9tja4at' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to 
/home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpvul52wso' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.73s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1996.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmphasl1wqt' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmprwy1uwt2' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:22<00:00, 1.69s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1997.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpwn2xseh3' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp71pc6w8w' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:25<00:00, 1.74s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1998.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmprcevapic' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpvgl_tffx' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.73s/cell] Input Notebook: 
208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_1999.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpndiac28s' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpptx948oi' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:25<00:00, 1.74s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2000.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpiqy3gg_0' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpk71mlztk' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.70s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2001.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpx0ovvbyh' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp54sqez5y' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.71s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2002.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to 
/home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp4qrvh59p' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpoznonx6g' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.71s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2003.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmplj2q4hvi' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmph3nfl6_b' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.72s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2004.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp9tkm_ltz' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmppqkop3s9' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.73s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2005.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp3t4sw6jw' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to 
/home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpljbtgzb4' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.73s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2006.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp8fmx3d99' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpt4rsblb2' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.72s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2007.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmphfs0hor5' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmps15382v6' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.72s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2008.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpq015zvdz' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpotoaomoi' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.73s/cell] Input Notebook: 
208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2009.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp3dj_vhrc' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp925xxu6m' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.71s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2010.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpizqey5p_' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp_09x_vl3' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.71s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2011.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp6pe77ei2' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpl2s440dp' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:25<00:00, 1.74s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2012.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to 
/home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpgpa4tilt' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp52alvc1k' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.73s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2013.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpsn6g3o17' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpnd0pwzu7' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.72s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2014.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpb41o3x11' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpglrrmkvj' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.71s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2015.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmppwhj0v1c' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to 
/home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpisn5tbzj' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.71s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2016.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp7rbwd42e' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp0oxwlpwj' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.72s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2017.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpst6oe9ad' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp667duup7' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.70s/cell] Input Notebook: 208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2018.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp1382ty4t' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmpjsj5g137' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:24<00:00, 1.72s/cell] Input Notebook: 
208_afox_volume_budget_sigma_lab_sea.ipynb Output Notebook: ../executed/208_afox_volume_budget_sigma_lab_sea/208_afox_volume_budget_sigma_lab_sea_2019.ipynb Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/Grammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/Grammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp3c6kdbmr' Generating grammar tables from /opt/tljh/user/envs/parcels-container_2021.09.29-09ab0ce/lib/python3.9/site-packages/blib2to3/PatternGrammar.txt Writing grammar tables to /home/jupyter-workshop007/.cache/black/21.9b0/PatternGrammar3.9.7.final.0.pickle Writing failed: [Errno 2] No such file or directory: '/home/jupyter-workshop007/.cache/black/21.9b0/tmp421t9l_v' Executing: 0%| | 0/49 [00:00<?, ?cell/s]Executing notebook with kernel: parcels-container_2021.09.29-09ab0ce Executing: 100%|██████████| 49/49 [01:23<00:00, 1.69s/cell]
Assignment 2/Code_template.ipynb
###Markdown Import libraries and load dataset Compute support and confidence for all possible $X \Rightarrow Y$ rules Compute support and confidence for all possible $X,Y \Rightarrow Z$ rules (5 points) Compute support and confidence for all possible $W,X,Y \Rightarrow Z$ rules (5 points) Compute support and confidence for all possible $V,W,X,Y \Rightarrow Z$ rules (5 points) User defined function to print description of a rule (with support and confidence) (5 points) Call a user defined function ###Code premise_1 = 1 premise_2 = 3 conclusion = 4 print_rule(premise_1, premise_2, conclusion) ###Output _____no_output_____
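###Markdown Note that the cell above assumes `print_rule` has already been defined in the earlier sections; the template leaves that function to you. A minimal, non-authoritative sketch of what it could look like is below - the dictionaries `rule_support` and `rule_confidence` are hypothetical placeholders for whatever structures you used to store the support and confidence values computed above:
```python
# Hypothetical sketch: assumes rule_support and rule_confidence are dictionaries
# keyed by (premise_1, premise_2, conclusion) and filled in by the earlier steps.
def print_rule(premise_1, premise_2, conclusion):
    key = (premise_1, premise_2, conclusion)
    print(f"Rule: {premise_1}, {premise_2} => {conclusion}")
    print(f"  support    = {rule_support[key]:.4f}")
    print(f"  confidence = {rule_confidence[key]:.4f}")
```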
docs/guides/02-Authentication.ipynb
###Markdown AuthenticationThis guide focuses on the basics of authentication in CARTOframes.Authentication is needed to store your data tables and map visualizations in your CARTO account, to use [Data Services](/developers/cartoframes/guides/Data-Services/) (geocoding, isolines) or the [Data Observatory](/developers/cartoframes/guides/Data-Observatory/) (download, enrichment). If you don't already have an account, you can [create one here](https://carto.com/signup/).> You don't need to be authenticated to visualize your local data with CARTOframes. Getting your API KeyOnce you have created an account, you need to get your **Master API Key**. The API keys page can be accessed from your [CARTO Dashboard](https://carto.com/help/tutorials/your-dashboard-overview/). Once there, click on your avatar to open the dashboard menu. The API keys link will be shown.![API Keys link - CARTO Dashboard](img/credentials/dashboard.png)From here, copy the **Master** API Key to use in the next section.![Master API Key - CARTO Dashboard](img/credentials/api-keys.png) Setting your credentials[Credentials](/developers/cartoframes/reference/heading-Auth) class is used to load your credentials in CARTOframes. This should be passed to every method that interacts with your CARTO account. You can create multiple instances of credentials to manage different CARTO accounts.There are different ways to set them but we recommend using the one that reads the credentials from a JSON file:```pyfrom cartoframes.auth import Credentialscreds = Credentials.from_file('creds.json')```With [set_default_credentials](/developers/cartoframes/reference/heading-Auth), the same user's authentication will be used by every CARTOframes component by default, so you don't need to pass the parameter to every method that requires it.```pyfrom cartoframes.auth import set_default_credentialsset_default_credentials('creds.json')```Example `creds.json` file using the credentials above:```json{ "username": "johnsmith", "api_key": "b1ff3ed88761070116180189d9a1f5cb9cc80654"}``` Credential parameters- `username`: your CARTO account username- `api_key`: your CARTO account API Key. If the data to be accessed is **public**, it can be set to `default_public`- `base_url`: only needed for on-premise or custom installations. Typically in the form of `https://username.carto.com/` for user `username`. On-premises installation (and others) may have a different URL pattern. Google Cloud credentialsCARTO's [Data Observatory](/developers/cartoframes/guides/Data-Observatory/) is built on top of Google BigQuery, so every CARTO Enterprise organization has an associated Google Cloud account to run Data Observatory operations.In case you have an Enterprise account and want to perform data operations directly with the Python BigQuery client, you can obtain the credentials of the associated Google Cloud account and create a Google Credentials instance. ###Code from cartoframes.auth import Credentials from google.oauth2.credentials import Credentials as GoogleCredentials creds = Credentials.from_file('creds.json') gcloud_project, gcloud_token = creds.get_gcloud_credentials() gcloud_credentials = GoogleCredentials(gcloud_token) ###Output _____no_output_____
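###Markdown As an illustrative follow-up (not part of the original guide), the project and credentials returned above could then be passed to the standard BigQuery client, assuming the `google-cloud-bigquery` package is installed:
```py
from google.cloud import bigquery

# Sketch: build a BigQuery client that reuses the CARTO-provided
# Google Cloud project and OAuth credentials obtained above
client = bigquery.Client(project=gcloud_project, credentials=gcloud_credentials)
```
This simply reuses the `gcloud_project` and `gcloud_credentials` objects created in the previous cell.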
python/basic_python.ipynb
###Markdown **Hello, World!**Welcome to the world of python. In this section, we simply print a string using python code. You can say it's kind of a ritual/tradition for anyone who sets out on a journey in coding with any language - our way of saying "Hey world, here I come!" 😉 ###Code print('Hello World!!') print('Get ready to rock-N-roll ') # we saw that the lines automatically shifted to new lines. # although we can do this manually too, in one print statement print('Hello World!! \nGet ready to rock-N-roll') ###Output Hello World!! Get ready to rock-N-roll ###Markdown So what was that `\n` ?Read this - https://qr.ae/pGSFo0 **Interact with python - get an input/data**To do some work, we need something to begin with - right? For example, you need a bat and a ball to play cricket. Similarly, while coding we need some data or input from the user on which we will do some work. ###Code #asking the user for a text (name) username = input('Please enter your name - ') #let's greet the user now print('Hello ' + username) #OK so let's go one step ahead ;) -- just to give you a feel of what we can do with programming #now it's time to help the user know his/her age #ask the user's birth year year = input('Please enter your Birth Year - ') #calculating the age, here we have to change the type for year from string to integer age = 2021-int(year) #now we will print the results print(f'Hey {username}, You are {age} year(s) old :) ') ###Output Hey ineelhere, You are 2 year(s) old :) ###Markdown * using `f` before writing the print statement saves us a lot of time and effort* just need to keep in mind that we are keeping the variables in `{}`* read more here - https://docs.python.org/3/tutorial/inputoutput.html **Dealing with Numbers**So, in programming you can do lots of mathematical operations. But for that you need to have an idea of which datatypes are to be considered - or even not considered! This section will introduce you to some of the possibilities and we'll cover the other complex stuff as we progress.You've got it. Keep going! 🙂 ###Code # first let us understand the datatypes. # integer datatype print(f"The datatype for variable 20 is {type(20)}") # <class 'int'> # float datatype print(f"The datatype for variable 20.02 is {type(20.02)}") # <class 'float'> # string datatype print(f"The datatype for variable 'abcd efg hijk lmnop' is {type('abcd efg hijk lmnop')}") #<class 'str'> #get the binary for the number print(bin(2020)) #prints the binary form of 2020 which is 0b11111100100 #get number(integer) from binary form print(int('0b11111100100', 2)) # the 2 tells int() to read the string as a base-2 number # ---------------- that would be enough for now to understand the datatypes. # Let us now perform some arithmetic operations #arithmetic operations without variables print(f"Sum of 3 and 5 is {3+5}") print(f"difference of 3 and 5 is {3-5}") print(f"Product of 3 and 5 is {3*5}") print(f"Fraction or Division of 3 and 5 is {3/5}") print(f"exponent of 3 with 5 is {3**5}") #exponent print(f"Modulus of 3 and 5 is {3%5}")#mod #arithmetic operations with variables a = input("Enter a number. It will be stored as 'a' = ") b = input("Enter another number. It will be stored as 'b' = ") #python accepts inputs as str. So whenever we need to perform any mathematical operations, we need to change the datatypes print(f"You see, I am writing here a+b but the output will not be the sum. \nInstead, you will see the two numbers will be concatenated! 
\nHere is the output = {a+b}") a = float(a) #keeping in float is safer as user might feed data with decimals b = float(b) #keeping in float is safer as user might feed data with decimals print(f"Sum of {a} and {b} is {a+b}") print(f"difference of {a} and {b} is {a-b}") print(f"Product of {a} and {b} is {a*b}") print(f"Fraction or Division of {a} and {b} is {a/b}") print(f"exponent of {a} with {b} is {a**b}") #exponent print(f"Modulus of {a} and {b} is {a%b}")#mod ###Output Sum of 12.0 and 3.0 is 15.0 difference of 12.0 and 3.0 is 9.0 Product of 12.0 and 3.0 is 36.0 Fraction or Division of 12.0 and 3.0 is 4.0 exponent of 12.0 with 3.0 is 1728.0 Modulus of 12.0 and 3.0 is 0.0 ###Markdown **Math Functions in Python**To make our lives easier, there are many in-built special functions that are very useful to do specific tasks.Here we will see few of the in-built functions that can be used to perform mathematical operations.- first we need to import the math module. Read more here https://docs.python.org/3/library/math.html- This module provides access to the mathematical functions defined by the C standard.- These functions cannot be used with complex numbers; use the functions of the same name from the cmath module if you require support for complex numbers. The distinction between functions which support complex numbers and those which don’t is made since most users do not want to learn quite as much mathematics as required to understand complex numbers. Receiving an exception instead of a complex result allows earlier detection of the unexpected complex number used as a parameter, so that the programmer can determine how and why it was generated in the first place.- The following functions are provided by this module. Except when explicitly noted otherwise, all return values are floats. ###Code #importing the module import math # --------------Number-theoretic and representation functions-------------------------------------- long_string = ''' math.ceil(x) Return the ceiling of x, the smallest integer greater than or equal to x. If x is not a float, delegates to x.__ceil__(), which should return an Integral value. ''' print(long_string) print("\n--------------------math.ceil(x)-------------------------------") print(f"math.ceil(x) --- for number = 404 --- {math.ceil(404)}") print(f"math.ceil(x) --- for number = 404.01 --- {math.ceil(404.01)}") print(f"math.ceil(x) --- for number = 404.36 --- {math.ceil(404.36)}") print(f"math.ceil(x) --- for number = 404.50 --- {math.ceil(404.50)}") print(f"math.ceil(x) --- for number = 404.86 --- {math.ceil(404.86)}") print("---------------------------------------------------------------\n") long_string = ''' math.comb(n, k) Return the number of ways to choose k items from n items without repetition and without order. Evaluates to n! / (k! * (n - k)!) when k <= n and evaluates to zero when k > n. Also called the binomial coefficient because it is equivalent to the coefficient of k-th term in polynomial expansion of the expression (1 + x) ** n. Raises TypeError if either of the arguments are not integers. Raises ValueError if either of the arguments are negative. ''' print(long_string) ###Output math.comb(n, k) Return the number of ways to choose k items from n items without repetition and without order. Evaluates to n! / (k! * (n - k)!) when k <= n and evaluates to zero when k > n. Also called the binomial coefficient because it is equivalent to the coefficient of k-th term in polynomial expansion of the expression (1 + x) ** n. 
Raises TypeError if either of the arguments are not integers. Raises ValueError if either of the arguments are negative. ###Markdown Explore more here - https://www.programiz.com/python-programming/modules/math **Strings in Python**Here we will see how to handle strings in python.When we deal with data, we mostly deal with strings - which we then reformat according to our choices.So, it is important that we deal properly with the strings such that we don't lose data ###Code # write a long string (multiple lines without using \n) long_string = ''' Hello there! We are currently creating a long string. Write multiple lines here, without any worries. B-) ''' print(long_string) #using escape sequences #it's difficult to insert a special character in a string or print statement. #so, we use \ as our saviour! print("See, we are writing \" in a print statement without any worries!") print('Isn\'t it awesome??') #newline print("This is the first line \nThis is the second line") #backspace print("This is an incomplete li\bne") #horizontal tab print("Here comes the tab\tGot that??") #print a backslash itself print("So, here is the \\ you wanted to see!") #formatting a string (we have already seen this before, now it is time to realize it !!) a = 2020 print("This code was written in the year "+str(a)) #here the number is printed in form of a string otherwise it throws an error #TypeError: can only concatenate str (not "int") to str print("After 10 years it will be the year "+str(a+10)) #same explanation as above #now let us use a shortcut print(f"The code is written in the year {a}") #see, how simple it is to format a string!! print(f"After 10 years it will be the year {a+10}") #how to get a string index text = "Climate change is real!" print(text) print(text[1:10]) #counting starts from 0 print(text[0:10]) #now see the difference print(text[:10]) #prints first 10 elements print(text[::]) #prints Everything print(text[-1]) #first element starting from the end of the string print(text[-3]) #third element starting from the end of the string print(text[::-1]) #prints in reverse order ###Output Climate change is real! limate ch Climate ch Climate ch Climate change is real! ! a !laer si egnahc etamilC ###Markdown * There are many more things to know about strings. * You are welcome to add anything relevant you wish to in this notebook!* Please collaborate and contribute :) **String functions in Python**Just like we used the math-functions above, this is also quite similar. But here you wouldn't have to import a module.Follow the code below (let the code do the talking!)Note: This section discusses one of the functions to get you started. There are many more available. 
Just Google them!Reference - https://www.w3schools.com/python/python_ref_string.asp ###Code mystring = 'lights WILL guide YOU home\n' # capitalize() Converts the first character to upper case print(f"\ncapitalize() Converts the first character to upper case \n\nOriginal string = {mystring} \nResult string = {mystring.capitalize()}") # casefold() Converts string into lower case print(f"\ncasefold() Converts string into lower case\n\nOriginal string = {mystring} \nResult string = {mystring.casefold()}") # center() Returns a centered string temp = "banana" print(f"\ncenter() Returns a centered string\n\nOriginal string = {temp} ") temp = temp.center(20, "0") print(f"\nResult string = {temp}") ###Output Result string = 0000000banana0000000 ###Markdown **Lists in Python**Python has several features which are used in all sorts of programming endeavors. One of them is a "list".Like always, Follow the code below (let the code do the talking!)This set of codes has been generously contributed by **Mr. Bittesh Barman.**Mr. Bittesh is a **PhD student at the Department of Chemistry, Pondicherry University, India**. Visit this URL to view his works - https://www.researchgate.net/profile/Bittesh-Barman-2 Thank you! ###Code # Working with Lists! cars = ["honda","hundai","tata"] # this is a list type data structure. each elements in list is called item. print(cars) print(cars[0])# we can call any item in this list by its index no. print(cars[2]) # Changing items in a list shoping_cart = ["Pencil", "notebook","book"] print(shoping_cart) shoping_cart[0] = "pen" # we can change item by using the index no. print(shoping_cart) #Appending to a list fruits = ['banana','orange','watermelon'] fruits.append('grapes') # we can add items in list using append method. print(fruits) # The 'insert()' method! weapons = ['pan', 'assult rifle', 'shotgun', 'pistol'] weapons.insert(3, 'sniper') # we can add item in any position of the list by using insert method. print(weapons) ###Output ['pan', 'assult rifle', 'shotgun', 'sniper', 'pistol'] ###Markdown **Tuples in Python**A tuple is a sequence of immutable (meaning unchanging over time or unable to be changed) Python objects. Follow the code below (let the code do the talking!) ###Code #defining a tuple tuple_1 = ('India', 'Japan', 100, 90, 85674); tuple_1 ###Output _____no_output_____ ###Markdown Please note that in defining a tuple, a semicolon is used! (not mandatory though). So those python memes donot hold TRUE here 😉 ###Code #size of the tuple len(tuple_1) ###Output _____no_output_____ ###Markdown The size is 5 but if we see the index, it starts with 0. Let's have a look here ###Code #Accessing elements inside the tuple print(f"The first element - {tuple_1[0]}\nThe second element - {tuple_1[1]}\nThe last element - {tuple_1[len(tuple_1)-1]} ") ###Output The first element - India The second element - Japan The last element - 85674 ###Markdown The last element was obtained by using the last index via the code `tuple_1[len(tuple_1)-1]` Just for fun!**CAUTION - Tuples are immutable**So, if we write `tuple_1[0] = some value` we will get an error! **Dictionaries in Python**Dictionaries store elements in a key-value pair format. Dictionary elements are accessed via keys while List elements are accessed by their index. Follow the code below (let the code do the talking!) 
###Code #defining a dictionary dy = { "Country": "India", "Currency": "INR", "Continent": "Asia", "Language": "Multiple"} dy #Access a dictionary (using the key and not the value) dy["Country"] ###Output _____no_output_____ ###Markdown Try this - `dy["India"]` You will get an error. We need to use the key to access a specific value! ###Code #adding data to dictionary dy["Capital"] = "Delhi" dy # We can Overwrite dictionary values too dy["Currency"] = "Indian Rupee" dy # Deleting data in dictionary del dy["Language"] dy ###Output _____no_output_____ ###Markdown So the Language key was deleted. Now let us delete the whole dictionary ###Code del dy #done ###Output _____no_output_____ ###Markdown **Comparison Operators**Used to compare 2 or more values and decide if the condition is True or False Follow the code below (let the code do the talking!)Let us consider a random variable 'x' with a random numerical value stored in it. Following is how we can compare the value stored in 'x' with other numerical entities.- Equals: x == 5- Not equal: x != 5- Greater than: x > 5- Greater than or equal to: x >= 5- Less than: x < 5- Less than or equal to: x <= 5The outcome is always in the form of "True" or "False" - Boolean ###Code # let us declare a variable with a numerical value x = 1001 print(x==5) print(x!=5) print(x > 5) print(x >= 5) print( x < 5) print( x <= 5) ###Output False ###Markdown Now that we know how these work, we can proceed to use them for decision making - i.e., **If-else statements** **Conditional Statements (IF-ELSE)**Think of this scenario - if I score at least 40% in the exam, I will pass, else I will fail.So, here the condition for me passing the exam is to reach the 40% mark, which can be expressed as ">=" (didn't understand? Study the previous section!). Now, it just has to be conveyed to the computer and here's how it is done!Follow the code below (let the code do the talking!)- It is basically the if-else statement- An if statement is generally followed by an optional else statement- The results are always Boolean- The else statement runs if the if condition evaluates to False. ###Code pass_marks = float(input("Enter your marks")) if pass_marks>=40.0: print("You passed the exam.") else: print("Well, it didn't work this time. But you can do it. Please don't give up.") pass_marks = float(input("Enter your marks")) if pass_marks>=40.0: print("You passed the exam.") else: print("Well, it didn't work this time. But you can do it. Please don't give up.") ###Output Enter your marks40.001 You passed the exam. ###Markdown So, that's how we deal with the if-else statements in python.Note: **Always remember to take care of the indentation!** **Nested or Multiple IF-ELSE (also called ELIF)??**Sometimes, we need to put up multiple conditions for an event to happen. For that, we use IF-ELSE statements multiple times.So, this is how we do it in python!**Multiple if-else**Let us consider that the criteria to get a job interview are at least an 8.0 CGPA and at least 2 years of experience.So following would be the way to deal with the situation*Note - I'm sad that individuals get judged like this. Skills matter. 
Not numbers.* ###Code cgpa = float(input("what is your CGPA out of 10.0?")) if cgpa >=8.0: experience = float(input("how many years of experience do you have?")) if experience>=2.0: print("You are eligible for an interview") else: print("Sorry, although you have at least 8.0 GPA, you lack a minimum experience of 2 years.") else: print("Sorry, you need minimum 8.0 CGPA to be eligible") cgpa = float(input("what is your CGPA out of 10.0?")) if cgpa >=8.0: experience = float(input("how many years of experience do you have?")) if experience>=2.0: print("You are eligible for an interview") else: print("Sorry, although you have at least 8.0 GPA, you lack a minimum experience of 2 years.") else: print("Sorry, you need minimum 8.0 CGPA to be eligible") cgpa = float(input("what is your CGPA out of 10.0?")) if cgpa >=8.0: experience = float(input("how many years of experience do you have?")) if experience>=2.0: print("You are eligible for an interview") else: print("Sorry, although you have at least 8.0 GPA, you lack a minimum experience of 2 years.") else: print("Sorry, you need minimum 8.0 CGPA to be eligible") ###Output what is your CGPA out of 10.0?10 how many years of experience do you have?100 You are eligible for an interview ###Markdown **The elif statement**Let us just write a simple code where the user enters a number from 1 to 5 and the code prints the number in words. ###Code num = int(input("Enter a number between 1 to 5 - ")) if num == 1: print('One') elif num == 2: print('Two') elif num==3: print('Three') elif num==4: print("Four") else: print("Five") num = int(input("Enter a number between 1 to 5 - ")) if num == 1: print('One') elif num == 2: print('Two') elif num==3: print('Three') elif num==4: print("Four") else: print("Five") ###Output Enter a number between 1 to 5 - 2 Two ###Markdown ...and we can keep going like this.**So, basically the elif statement is nothing but an if statement after an else statement.** **Loops in python**Generally, in a program, statements are executed line by line, it means sequentially, but when a block of code needs to be executed multiple times, then what to do? Programming languages comes with that provision also, using loops.Python supports two types of loop* while loop* for loopThis set of codes has been generously contributed by **Mr. Tapas Saha**. Mr. Tapas is a **PhD student at the Department of Computer Science & Engineering, Tezpur University, India**. 
Visit this URL to view his works - https://www.researchgate.net/profile/Tapas-Saha-3 Thank you!**While Loop** A while loop allows the user to execute a group of statements repeatedly, but it checks the condition before entering the loop body.The repetitions will continue until the condition becomes false.Syntax of while loop:```while expression: statement(s)```Examples:- Print the numbers 1 to 5: ###Code # initialization i=1 while i<=5: print("Number ::",i) # increment the counter i=i+1 ###Output Number :: 1 Number :: 2 Number :: 3 Number :: 4 Number :: 5 ###Markdown Another Example* Sum of the first n natural numbers ###Code # sum = 1+2+3+...+n #Take input from the user n = int(input("Enter n: ")) # initialization sum = 0 i = 1 while i <= n: sum = sum + i i = i+1 # print the sum print("The sum is", sum) ###Output Enter n: 10 The sum is 55 ###Markdown **For Loops:** A for loop is used in python to iterate over the items of any sequence.It can be a list, or a tuple, or a dictionary, or a set, or a string.Syntax of for loop:```for x in sequence : body```Example:* Print each character of the given string ###Code string=" Python" for i in string : print(i) ###Output P y t h o n ###Markdown Another Example:* Print each character of a user-input string along with its index. ###Code #Take input from the user, string=input("Enter some String: ") # initialization i=0 for x in string : print("index of",x,"is:",i) # print i=i+1 ###Output Enter some String: Hello, I am Tapas Saha! index of H is: 0 index of e is: 1 index of l is: 2 index of l is: 3 index of o is: 4 index of , is: 5 index of is: 6 index of I is: 7 index of is: 8 index of a is: 9 index of m is: 10 index of is: 11 index of T is: 12 index of a is: 13 index of p is: 14 index of a is: 15 index of s is: 16 index of is: 17 index of S is: 18 index of a is: 19 index of h is: 20 index of a is: 21 index of ! is: 22 ###Markdown One more example!* Program to calculate the sum and product of all numbers stored in a list ###Code # List of numbers n = [4, 9, 5, 10] # initialization sum = 0 mul=1 for i in n: sum = sum+i mul=mul*i print("The sum is", sum) #print the product print("The multiplication is", mul) ###Output The sum is 28 The multiplication is 1800 ###Markdown **Functions in python**Nothing to write here. Let the code do the talking! 🔥🔥 Functions are like recipes. Suppose you and I want to bake a cake. You went online and googled a recipe. Now we both will follow the same recipe but you want to use chocolate flavour and I want to use pineapple! So we follow the same recipe but produce new results.A function is thus a block of code that can be re-used or run whenever it is needed. Information passed to a function is called an argument (the ingredients for the recipe analogy)**Defining the function** Let us create a dedicated function that can add 2 numbers ###Code #defining the function def add(a,b): return (a+b) # Calling a function # We will now feed some data to the function to get the sum value x = float(input("Hey user, enter the first number - ")) y = float(input("Nice! now enter the second number - ")) print(f"The sum of {x} and {y} is {add(x,y)}") ###Output Hey user, enter the first number - 2 Nice! now enter the second number - 56 The sum of 2.0 and 56.0 is 58.0 ###Markdown Saw that? 
Now just imagine how cool it would be to have a dedicated function for doing a more complex task!! **Lambda Expressions**A lambda expression is used to create small, elegant anonymous functions, generally used with `filter()` and `map()` ###Code m = lambda n:n**2 m(3) ###Output _____no_output_____ ###Markdown **The `map()` function**- This takes in a function and a list.- The function performs an operation on the entire list and returns the results in a new list.- Let us see it work with a simple cube of a number ###Code my_list = [5, 10, 55 , 568, 468, 77] output_list = list(map( lambda x: x**3, my_list)) print(output_list) ###Output [125, 1000, 166375, 183250432, 102503232, 456533] ###Markdown **The `filter()` function*** This performs an operation on a list based on a specific condition after filtering* Let us see it work with a simple condition statement - numbers less than or equal to 201 ###Code my_list = [5, 10, 55 , 568, 468, 77] condition = list(filter(lambda x: (x <= 201), my_list)) print(condition) ###Output [5, 10, 55, 77] ###Markdown **Error handling in python**This is one of the most important concepts to make a coder's life easier. Just walk through the code and you'll get what I mean. Let the code do the talking! 🔥🔥 While running automated code that works on unknown or new data, it might happen that the code encounters an error at some point. You, as a coder, might not want the execution process to stop. Rather, you would like to have a notification that the error was found and that particular execution was bypassed. This is where the **`try-except`** feature of python comes to the rescue! ###Code # understanding an error - let us try to print an undefined variable print(name) a=111 b=222 print(a+b) # Notice that the subsequent code was not executed after the error. # Now let us attempt to bypass the error part of the code and move on to the next executions try: print(name) except NameError: print("Error - The variable has no value defined to be printed in the first place.") except: print("Error - Not sure what the error is, but there is something wrong!") a=111 b=222 print(a+b) ###Output Error - The variable has no value defined to be printed in the first place. 333 ###Markdown The above example only shows an application of the built-in exceptions in python.There are many built-in exceptions available to be used.You can learn about them here - https://docs.python.org/3/library/exceptions.html#bltin-exceptions Till then, have fun! **File handling in python**Let the code do the talking! 🔥🔥 **File Operations using python*** Modes for file handling* Creating a file - "x"* Reading a file - "r"* Writing a file - "w"* Appending a file - "a"**Creating a file** Here we create a .txt (text) file, which we will use in the next steps! ###Code f = open("file.txt", "x") # open() is used to open the file we want to create/read/write/append ###Output _____no_output_____ ###Markdown "f" above can be considered as a file handler. One can use other names too! Now it's time to write some data in the file ###Code f.write("This is a text file!") ###Output _____no_output_____ ###Markdown The output above is the number of characters we wrote into the file.**Reading a file** This only works when the file name mentioned actually exists, just like one can only read a book if the book actually exists! ###Code f = open("file.txt", "r") print(f.read()) ###Output This is a text file! 
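###Markdown A small extra (not in the original tutorial): instead of reading the whole file at once with `read()`, you can also loop over the file object to process it line by line, which is handy for larger files. A quick sketch, reusing the `file.txt` created above:
```python
# Read file.txt line by line instead of all at once
f = open("file.txt", "r")
for line in f:
    print(line.strip())  # strip() removes the trailing newline character
f.close()  # close the file when we are done with it
```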
###Markdown **Writing to a file** This creates a new file if the file name used does not exist, just like one can write a book if the book does not already exist (I know the analogy is a bit lame, but please bear with me! 😅) ###Code f = open("file.txt", "w") f.write("This sentence replaces the sentence already present in the file named 'file.txt'") #let's check the result f = open("file.txt", "r") print(f.read()) # Now, let's try writing to a file that does not exist. We will see that the file is created for us before writing into it. f = open("another-file.txt", "w") f.write("This sentence is present in the file named 'another-file.txt'") f = open("another-file.txt", "r") print(f.read()) ###Output This sentence is present in the file named 'another-file.txt' ###Markdown Nice!!**Appending to a file** Works the same as writing to a file. The only difference is that it does not replace the pre-existing text. ###Code f = open("file.txt", "r") print(f.read()) f = open("file.txt", "a") f.write("This sentence appends to the sentence already present in the file named 'file.txt'") f = open("file.txt", "r") print(f.read()) # Now, let's try appending to a file that does not exist. We will see that the file is created for us before appending into it. f = open("another-file2.txt", "a") f.write("This sentence is present in the file named 'another-file2.txt'") f = open("another-file2.txt", "r") print(f.read()) ###Output This sentence is present in the file named 'another-file2.txt'
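###Markdown One closing tip (an addition, not from the original notebook): the examples above never call `f.close()`, and leaving files open can keep writes buffered. The usual idiom is the `with` statement, which closes the file automatically:
```python
# The with-statement closes the file for us, even if an error occurs
with open("file.txt", "a") as f:
    f.write("\nThis line was added using a with-statement")

with open("file.txt", "r") as f:
    print(f.read())
```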
ArtList.ipynb
###Markdown ART Version : 3.9 ###Code from art import * ###Output _____no_output_____ ###Markdown Art Counter ###Code ART_COUNTER ###Output _____no_output_____ ###Markdown Art List ###Code art_list() ###Output 100$ [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** 3 ᕙ༼ ,,ԾܫԾ,, ༽ᕗ ****************************** 5 ᕙ༼ ,,இܫஇ,, ༽ᕗ ****************************** 9/11 truth ✈__✈ █ █ ▄ ****************************** acid ⊂(◉‿◉)つ ****************************** afraid ( ゚ Д゚) ****************************** airplane1 ‛¯¯٭٭¯¯(▫▫)¯¯٭٭¯¯’ ****************************** airplane2 ✈ ****************************** ak-47 ︻┳デ═— ****************************** aliens (<>..<>) ****************************** almost cared ╰╏ ◉ 〜 ◉ ╏╯ ****************************** american money [($)] ****************************** angel ^i^ ****************************** angry ლ(ಠ益ಠ)ლ ****************************** angry face (⋟﹏⋞) ****************************** angry2 ( ͠° ͟ʖ ͡°) ****************************** ankush ︻デ┳═ー*----* ****************************** arrow »»---------------------► ****************************** arrowhead ⤜(ⱺ ʖ̯ⱺ)⤏ ****************************** atish (| - _ - |) ****************************** awesome <:3 )~~~ ****************************** awkward •͡˘㇁•͡˘ ****************************** badass (⌐■_■)--︻╦╤─ - - - ****************************** bagel nln >_< nln ****************************** barbell ▐━━━━━▌ ****************************** bat ^O^ ****************************** bautista (╯°_°)╯︵ ━━━ ****************************** bear ʕ•ᴥ•ʔ ****************************** because ∵ ****************************** bee ¸.·´¯`·¸¸.·´¯`·.¸.-<\^}0=: ****************************** being draged ╰(◣﹏◢)╯ ****************************** bender ¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦ ****************************** big eyes ⺌∅‿∅⺌ ****************************** big nose ˚∆˚ ****************************** bird (⌒▽⌒) ****************************** birds ~(‾▿‾)~ ****************************** blackeye 0__# ****************************** bomb !!( ’ ‘)ノノ⌒●~* ****************************** boobies (. )( .) ****************************** boobs (.)(.) ****************************** boom box ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♫♪ ****************************** british money [£::] ****************************** bullshit |3ᵕᶦᶦᶳᶣᶨᶵ ****************************** bunny (\_/) ****************************** butt (‿|‿) ****************************** butterfly Ƹ̵̡Ӝ̵̨̄Ʒ ****************************** camera [◉"] ****************************** canoe .,.,\______/,..,., ****************************** car `o##o> ****************************** car race ∙،°. 
˘Ô≈ôﺣ » » » ****************************** care crowd (-(-_(-_-)_-)-) ****************************** cassette |[●▪▪●]| ****************************** cat face ⦿⽘⦿ ****************************** cat smile ≧◔◡◔≦ ****************************** cat1 =^..^= ****************************** cat2 龴ↀ◡ↀ龴 ****************************** caterpillar ,/\,/\,/\,/\,/\,/\,o ****************************** catlenny ( ͡° ᴥ ͡°) ****************************** chainsword |O/////[{:;:;:;:;:;:;:;:;> ****************************** chair ╦╣ ****************************** charly +:) ****************************** cheer ^(¤o¤)^ ****************************** chess ♞▀▄▀▄♝▀▄ ****************************** chess pieces ♚ ♛ ♜ ♝ ♞ ♟ ♔ ♕ ♖ ♗ ♘ ♙ ****************************** chu (´ε` ) ****************************** cigaret ()___)____________) ****************************** cigarette (____((____________()~~~ ****************************** coffee c[_] ****************************** coffee now {zzz}°°°( -_-)>c[_] ****************************** crab (\|) ._. (|/) ****************************** crayons ((̲̅ ̲̅(̲̅C̲̅r̲̅a̲̅y̲̅o̲̅l̲̲̅̅a̲̅( ̲̅((> ****************************** cry (╯︵╰,) ****************************** crying Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰_Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰ ****************************** cthulhu ^(;,;)^ ****************************** cute cat ^⨀ᴥ⨀^ ****************************** dab ヽ( •_)ᕗ ****************************** dagger cxxx|;:;:;:;:;:;:;:;> ****************************** dalek ̵̄/͇̐\ ****************************** damnyou (ᕗ ͠° ਊ ͠° )ᕗ ****************************** dance (>'-')> <('_'<) ^('_')\- \m/(-_-)\m/ <( '-')> \_( .")> <(._.)-` ****************************** dancee ♪┏(°.°)┛┗(°.°)┓┗(°.°)┛┏(°.°)┓ ♪ ****************************** dancing people ‎(/.__.)/ \(.__.\) ****************************** dead eyes ¿ⓧ_ⓧﮌ ****************************** death star defense team |-o-| (-o-) |-o-| ****************************** decorate ▂▃▅▇█▓▒░۩۞۩ ۩۞۩░▒▓█▇▅▃▂ ****************************** depressed (︶︹︶) ****************************** derp ヘ(。□°)ヘ ****************************** devil ]:-> ****************************** dgaf ┌∩┐(◣ _ ◢)┌∩┐ ****************************** dice [: :] ****************************** dick 8====D ****************************** dna sample ~ ****************************** dog ˁ˚ᴥ˚ˀ ****************************** domino [: :|:::] ****************************** don fuller ╭∩╮(Ο_Ο)╭∩╮ ****************************** drowning 人人人ヾ( ;×o×)〃 人人人 ****************************** drunkenness ヽ(´ー`)┌ ****************************** dummy <-|-'_'-|-> ****************************** dunno ¯\(°_o)/¯ ****************************** eastbound fish ><((((> ****************************** eaten apple [===]-' ****************************** eds dick 8=D ****************************** eeriemob (-(-_-(-_(-_(-_-)_-)-_-)_-)_-)-) ****************************** elephant °j°m ****************************** emo (///_ ;) ****************************** energy つ ◕_◕ ༽つ つ ◕_◕ ༽つ ****************************** envelope ✉ ****************************** epic gun ︻┳デ═— ****************************** eric >--) ) ) )*> ****************************** error (╯°□°)╯︵ 
ɹoɹɹƎ ****************************** exchange (╯°□°)╯︵ ǝƃuɐɥɔxǝ ****************************** eye closed (╯_╰) ****************************** eyes ℃ↂ_ↂↃ ****************************** face •|龴◡龴|• ****************************** facepalm (>ლ) ****************************** fail o(╥﹏╥)o ****************************** fart (ˆ⺫ˆ๑)<3 ****************************** fat ass (__!__) ****************************** faydre (U) [^_^] (U) ****************************** finger1 ╭∩╮(Ο_Ο)╭∩╮ ****************************** finger2 ┌∩┐(◣_◢)┌∩┐ ****************************** finn | (• ◡•)| ****************************** fish invasion ›(̠̄:̠̄c ›(̠̄:̠̄c (¦Ҝ (¦Ҝ ҉ - - - ¦̺͆¦ ▪▌ ****************************** fish swim ¸.·´¯`·.´¯`·.¸¸.·´¯`·.¸><(((º> ****************************** fish1 ><(((('> ****************************** fish2 ><> ****************************** flex ᕙ(⇀‸↼‶)ᕗ ****************************** formula1 car \ō͡≡o˞̶ ****************************** fox -^^,--,~ ****************************** frown (ღ˘⌣˘ღ) ****************************** fu (ಠ_ಠ)┌∩┐ ****************************** fuck off t(-.-t) ****************************** fuck you nlm (-_-) mln ****************************** fuckall ╭∩╮(︶︿︶)╭∩╮ ****************************** fungry Σ_(꒪ཀ꒪」∠)_ ****************************** ghost ‹’’›(Ͼ˳Ͽ)‹’’› ****************************** gimme ༼ つ ◕_◕ ༽つ ****************************** glasses -@-@- ****************************** glasses2 ᒡ◯ᵔ◯ᒢ ****************************** glitter (*・‿・)ノ⌒*:・゚✧ ****************************** go away bear ╭∩╮ʕ•ᴥ•ʔ╭∩╮ ****************************** gotit (☞゚∀゚)☞ ****************************** gtalk fit (•̪̀●́)=ε/̵͇̿̿/'̿̿ ̿ ̿̿ N --------{---(@ ****************************** guitar c====(=#O| ) ~~ ♬·¯·♩¸¸♪·¯·♫¸ ****************************** gun ︻╦╤─ ****************************** hacksaw [|^^^^^^^ ****************************** hairstyle ⨌⨀_⨀⨌ ****************************** hal @_'-' ****************************** happy ۜ\(סּںסּَ` )/ۜ ****************************** happy birthday 1 ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ ****************************** happy square 【ツ】 ****************************** happy2 ⎦˚◡˚⎣ ****************************** happy3 ㋡ ****************************** head shot ->~∑≥_≤) ****************************** headphone d[-_-]b ****************************** heart1 »-(¯`·.·´¯)-> ****************************** heart2 ♡ ****************************** heart3 <3 ****************************** hell yeah (òÓ,)_\,,/ ****************************** hello (ʘ‿ʘ)╯ ****************************** help ٩(͡๏̯͡๏)۶ ****************************** high five ( ⌒o⌒)人(⌒-⌒ )v ****************************** homer (_8(|) ****************************** homer simpson =(:o) ****************************** honeycute ❤◦.¸¸. 
◦✿ ****************************** house __̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___ ****************************** hoxom h(o x o )m ****************************** hug me (っ◕‿◕)っ ****************************** huhu █▬█ █▄█ █▬█ █▄█ ****************************** human •͡˘㇁•͡˘ ****************************** hybrix ʕʘ̅͜ʘ̅ʔ ****************************** i dont care ╭∩╮(︶︿︶)╭∩╮ ****************************** i kill you ̿ ̿̿'̿̿\̵͇̿̿\=(•̪●)=/̵͇̿̿/'̿̿ ̿ ̿ ****************************** inlove (✿ ♥‿♥) ****************************** jaymz (•̪●)==ε/̵͇̿​̿/’̿’̿ ̿ ̿̿ `(•.°)~ ****************************** jazz musician ヽ(⌐■_■)ノ♪♬ ****************************** john lennon ((ºjº)) ****************************** jokeranonimous ╭∩╮ (òÓ,) ╭∩╮ ****************************** jokeranonimous2 ╭∩╮(ô¿ô)╭∩╮ ****************************** joy n_n ****************************** kablewee ̿' ̿'\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/'̿'̿ ̿ ****************************** killer (⌐■_■)--︻╦╤─ - - - (╥﹏╥) ****************************** kilroy was here " Ü " ****************************** king -_- ****************************** kirby (つ -‘ _ ‘- )つ ****************************** kirby dance <(''<) <( ' ' )> (> '')> ****************************** kiss (o'3'o) ****************************** kiss my ass (_x_) ****************************** kitty =^..^= ****************************** knife )xxxxx[;;;;;;;;;> ****************************** koala @( * O * )@ ****************************** kokain ̿ ̿' ̿'\̵͇̿̿\з=(•̪●)=ε/̵͇̿̿/'̿''̿ ̿ ****************************** kyubey /人 ⌒ ‿‿ ⌒ 人\ ****************************** kyubey2 /人 ◕‿‿◕ 人\ ****************************** laughing (^▽^) ****************************** lenny ( ͡° ͜ʖ ͡°) ****************************** line brack ●▬▬▬▬๑۩۩๑▬▬▬▬▬● ****************************** linqan :Q___ ****************************** loading ███▒▒▒▒▒▒▒ ****************************** long rose ---------------------{{---<((@) ****************************** looking face ô¿ô ****************************** love ⓛⓞⓥⓔ ****************************** love in my eye (♥_♥) ****************************** love you »-(¯`·.·´¯)-><-(¯`·.·´¯)-« ****************************** love2 ~♡ⓛⓞⓥⓔ♡~ ****************************** machinegun ,==,-- ****************************** mail box |M|/ ****************************** man spider /╲/\༼ *ಠ 益 ಠ* ༽/\╱\ ****************************** man tears ಥ_ಥ ****************************** mango ) _ _ __/°°¬ ****************************** marge simpson ()()():| ****************************** med ب_ب ****************************** med man (⋗_⋖) ****************************** meditation ‿( ́ ̵ _-`)‿ ****************************** meep \(°^°)/ ****************************** melp1 (<>..<>) ****************************** melp2 (<(<>(<>.(<>..<>).<>)<>)>) ****************************** message1 (¯`·._.·(¯`·._.· ·._.·´¯)·._.·´¯) ****************************** message2 ,.-~*´¨¯¨`*·~-.¸-(-,.-~*´¨¯¨`*·~-.¸ ****************************** metal \m/_(>_<)_\m/ ****************************** mini penis =D ****************************** mis mujeres (-(-_(-_-)_-)-) ****************************** monkey @('_')@ ****************************** monocle (╭ರ_•́) ****************************** monster ٩(̾●̮̮̃̾•̃̾)۶ ****************************** monster2 ٩(- ̮̮̃-̃)۶ ****************************** mouse ----{,_,"> ****************************** mtmtika :o + :p = 69 ****************************** musical ¸¸♬·¯·♩¸¸♪·¯·♫¸¸¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** myancat 
mmmyyyyy<⦿⽘⦿>aaaannn ****************************** nathan ♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪ ****************************** needle |==|iiii|>----- ****************************** neo (⌐■_■)--︻╦╤─ - - - ****************************** nope t(-_-t) ****************************** nose \˚ㄥ˚\ ****************************** nose2 |'L'| ****************************** oar ===========(8888) ****************************** old lady boobs |\o/\o/| ****************************** owlkin (ᾢȍˬȍ)ᾢ ļ ļ ļ ļ ļ ****************************** pac man ᗧ···ᗣ···ᗣ·· ****************************** panda ヽ( ̄(エ) ̄)ノ ****************************** party time ┏(-_-)┛┗(-_- )┓┗(-_-)┛┏(-_-)┓ ****************************** peace yo! (‾⌣‾)♉ ****************************** penis 8===D ****************************** penis2 ○○)=======o) ****************************** perky ( ๏ Y ๏ ) ****************************** pictou |\_______(#*#)_______/| ****************************** pig1 ^(*(oo)*)^ ****************************** pig2 ༼☉ɷ⊙༽ ****************************** piggy (∩◕(oo)◕∩) ****************************** ping pong ( •_•)O*¯`·.¸.·´¯`°Q(•_• ) ****************************** pirate ✌(◕‿-)✌ ****************************** pistols1 ¯¯̿̿¯̿̿'̿̿̿̿̿̿̿'̿̿'̿̿̿̿̿'̿̿̿)͇̿̿)̿̿̿̿ '̿̿̿̿̿̿\̵͇̿̿\=(•̪̀●́)=o/̵͇̿̿/'̿̿ ̿ ̿̿ ****************************** pistols2 ̿' ̿'\̵͇̿̿\з=(◕_◕)=ε/̵͇̿̿/'̿'̿ ̿ ****************************** playing in snow (╯^□^)╯︵ ❄☃❄ ****************************** point (☞゚ヮ゚)☞ ****************************** polar bear ˁ˚ᴥ˚ˀ ****************************** possessed <>_<> ****************************** professor """⌐(ಠ۾ಠ)¬""" ****************************** puls ––•–√\/––√\/––•–– ****************************** punch O=('-'Q) ****************************** rak /⦿L⦿\ ****************************** rare ┌ಠ_ಠ)┌∩┐ ᶠᶸᶜᵏ♥ᵧₒᵤ ****************************** real face ( ͡° ͜ʖ ͡°) ****************************** regular ass (_!_) ****************************** religious ☪ ✡ † ☨ ✞ ✝ ☥ ☦ ☓ ♁ ☩ ****************************** roadblock X+X+X+X+X ****************************** robber -╤╗_(◙◙)_╔╤- - - - \o/ \o/ \o/ ****************************** robot boy ◖(◣☩◢)◗ ****************************** robot1 d[ o_0 ]b ****************************** robot2 c[○┬●]כ ****************************** rock on \,,/(^_^)\,,/ ****************************** rocket ∙∙∙∙∙·▫▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ☼)===> ****************************** roke _\m/ ****************************** rope ╚(▲_▲)╝ ****************************** rose1 --------{---(@ ****************************** rose2 @}}>----- ****************************** rose3 @-->--->--- ****************************** round bird ,(u°)> ****************************** round cat ~(^._.) ****************************** russian boobs [.][.] 
****************************** ryans dick 8======D ****************************** sad1 ε(´סּ︵סּ`)з ****************************** sad2 (✖╭╮✖) ****************************** sat '(◣_◢)' ****************************** satan ↑_(ΦwΦ;)Ψ ****************************** scissors ✄ ****************************** sean the sheep <('--')> ****************************** sex symbol ◢♂◣◥♀◤◢♂◣◥♀◤ ****************************** shark ~~~~~~^~~~~~ ****************************** shark attack ~~~~~~\o/~~~~~/\~~~~~ ****************************** shocked (∩╹□╹∩) ****************************** shrug ¯\_(ツ)_/¯ ****************************** singing d(^o^)b¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** singing2 ♪└( ̄◇ ̄)┐♪ ****************************** sky free ѧѦ ѧ ︵͡︵ ̢ ̱ ̧̱ι̵̱̊ι̶̨̱ ̶̱ ︵ Ѧѧ ︵͡ ︵ ѧ Ѧ ̵̗̊o̵̖ ︵ ѦѦ ѧ ****************************** sleeping (-.-)Zzz... ****************************** sleeping baby [{-_-}] ZZZzz zz z... ****************************** sleepy coffee ( -_-)旦~ ****************************** slenderman ϟƖΣNd€RMαN ****************************** smooth (づ  ̄ ³ ̄)づ ⓈⓂⓄⓄⓉⒽ ****************************** smug bastard (‾⌣‾) ****************************** snail '-'_@_ ****************************** sniper rifle ︻デ┳═ー ****************************** sniperstars ✯╾━╤デ╦︻✯ ****************************** snowing ✲´*。.❄¨¯`*✲。❄。*。 ****************************** snowman ☃ ****************************** sophie <XX""XX> ****************************** sorreh bro (◢_◣) ****************************** sparkling heart -`ღ´- ****************************** spear >>-;;;------;;--> ****************************** spell cast ╰( ⁰ ਊ ⁰ )━☆゚.*・。゚ ****************************** sperm ~~o ****************************** spider1 //O\ ****************************** spider2 /\oo/\ ****************************** spot ( . Y . ) ****************************** squee ヾ(◎o◎,,;)ノ ****************************** squid くコ:彡 ****************************** srs face (ಠ_ಠ) ****************************** star in my eyes <*_*> ****************************** stars ✌⊂(✰‿✰)つ✌ ****************************** stars in my eyes <*_*> ****************************** stars2 ⋆ ✢ ✣ ✤ ✥ ✦ ✧ ✩ ✪ ✫ ✬ ✭ ✮ ✯ ✰ ★ ****************************** sunglasses (•_•)>⌐■-■ (⌐■_■) ****************************** sunny day ☁ ▅▒░☼‿☼░▒▅ ☁ ****************************** superman -^mOm^- ****************************** superman logo /s\ ****************************** sword1 (===||:::::::::::::::> ****************************** sword2 ▬▬ι═══════ﺤ -═══════ι▬▬ ****************************** sword3 ס₪₪₪₪§|(Ξ≥≤≥≤≥≤ΞΞΞΞΞΞΞΞΞΞ> ****************************** sword4 |O/////[{:;:;:;:;:;:;:;:;> ****************************** sword5 <%%%%|==========> ****************************** table flip (╯°□°)╯︵ ┻━┻ ****************************** teddy ˁ(⦿ᴥ⦿)ˀ ****************************** teepee /|\ ****************************** telephone ε(๏_๏)з】 ****************************** text decoration (¯`·._.··¸.-~*´¨¯¨`*·~-.,-(__)-,.-~*´¨¯¨`*·~-.¸··._.·´¯) ****************************** thanks \(^-^)/ ****************************** this guy (☞゚∀゚)☞ ****************************** this is areku d(^o^)b ****************************** tie-fighter |—O—| ****************************** train /˳˳_˳˳\!˳˳X˳˳!(˳˳_˳˳)[˳˳_˳˳] ****************************** tron (\/)(;,,;)(\/) ****************************** trumpet -=iii=<() ****************************** ufo1 .-=-. ****************************** ufo2 .-=o=-. 
****************************** ukulele { o }==(::) ****************************** umadbro ¯\_(ツ)_/¯ ****************************** up (◔/‿\◔) ****************************** upsidedown ( ͜。 ͡ʖ ͜。) ****************************** victory V(-.o)V ****************************** wat ಠ_ಠ ****************************** wat-wat Σ(‘◉⌓◉’) ****************************** waves °º¤ø,¸¸,ø¤º°`°º¤ø,¸,ø¤°º¤ø,¸¸,ø¤º°`°º¤ø,¸ ****************************** weather ☼ ☀ ☁ ☂ ☃ ☄ ☾ ☽ ❄ ☇ ☈ ⊙ ☉ ℃ ℉ ° ❅ ✺ ϟ ****************************** westbound fish < )))) >< ****************************** what? ة_ة ****************************** what?? (Ͼ˳Ͽ)..!!! ****************************** why ლ( `Д’ ლ) ****************************** wizard (∩ ͡° ͜ʖ ͡°)⊃━☆゚. * ****************************** woman ▓⚗_⚗▓ ****************************** worm _/\__/\__0> ****************************** worm2 ~ ****************************** wtf dude? \(◑д◐)>∠(◑д◐) ****************************** yessir ∠(・`_´・ ) ****************************** yo __o000o__(o)(o)__o000o__ ****************************** yolo Yᵒᵘ Oᶰˡʸ Lᶤᵛᵉ Oᶰᶜᵉ ****************************** zable ಠ_ರೃ ****************************** zoidberg (\/)(Ö,,,,Ö)(\/) ****************************** zombie 'º_º' ****************************** ###Markdown ART Version : 5.3 ###Code from art import * ###Output _____no_output_____ ###Markdown Art Counter ###Code ART_COUNTER ###Output _____no_output_____ ###Markdown Art List ###Code art_list() ###Output 3 ᕙ༼ ,,ԾܫԾ,, ༽ᕗ ****************************** 5 ᕙ༼ ,,இܫஇ,, ༽ᕗ ****************************** 9/11 truth ✈__✈ █ █ ▄ ****************************** acid ⊂(◉‿◉)つ ****************************** afraid ( ゚ Д゚) ****************************** airplane1 ‛¯¯٭٭¯¯(▫▫)¯¯٭٭¯¯’ ****************************** airplane2 ✈ ****************************** airplane3 ✈ ✈ ———- ♒✈ ****************************** ak-47 ︻┳デ═— ****************************** alien ::) ****************************** almost cared ╰╏ ◉ 〜 ◉ ╏╯ ****************************** american money1 [($)] ****************************** american money2 [̲̅$̲̅(̲̅1̲̅)̲̅$̲̅] ****************************** american money3 [̲̅$̲̅(̲̅5̲̅)̲̅$̲̅] ****************************** american money4 [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** american money5 [̲̅$̲̅(̲̅2οο̲̅)̲̅$̲̅] ****************************** angel1 ^i^ ****************************** angel2 O:-) ****************************** angry ლ(ಠ益ಠ)ლ ****************************** angry birds ( ఠൠఠ )ノ ****************************** angry face (⋟﹏⋞) ****************************** angry face2 (╬ ಠ益ಠ) ****************************** angry troll ヽ༼ ಠ益ಠ ༽ノ ****************************** angry2 ( ͠° ͟ʖ ͡°) ****************************** ankush ︻デ┳═ー*----* ****************************** arrow1 »»---------------------► ****************************** arrow2 XXX--------> ****************************** arrowhead ⤜(ⱺ ʖ̯ⱺ)⤏ ****************************** at what cost ლ(ಠ益ಠლ) ****************************** atish (| - _ - |) ****************************** awesome <:3 )~~~ ****************************** awkward •͡˘㇁•͡˘ ****************************** bad hair1 =:-) ****************************** bad hair2 =:-( ****************************** bagel nln >_< nln ****************************** band aid (̲̅:̲̅:̲̅:̲̅[̲̅ ̲̅]̲̅:̲̅:̲̅:̲̅ ) ****************************** barbell ▐━━━━━▌ ****************************** barcode1 █║▌│ █│║▌ ║││█║▌ │║║█║ │║║█║ ****************************** barcode2 ║█║▌║█║▌│║▌║▌█║ ****************************** barf (´ж`ς) ****************************** 
baseball fan q:o) ****************************** basking in glory ヽ(´ー`)ノ ****************************** bat1 ^O^ ****************************** bat2 ^v^ ****************************** bautista (╯°_°)╯︵ ━━━ ****************************** bear ʕ•ᴥ•ʔ ****************************** bear GTFO ʕ •`ᴥ•´ʔ ****************************** bear squiting ʕᵔᴥᵔʔ ****************************** bear2 (ʳ ´º㉨º) ****************************** because ∵ ****************************** bee ¸.·´¯`·¸¸.·´¯`·.¸.-<\^}0=: ****************************** being draged ╰(◣﹏◢)╯ ****************************** bender ¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦ ****************************** big eyes ⺌∅‿∅⺌ ****************************** big kiss :-X ****************************** big nose ˚∆˚ ****************************** big smile :-D ****************************** bird (⌒▽⌒) ****************************** birds ~(‾▿‾)~ ****************************** blackeye 0__# ****************************** bomb !!( ’ ‘)ノノ⌒●~* ****************************** boobies (. )( .) ****************************** boobs (.)(.) ****************************** boobs2 (·人·) ****************************** boombox1 ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♫♪ ****************************** boombox2 ♫♪ |̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅| ♫♪ ****************************** boxing ლ(•́•́ლ) ****************************** breakdown ಥ﹏ಥ ****************************** british money [£::] ****************************** buck teeth :-B ****************************** bugs bunny E=B ****************************** bullshit |3ᵕᶦᶦᶳᶣᶨᶵ ****************************** bunny (\_/) ****************************** butt (‿|‿) ****************************** butterfly Ƹ̵̡Ӝ̵̨̄Ʒ ****************************** camera [◉"] ****************************** canoe .,.,\______/,..,., ****************************** car `o##o> ****************************** car race ∙،°. 
˘Ô≈ôﺣ » » » ****************************** care crowd (-(-_(-_-)_-)-) ****************************** careless ◔_◔ ****************************** carpet roll @__ ****************************** cassette1 |[●▪▪●]| ****************************** cassette2 [¯ↂ■■ↂ¯] ****************************** cat face ⦿⽘⦿ ****************************** cat smile ≧◔◡◔≦ ****************************** cat1 =^..^= ****************************** cat2 龴ↀ◡ↀ龴 ****************************** cat3 ^.--.^ ****************************** cat4 (=^ェ^=) ****************************** caterpillar ,/\,/\,/\,/\,/\,/\,o ****************************** catlenny ( ͡° ᴥ ͡°) ****************************** chair ╦╣ ****************************** charly +:) ****************************** chasing ''⌐(ಠ۾ಠ)¬'' ****************************** cheer ^(¤o¤)^ ****************************** cheers ( ^_^)o自自o(^_^ ) ****************************** chess ♞▀▄▀▄♝▀▄ ****************************** chess pieces ♚ ♛ ♜ ♝ ♞ ♟ ♔ ♕ ♖ ♗ ♘ ♙ ****************************** chicken ʚ(•` ****************************** chu (´ε` ) ****************************** cigarette1 (̅_̅_̅_̅(̅_̅_̅_̅_̅_̅_̅_̅_̅̅_̅()ڪے ****************************** cigarette2 (____((____________()~~~ ****************************** cigarette3 ()___)____________) ****************************** clowning *:o) ****************************** club bold ♣ ****************************** club regular ♧ ****************************** coffee now {zzz}°°°( -_-)>c[_] ****************************** coffee1 c[_] ****************************** coffee2 l_D ****************************** coffee3 l_P ****************************** coffee4 l_B ****************************** computer mouse [E} ****************************** concerned (@_@) ****************************** confused scratch (⊙.☉)7 ****************************** confused1 :-/ ****************************** confused10 (*'__'*) ****************************** confused2 :-\ ****************************** confused3 (°~°) ****************************** confused4 ^^' ****************************** confused5 é_è ****************************** confused6 (˚ㄥ_˚) ****************************** confused7 (; ͡°_ʖ ͡°) ****************************** confused8 (´•_•`) ****************************** confused9 (ˇ_ˇ’!l) ****************************** crab (\|) ._. 
(|/) ****************************** crayons ((̲̅ ̲̅(̲̅C̲̅r̲̅a̲̅y̲̅o̲̅l̲̲̅̅a̲̅( ̲̅((> ****************************** crazy ミ●﹏☉ミ ****************************** creeper ƪ(ړײ)‎ƪ​​ ****************************** crotch shot \*/ ****************************** cry (╯︵╰,) ****************************** cry face 。゚( ゚இ‸இ゚)゚。 ****************************** cry troll ༼ ༎ຶ ෴ ༎ຶ༽ ****************************** crying1 Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰_Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰ ****************************** crying2 :~( ****************************** cthulhu ^(;,;)^ ****************************** cthulhu2 ( ;,;) ****************************** cup1 (▓ ****************************** cup2 \̅_̅/̷̚ʾ ****************************** cussing :-# ****************************** cute cat ^⨀ᴥ⨀^ ****************************** cute face (。◕‿◕。) ****************************** cute face2 (ღ˘◡˘ღ) ****************************** cute face3 ✿◕ ‿ ◕✿ ****************************** cute face4 ❀◕ ‿ ◕❀ ****************************** cute face5 (✿◠‿◠) ****************************** cute face6 (◕‿◕✿) ****************************** cute face7 ☾˙❀‿❀˙☽ ****************************** cute face8 (◡‿◡✿) ****************************** cute face9 ლ(╹◡╹ლ) ****************************** dab ヽ( •_)ᕗ ****************************** dagger cxxx|;:;:;:;:;:;:;:;> ****************************** dalek ̵̄/͇̐\ ****************************** damnyou (ᕗ ͠° ਊ ͠° )ᕗ ****************************** dance (>'-')> <('_'<) ^('_')\- \m/(-_-)\m/ <( '-')> \_( .")> <(._.)-` ****************************** dance2 ♪♪ ヽ(ˇ∀ˇ )ゞ ****************************** dancee ♪┏(°.°)┛┗(°.°)┓┗(°.°)┛┏(°.°)┓ ♪ ****************************** dancing ┌(ㆆ㉨ㆆ)ʃ ****************************** dancing people ‎(/.__.)/ \(.__.\) ****************************** dead child '-=,o ****************************** dead eyes ¿ⓧ_ⓧﮌ ****************************** dead girl '==>x\9 ****************************** dead guy '==xx\0 ****************************** dear god why щ(゚Д゚щ) ****************************** death star defense team |-o-| (-o-) |-o-| ****************************** decorate ▂▃▅▇█▓▒░۩۞۩ ۩۞۩░▒▓█▇▅▃▂ ****************************** depressed (︶︹︶) ****************************** derp ヘ(。□°)ヘ ****************************** devil ]:-> ****************************** devilish grin >:-D ****************************** devilish smile >:) ****************************** devious smile ಠ‿ಠ ****************************** dgaf ┌∩┐(◣ _ ◢)┌∩┐ ****************************** diamond bold ♦ ****************************** diamond regular ♢ ****************************** dice [: :] ****************************** dick 8====D ****************************** disagree ٩◔̯◔۶ ****************************** discombobulated ⊙﹏⊙ ****************************** dislike1 (Ծ‸ Ծ) ****************************** dislike2 ( ಠ ʖ̯ ಠ) ****************************** dna sample ~ ****************************** do you even lift bro? 
ᕦ(ò_óˇ)ᕤ ****************************** domino [: :|:::] ****************************** don king ==8-) ****************************** double flip ┻━┻ ︵ヽ(`Д´)ノ︵ ┻━┻ ****************************** drowning 人人人ヾ( ;×o×)〃 人人人 ****************************** druling1 :-... ****************************** druling2 :-P~~~ ****************************** drunkenness ヽ(´ー`)┌ ****************************** dude glasses1 @[O],[O] ****************************** dude glasses2 @(o),(o) ****************************** dummy <-|-'_'-|-> ****************************** dunno ¯\(°_o)/¯ ****************************** dunno2 \_(シ)_/ ****************************** dunno3 └㋡┘ ****************************** dunno4 ╘㋡╛ ****************************** dunno5 ٩㋡۶ ****************************** eastbound fish ><((((> ****************************** eaten apple [===]-' ****************************** eds dick 8=D ****************************** eeriemob (-(-_-(-_(-_(-_-)_-)-_-)_-)_-)-) ****************************** electrocardiogram1 √v^√v^√v^√v^√♥ ****************************** electrocardiogram2 v^v^v^v^√\/♥ ****************************** electrocardiogram3 /\/\/\/\/\/\/\/\/\/\/\v^♥ ****************************** electrocardiogram4 √v^√v^♥√v^√v^√ ****************************** elephant °j°m ****************************** emo (///_ ;) ****************************** emo dance ヾ(-_- )ゞ ****************************** energy つ ◕_◕ ༽つ つ ◕_◕ ༽つ ****************************** envelope ✉ ****************************** epic gun ︻┳デ═— ****************************** equalizer ▇ ▅ █ ▅ ▇ ▂ ▃ ▁ ▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇ ****************************** eric >--) ) ) )*> ****************************** error (╯°□°)╯︵ ɹoɹɹƎ ****************************** exchange (╯°□°)╯︵ ǝƃuɐɥɔxǝ ****************************** excited ☜(⌒▽⌒)☞ ****************************** exorcism ح(•̀ж•́)ง † ****************************** eye closed (╯_╰) ****************************** eye roll ⥀.⥀ ****************************** eyes ℃ↂ_ↂↃ ****************************** face •|龴◡龴|• ****************************** facepalm (>ლ) ****************************** fail o(╥﹏╥)o ****************************** fart (ˆ⺫ˆ๑)<3 ****************************** fat ass (__!__) ****************************** faydre (U) [^_^] (U) ****************************** feel perky (`・ω・´) ****************************** fido V•ᴥ•V ****************************** fight (ง'̀-'́)ง ****************************** finger1 ╭∩╮(Ο_Ο)╭∩╮ ****************************** finger2 ┌∩┐(◣_◢)┌∩┐ ****************************** finger3 ಠ︵ಠ凸 ****************************** finger4 ┌∩┐(>_<)┌∩┐ ****************************** finn | (• ◡•)| ****************************** fish invasion ›(̠̄:̠̄c ›(̠̄:̠̄c (¦Ҝ (¦Ҝ ҉ - - - ¦̺͆¦ ▪▌ ****************************** fish skeleton1 >-}-}-}-> ****************************** fish skeleton2 >++('> ****************************** fish swim ¸.·´¯`·.´¯`·.¸¸.·´¯`·.¸><(((º> ****************************** fish1 ><(((('> ****************************** fish2 ><> ****************************** fish3 `·.¸¸ ><((((º>.·´¯`·><((((º> ****************************** fish4 ><> ><> ****************************** fish5 <>< ****************************** fish6 <`)))>< ****************************** fisticuffs ლ(`ー´ლ) ****************************** flex ᕙ(⇀‸↼‶)ᕗ ****************************** flip friend (ノಠ ∩ಠ)ノ彡( \o°o)\ ****************************** fly away ⁽⁽ଘ( ˊᵕˋ )ଓ⁾⁾ ****************************** flying ح˚௰˚づ ****************************** fork ---= ****************************** formula1 car \ō͡≡o˞̶ 
****************************** fox -^^,--,~ ****************************** french kiss :-XÞ ****************************** frown (ღ˘⌣˘ღ) ****************************** fu (ಠ_ಠ)┌∩┐ ****************************** fuck off t(-.-t) ****************************** fuck you nlm (-_-) mln ****************************** fuck you2 (° ͜ʖ͡°)╭∩╮ ****************************** fuckall ╭∩╮(︶︿︶)╭∩╮ ****************************** full mouth :-I ****************************** fungry Σ_(꒪ཀ꒪」∠)_ ****************************** ghost ‹’’›(Ͼ˳Ͽ)‹’’› ****************************** gimme ༼ つ ◕_◕ ༽つ ****************************** glasses -@-@- ****************************** glasses2 ᒡ◯ᵔ◯ᒢ ****************************** glitter (*・‿・)ノ⌒*:・゚✧ ****************************** go away bear ╭∩╮ʕ•ᴥ•ʔ╭∩╮ ****************************** gotit (☞゚∀゚)☞ ****************************** gtalk fit (•̪̀●́)=ε/̵͇̿̿/'̿̿ ̿ ̿̿ N --------{---(@ ****************************** guitar c====(=#O| ) ~~ ♬·¯·♩¸¸♪·¯·♫¸ ****************************** gun1 ︻╦╤─ ****************************** gun2 ︻デ═一 ****************************** gun3 ╦̵̵̿╤─ ҉ ~ • ****************************** gun4 ︻╦̵̵͇̿̿̿̿╤── ****************************** hacksaw [|^^^^^^^ ****************************** hairstyle ⨌⨀_⨀⨌ ****************************** hal @_'-' ****************************** hammer #== ****************************** happy ۜ\(סּںסּَ` )/ۜ ****************************** happy birthday 1 ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ ****************************** happy face ヽ(´▽`)/ ****************************** happy hug \(ᵔᵕᵔ)/ ****************************** happy square 【ツ】 ****************************** happy10 (´ツ`) ****************************** happy11 ( ^◡^)っ ****************************** happy12 ┏(^0^)┛┗(^0^)┓ ****************************** happy13 (°⌣°) ****************************** happy14 ٩(^‿^)۶ ****************************** happy15 (•‿•) ****************************** happy16 ó‿ó ****************************** happy17 ٩◔‿◔۶ ****************************** happy18 ಠ◡ಠ ****************************** happy19 ●‿● ****************************** happy2 ⎦˚◡˚⎣ ****************************** happy20 ( '‿' ) ****************************** happy21 ^‿^ ****************************** happy22 ┌( ಠ‿ಠ)┘ ****************************** happy23 (˘◡˘) ****************************** happy24 ☯‿☯ ****************************** happy25 \(• ◡ •)/ ****************************** happy26 ( ͡ʘ ͜ʖ ͡ʘ) ****************************** happy27 ( ͡• ͜ʖ ͡• ) ****************************** happy3 ㋡ ****************************** happy4 ^_^ ****************************** happy5 [^_^] ****************************** happy6 (ツ) ****************************** happy7 【シ】 ****************************** happy8 ㋛ ****************************** happy9 (シ) ****************************** head shot ->~∑≥_≤) ****************************** headphone1 d[-_-]b ****************************** headphone2 d(-_-)b ****************************** headphone3 (W) ****************************** heart bold ♥ ****************************** heart regular ♡ ****************************** heart1 »-(¯`·.·´¯)-> ****************************** heart2 ♡♡ ****************************** heart3 <3 ****************************** hell yeah (òÓ,)_\,,/ ****************************** hello (ʘ‿ʘ)╯ ****************************** hello2 (ツ)ノ ****************************** help ٩(͡๏̯͡๏)۶ ****************************** high five ( ⌒o⌒)人(⌒-⌒ )v ****************************** hitchhicking (งツ)ว ****************************** homer (_8(|) ****************************** 
homer simpson =(:o) ****************************** honeycute ❤◦.¸¸. ◦✿ ****************************** house __̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___ ****************************** hoxom h(o x o )m ****************************** hug me (っ◕‿◕)っ ****************************** hugger (づ ̄ ³ ̄)づ ****************************** huhu █▬█ █▄█ █▬█ █▄█ ****************************** hybrix ʕʘ̅͜ʘ̅ʔ ****************************** i dont care ╭∩╮(︶︿︶)╭∩╮ ****************************** i kill you ̿ ̿̿'̿̿\̵͇̿̿\=(•̪●)=/̵͇̿̿/'̿̿ ̿ ̿ ****************************** im a hugger (⊃。•́‿•̀。)⊃ ****************************** infinity (X) ****************************** injured (҂◡_◡) ****************************** inlove (✿ ♥‿♥) ****************************** innocent face ʘ‿ʘ ****************************** japanese lion face °‿‿° ****************************** jaymz (•̪●)==ε/̵͇̿​̿/’̿’̿ ̿ ̿̿ `(•.°)~ ****************************** jazz musician ヽ(⌐■_■)ノ♪♬ ****************************** john lennon ((ºjº)) ****************************** joker1 🂠 ****************************** joker2 🃟 ****************************** joker3 🃏 ****************************** joker4 🃠 ****************************** jokeranonimous ╭∩╮ (òÓ,) ╭∩╮ ****************************** jokeranonimous2 ╭∩╮(ô¿ô)╭∩╮ ****************************** joy n_n ****************************** judgemental \{ಠʖಠ\} ****************************** judging ( ఠ ͟ʖ ఠ) ****************************** kablewee ̿' ̿'\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/'̿'̿ ̿ ****************************** killer (⌐■_■)--︻╦╤─ - - - (╥﹏╥) ****************************** kilroy was here " Ü " ****************************** king -_- ****************************** kirby (つ -‘ _ ‘- )つ ****************************** kirby dance <(''<) <( ' ' )> (> '')> ****************************** kiss (o'3'o) ****************************** kiss my ass (_x_) ****************************** kiss2 (︶ε︶メ) ****************************** kiss3 ╮(︶ε︶メ)╭ ****************************** kissing ( ˘ ³˘)♥ ****************************** kissing2 ( ˘з˘)ε˘`) ****************************** kissing3 (~˘з˘)~~(˘ε˘~) ****************************** kissing4 (っ˘з(O.O )♥ ****************************** kissing5 (`˘з(•˘⌣˘•) ****************************** kissing6 (っ˘з(˘⌣˘ ) ****************************** kitty =^. .^= ****************************** kitty emote ᵒᴥᵒ# ****************************** knife1 )xxxxx[;;;;;;;;;> ****************************** knife2 )xxx[::::::::::> ****************************** koala @( * O * )@ ****************************** kokain ̿ ̿' ̿'\̵͇̿̿\з=(•̪●)=ε/̵͇̿̿/'̿''̿ ̿ ****************************** kyubey /人 ⌒ ‿‿ ⌒ 人\ ****************************** kyubey2 /人 ◕‿‿◕ 人\ ****************************** laughing (^▽^) ****************************** lenny ( ͡° ͜ʖ ͡°) ****************************** licking lips :-9 ****************************** line brack ●▬▬▬▬๑۩۩๑▬▬▬▬▬● ****************************** linqan :Q___ ****************************** listening to headphones ◖ᵔᴥᵔ◗ ♪ ♫ ****************************** loading1 █▒▒▒▒▒▒▒▒▒ ****************************** loading2 ███▒▒▒▒▒▒▒ ****************************** loading3 █████▒▒▒▒▒ ****************************** loading4 ███████▒▒▒ ****************************** loading5 █████████▒ ****************************** loading6 ██████████ ****************************** loch ness monster _mmmP ****************************** long rose ---------------------{{---<((@) ****************************** looking down (._.) 
****************************** looking face ô¿ô ****************************** love ⓛⓞⓥⓔ ****************************** love in my eye1 (♥_♥) ****************************** love in my eye2 (。❤◡❤。) ****************************** love in my eye3 (❤◡❤) ****************************** love2 ~♡ⓛⓞⓥⓔ♡~ ****************************** love3 ♥‿♥ ****************************** love4 (Ɔ ˘⌣˘)♥(˘⌣˘ C) ****************************** machinegun ,==,-- ****************************** mad òÓ ****************************** mad10 (•ˋ _ ˊ•) ****************************** mad2 (ノ`Д ́)ノ ****************************** mad3 >_< ****************************** mad4 ~_~ ****************************** mad5 Ծ_Ծ ****************************** mad6 ⋋_⋌ ****************************** mad7 (ノ≥∇≤)ノ ****************************** mad8 {{{(>_<)}}} ****************************** mad9 ƪ(`▿▿▿▿´ƪ) ****************************** mail box |M|/ ****************************** man spider /╲/\༼ *ಠ 益 ಠ* ༽/\╱\ ****************************** man tears ಥ_ಥ ****************************** mango ) _ _ __/°°¬ ****************************** marge simpson ()()():| ****************************** marshmallows -()_)--()_)--- ****************************** med ب_ب ****************************** med man (⋗_⋖) ****************************** meditation ‿( ́ ̵ _-`)‿ ****************************** meep \(°^°)/ ****************************** melp1 (<>..<>) ****************************** melp2 (<(<>(<>.(<>..<>).<>)<>)>) ****************************** meow ฅ^•ﻌ•^ฅ ****************************** metal \m/_(>_<)_\m/ ****************************** mini penis =D ****************************** monkey @('_')@ ****************************** monocle (╭ರ_•́) ****************************** monster ٩(̾●̮̮̃̾•̃̾)۶ ****************************** monster2 ٩(- ̮̮̃-̃)۶ ****************************** mouse1 ----{,_,"> ****************************** mouse2 . ~~(__^·> ****************************** mouse3 <·^__)~~ . ****************************** mouse4 —-{,_,”><",_,}---- ****************************** mouse5 <:3 )~~~~ ****************************** mouse6 <^__)~ ****************************** mouse7 ~(__^> ****************************** mtmtika :o + :p = 69 ****************************** myancat mmmyyyyy<⦿⽘⦿>aaaannn ****************************** nathan ♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪ ****************************** needle1 ┣▇▇▇═─ ****************************** needle2 |==|iiii|>----- ****************************** neo (⌐■_■)--︻╦╤─ - - - ****************************** nerd ::( ****************************** no support 乁( ◔ ౪◔)「 ┑( ̄Д  ̄)┍ ****************************** nope t(-_-t) ****************************** nose \˚ㄥ˚\ ****************************** nose2 |'L'| ****************************** oar ===========(8888) ****************************** old lady boobs |\o/\o/| ****************************** opera ヾ(´〇`)ノ♪♪♪ ****************************** owlkin (ᾢȍˬȍ)ᾢ ļ ļ ļ ļ ļ ****************************** pac man ᗧ···ᗣ···ᗣ·· ****************************** palm tree 'T` ****************************** panda ヽ( ̄(エ) ̄)ノ ****************************** party time ┏(-_-)┛┗(-_- )┓┗(-_-)┛┏(-_-)┓ ****************************** peace yo! 
(‾⌣‾)♉ ****************************** peepers ಠಠ ****************************** penis 8===D ****************************** penis2 ○○)=======o) ****************************** perky ( ๏ Y ๏ ) ****************************** pictou |\_______(#*#)_______/| ****************************** pie fight ---=======[} ****************************** pig1 ^(*(oo)*)^ ****************************** pig2 ༼☉ɷ⊙༽ ****************************** piggy (∩◕(oo)◕∩) ****************************** ping pong ( •_•)O*¯`·.¸.·´¯`°Q(•_• ) ****************************** pipe ====\_/ ****************************** pirate ✌(◕‿-)✌ ****************************** pistols1 ¯¯̿̿¯̿̿'̿̿̿̿̿̿̿'̿̿'̿̿̿̿̿'̿̿̿)͇̿̿)̿̿̿̿ '̿̿̿̿̿̿\̵͇̿̿\=(•̪̀●́)=o/̵͇̿̿/'̿̿ ̿ ̿̿ ****************************** pistols2 ̿' ̿'\̵͇̿̿\з=(◕_◕)=ε/̵͇̿̿/'̿'̿ ̿ ****************************** pistols3 ̿̿ ̿̿ ̿’̿̿’̿\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/’̿̿’̿ ̿ ̿̿ ̿̿ ****************************** pistols4 (•̪●)=ε/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿ ̿”” ****************************** pistols5 ̿̿ ̿̿ ̿̿ ̿'̿'\̵͇̿̿\з=( ͠° ͟ʖ ͡°)=ε/̵͇̿̿/'̿̿ ̿ ̿ ̿ ̿ ̿ ****************************** playing cards [♥]]] [♦]]] [♣]]] [♠]]] ****************************** playing cards clubs [♣]]] ****************************** playing cards clubs waterfall 🃏🃑🃒🃓🃔🃕🃖🃗🃘🃙🃚🃛🃜🃝🃞 ****************************** playing cards diamonds [♦]]] ****************************** playing cards diamonds waterfall 🃟🃁🃂🃃🃄🃅🃆🃇🃈🃉🃊🃋🃌🃍🃎 ****************************** playing cards hearts [♥]]] ****************************** playing cards hearts waterfall 🂠🂱🂲🂳🂴🂵🂶🂷🂸🂹🂺🂻🂼🂽🂾 ****************************** playing cards spades [♠]]] ****************************** playing cards spades waterfall 🃠🂡🂢🂣🂤🂥🂦🂧🂨🂩🂪🂫🂬🂭🂮 ****************************** playing cards waterfall 🂱🂲🂳🂴🂵🂶🂷🂸🂹🂺🂻🂼🂽🂾🃁🃂🃃🃄🃅🃆🃇🃈🃉🃊🃋🃌🃍🃎🃑🃒🃓🃔🃕🃖🃗🃘🃙🃚🃛🃜🃝🃞🂡🂢🂣🂤🂥🂦🂧🂨🂩🂪🂫🂬🂭🂮🂠🃏🃟 ****************************** playing cards waterfall (trump) 🃠🃡🃢🃣🃤🃥🃦🃧🃨🃩🃪🃫🃬🃭🃮🃯🃰🃱🃲🃳🃴🃵 ****************************** playing in snow (╯^□^)╯︵ ❄☃❄ ****************************** point (☞゚ヮ゚)☞ ****************************** polar bear ˁ˚ᴥ˚ˀ ****************************** possessed <>_<> ****************************** power lines TTT ****************************** pretty eyes ఠ_ఠ ****************************** professor """⌐(ಠ۾ಠ)¬""" ****************************** puls ––•–√\/––√\/––•–– ****************************** punch O=('-'Q) ****************************** pursing lips :-" ****************************** put the table back ┬─┬ ノ( ゜-゜ノ) ****************************** rak /⦿L⦿\ ****************************** rare ┌ಠ_ಠ)┌∩┐ ᶠᶸᶜᵏ♥ᵧₒᵤ ****************************** ready to cry :-} ****************************** real face ( ͡° ͜ʖ ͡°) ****************************** really mad >:-I ****************************** really sad :-C ****************************** regular ass (_!_) ****************************** religious ☪ ✡ † ☨ ✞ ✝ ☥ ☦ ☓ ♁ ☩ ****************************** resting my eyes ᴖ̮ ̮ᴖ ****************************** roadblock X+X+X+X+X ****************************** robber -╤╗_(◙◙)_╔╤- - - - \o/ \o/ \o/ ****************************** robot boy ◖(◣☩◢)◗ ****************************** robot1 d[ o_0 ]b ****************************** robot2 c[○┬●]כ ****************************** robot3 {•̃_•̃} ****************************** rock on1 \,,/(^_^)\,,/ ****************************** rock on2 \m/(-_-)\m/ ****************************** rocket ∙∙∙∙∙·▫▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ☼)===> ****************************** roke _\m/ ****************************** rope ╚(▲_▲)╝ ****************************** rose1 --------{---(@ ****************************** rose2 @}}>----- 
****************************** rose3 @-->--->--- ****************************** rose4 @}~}~~~ ****************************** rose5 @-}-- ****************************** rose6 @)}---^----- ****************************** rose7 @->-->--- ****************************** round bird ,(u°)> ****************************** round cat ~(^._.) ****************************** running ε=ε=ε=┌(;*´Д`)ノ ****************************** russian boobs [.][.] ****************************** ryans dick 8======D ****************************** sad and confused ¯\_(⊙︿⊙)_/¯ ****************************** sad and crying (ᵟຶ︵ ᵟຶ) ****************************** sad face (ಥ⌣ಥ) ****************************** sad1 ε(´סּ︵סּ`)з ****************************** sad2 (✖╭╮✖) ****************************** sad3 (◑﹏◐) ****************************** sad4 (◕_◕) ****************************** sad5 (´ᗣ`) ****************************** sad6 Y_Y ****************************** sat '(◣_◢)' ****************************** satan ↑_(ΦwΦ;)Ψ ****************************** satisfied (◠﹏◠) ****************************** scissors ✄ ****************************** screaming :-@ ****************************** seal (ᵔᴥᵔ) ****************************** sean the sheep <('--')> ****************************** sex symbol ◢♂◣◥♀◤◢♂◣◥♀◤ ****************************** shark ~~~~~~^~~~~~ ****************************** shark attack ~~~~~~\o/~~~~~/\~~~~~ ****************************** shark face ( ˇ෴ˇ ) ****************************** sheep °l°(,,,,); ****************************** shocked1 (∩╹□╹∩) ****************************** shocked2 :-O ****************************** shrug ¯\_(ツ)_/¯ ****************************** shy (๑•́ ₃ •̀๑) ****************************** singing d(^o^)b¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** singing2 ♪└( ̄◇ ̄)┐♪ ****************************** sky free ѧѦ ѧ ︵͡︵ ̢ ̱ ̧̱ι̵̱̊ι̶̨̱ ̶̱ ︵ Ѧѧ ︵͡ ︵ ѧ Ѧ ̵̗̊o̵̖ ︵ ѦѦ ѧ ****************************** sleeping (-.-)Zzz... ****************************** sleeping baby [{-_-}] ZZZzz zz z... ****************************** sleepy 눈_눈 ****************************** sleepy coffee ( -_-)旦~ ****************************** slenderman ϟƖΣNd€RMαN ****************************** smile :-) ****************************** smirk :-, ****************************** smooth (づ  ̄ ³ ̄)づ ⓈⓂⓄⓄⓉⒽ ****************************** smug bastard (‾⌣‾) ****************************** snail1 '-'_@_ ****************************** snail2 '\Q___ ****************************** sniper rifle ︻デ┳═ー ****************************** sniperstars ✯╾━╤デ╦︻✯ ****************************** snowing ✲´*。.❄¨¯`*✲。❄。*。 ****************************** snowman1 ☃ ****************************** snowman2 { }( : ^ )( """" )( ) ****************************** sophie <XX""XX> ****************************** sorreh bro (◢_◣) ****************************** spade bold ♠ ****************************** spade regular ♤ ****************************** sparkling heart -`ღ´- ****************************** spear >>-;;;------;;--> ****************************** spell cast ╰( ⁰ ਊ ⁰ )━☆゚.*・。゚ ****************************** sperm ~~o ****************************** spider1 //O\ ****************************** spider2 /\oo/\ ****************************** spider3 ///\oo/\\\ ****************************** spider4 /╲/\╭ºoꍘoº╮/\╱\ ****************************** spot ( . Y . 
) ****************************** squee ヾ(◎o◎,,;)ノ ****************************** squid くコ:彡 ****************************** squigle with spirals 6\9 ****************************** srs face (ಠ_ಠ) ****************************** star in my eyes <*_*> ****************************** staring ٩(๏_๏)۶ ****************************** stars ✌⊂(✰‿✰)つ✌ ****************************** stars2 ⋆ ✢ ✣ ✤ ✥ ✦ ✧ ✩ ✪ ✫ ✬ ✭ ✮ ✯ ✰ ★ ****************************** stealth fighter -^- ****************************** stranger danger (づ。◕‿‿◕。)づ ****************************** strut ᕕ( ᐛ )ᕗ ****************************** stunna shades (っ▀¯▀)つ ****************************** sunglasses1 (•_•)>⌐■-■ (⌐■_■) ****************************** sunglasses2 B-) ****************************** sunny day ☁ ▅▒░☼‿☼░▒▅ ☁ ****************************** superman -^mOm^- ****************************** superman logo /s\ ****************************** surprised1 =:-o ****************************** surprised10 (⊙.◎) ****************************** surprised11 ๏_๏ ****************************** surprised12 (˚-˚) ****************************** surprised13 ˚o˚ ****************************** surprised14 (O.O) ****************************** surprised15 ( ゚o゚) ****************************** surprised16 ◉_◉ ****************************** surprised17 【•】_【•】 ****************************** surprised18 (•ิ_•) ****************************** surprised19 ⊙⊙ ****************************** surprised2 ( ゚Д゚) ****************************** surprised20 ͡๏_͡๏ ****************************** surprised3 (O_o) ****************************** surprised4 (º_•) ****************************** surprised5 (º.º) ****************************** surprised6 ⊙▃⊙ ****************************** surprised7 O.o ****************************** surprised8 ●_● ****************************** surprised9 (⊙̃.o) ****************************** swim _(ッ)>_// ****************************** swim2 ー(ッ)」 ****************************** swim3 _(ッ)へ ****************************** sword1 (===||:::::::::::::::> ****************************** sword10 O==I======> ****************************** sword2 ▬▬ι═══════ﺤ -═══════ι▬▬ ****************************** sword3 ס₪₪₪₪§|(Ξ≥≤≥≤≥≤ΞΞΞΞΞΞΞΞΞΞ> ****************************** sword4 |O/////[{:;:;:;:;:;:;:;:;> ****************************** sword5 <%%%%|==========> ****************************** sword6 o()xxxx[{::::::::::::::::::::::::::::::::::> ****************************** sword7 o==[]::::::::::::::::> ****************************** sword8 ▬▬ι═══════> ****************************** sword9 <═══════ι▬▬ ****************************** table flip (╯°□°)╯︵ ┻━┻ ****************************** table flip10 (/ .□.)\ ︵╰(゜Д゜)╯︵ /(.□. \) ****************************** table flip2 (ノಥ益ಥ)ノ ┻━┻ ****************************** table flip3 ┬─┬ノ( º _ ºノ) ****************************** table flip4 (ノಠ益ಠ)ノ彡┻━┻ ****************************** table flip5 ┬──┬ ¯\_(ツ) ****************************** table flip6 ┻━┻ ︵ ¯\(ツ)/¯ ︵ ┻━┻ ****************************** table flip7 (╯°□°)╯︵ ┻━┻ ︵ ╯(°□° ╯) ****************************** table flip8 (╯°Д°)╯︵ /(.□ . 
\) ****************************** table flip9 (ノ^_^)ノ┻━┻ ┬─┬ ノ( ^_^ノ) ****************************** taking a dump (⩾﹏⩽) ****************************** teddy ˁ(⦿ᴥ⦿)ˀ ****************************** teepee /|\ ****************************** telephone ε(๏_๏)з】 ****************************** tent1 //\ ****************************** tent2 /\\ ****************************** tgif “ヽ(´▽`)ノ” ****************************** thanks \(^-^)/ ****************************** things that can_t be unseen ♨_♨ ****************************** this is areku d(^o^)b ****************************** tidy up ┬─┬⃰͡ (ᵔᵕᵔ͜ ) ****************************** tie-fighter |—O—| ****************************** tired ###Markdown ART Version : 4.5 ###Code from art import * ###Output _____no_output_____ ###Markdown Art Counter ###Code ART_COUNTER ###Output _____no_output_____ ###Markdown Art List ###Code art_list() ###Output 100$ [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** 3 ᕙ༼ ,,ԾܫԾ,, ༽ᕗ ****************************** 5 ᕙ༼ ,,இܫஇ,, ༽ᕗ ****************************** 9/11 truth ✈__✈ █ █ ▄ ****************************** acid ⊂(◉‿◉)つ ****************************** afraid ( ゚ Д゚) ****************************** airplane1 ‛¯¯٭٭¯¯(▫▫)¯¯٭٭¯¯’ ****************************** airplane2 ✈ ****************************** ak-47 ︻┳デ═— ****************************** alien ::) ****************************** aliens (<>..<>) ****************************** almost cared ╰╏ ◉ 〜 ◉ ╏╯ ****************************** american money1 [($)] ****************************** american money2 [̲̅$̲̅(̲̅1̲̅)̲̅$̲̅] ****************************** american money3 [̲̅$̲̅(̲̅5̲̅)̲̅$̲̅] ****************************** american money4 [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** american money5 [̲̅$̲̅(̲̅2οο̲̅)̲̅$̲̅] ****************************** angel1 ^i^ ****************************** angel2 O:-) ****************************** angry ლ(ಠ益ಠ)ლ ****************************** angry face (⋟﹏⋞) ****************************** angry2 ( ͠° ͟ʖ ͡°) ****************************** ankush ︻デ┳═ー*----* ****************************** arrow1 »»---------------------► ****************************** arrow2 XXX--------> ****************************** arrowhead ⤜(ⱺ ʖ̯ⱺ)⤏ ****************************** atish (| - _ - |) ****************************** awesome <:3 )~~~ ****************************** awkward •͡˘㇁•͡˘ ****************************** bad hair1 =:-) ****************************** bad hair2 =:-( ****************************** badass (⌐■_■)--︻╦╤─ - - - ****************************** bagel nln >_< nln ****************************** band aid (̲̅:̲̅:̲̅:̲̅[̲̅ ̲̅]̲̅:̲̅:̲̅:̲̅ ) ****************************** barbell ▐━━━━━▌ ****************************** barcode1 █║▌│ █│║▌ ║││█║▌ │║║█║ │║║█║ ****************************** barcode2 ║█║▌║█║▌│║▌║▌█║ ****************************** baseball fan q:o) ****************************** bat1 ^O^ ****************************** bat2 ^v^ ****************************** bautista (╯°_°)╯︵ ━━━ ****************************** bear ʕ•ᴥ•ʔ ****************************** because ∵ ****************************** bee ¸.·´¯`·¸¸.·´¯`·.¸.-<\^}0=: ****************************** being draged ╰(◣﹏◢)╯ ****************************** bender ¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦ ****************************** big eyes ⺌∅‿∅⺌ ****************************** big kiss :-X ****************************** big nose ˚∆˚ ****************************** big smile :-D ****************************** bird (⌒▽⌒) ****************************** birds ~(‾▿‾)~ ****************************** 
blackeye 0__# ****************************** bomb !!( ’ ‘)ノノ⌒●~* ****************************** boobies (. )( .) ****************************** boobs (.)(.) ****************************** boombox1 ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♫♪ ****************************** boombox2 ♫♪ |̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅| ♫♪ ****************************** british money [£::] ****************************** buck teeth :-B ****************************** bugs bunny E=B ****************************** bullshit |3ᵕᶦᶦᶳᶣᶨᶵ ****************************** bunny (\_/) ****************************** butt (‿|‿) ****************************** butterfly Ƹ̵̡Ӝ̵̨̄Ʒ ****************************** camera [◉"] ****************************** canoe .,.,\______/,..,., ****************************** car `o##o> ****************************** car race ∙،°. ˘Ô≈ôﺣ » » » ****************************** care crowd (-(-_(-_-)_-)-) ****************************** carpet roll @__ ****************************** cassette1 |[●▪▪●]| ****************************** cassette2 [¯ↂ■■ↂ¯] ****************************** cat face ⦿⽘⦿ ****************************** cat smile ≧◔◡◔≦ ****************************** cat1 =^..^= ****************************** cat2 龴ↀ◡ↀ龴 ****************************** cat3 ^.--.^ ****************************** caterpillar ,/\,/\,/\,/\,/\,/\,o ****************************** catlenny ( ͡° ᴥ ͡°) ****************************** chainsword |O/////[{:;:;:;:;:;:;:;:;> ****************************** chair ╦╣ ****************************** charly +:) ****************************** cheer ^(¤o¤)^ ****************************** chess ♞▀▄▀▄♝▀▄ ****************************** chess pieces ♚ ♛ ♜ ♝ ♞ ♟ ♔ ♕ ♖ ♗ ♘ ♙ ****************************** chu (´ε` ) ****************************** cigarette1 (̅_̅_̅_̅(̅_̅_̅_̅_̅_̅_̅_̅_̅̅_̅()ڪے ****************************** cigarette2 (____((____________()~~~ ****************************** cigarette3 ()___)____________) ****************************** clowning *:o) ****************************** club bold ♣ ****************************** club regular ♧ ****************************** coffee now {zzz}°°°( -_-)>c[_] ****************************** coffee1 c[_] ****************************** coffee2 l_D ****************************** coffee3 l_P ****************************** coffee4 l_B ****************************** computer mouse [E} ****************************** concerned (@_@) ****************************** confused1 :-/ ****************************** confused2 :-\ ****************************** crab (\|) ._. 
(|/) ****************************** crayons ((̲̅ ̲̅(̲̅C̲̅r̲̅a̲̅y̲̅o̲̅l̲̲̅̅a̲̅( ̲̅((> ****************************** crotch shot \*/ ****************************** cry (╯︵╰,) ****************************** crying1 Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰_Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰ ****************************** crying2 :~( ****************************** cthulhu ^(;,;)^ ****************************** cup1 (▓ ****************************** cup2 \̅_̅/̷̚ʾ ****************************** cussing :-# ****************************** cute cat ^⨀ᴥ⨀^ ****************************** dab ヽ( •_)ᕗ ****************************** dagger cxxx|;:;:;:;:;:;:;:;> ****************************** dalek ̵̄/͇̐\ ****************************** damnyou (ᕗ ͠° ਊ ͠° )ᕗ ****************************** dance (>'-')> <('_'<) ^('_')\- \m/(-_-)\m/ <( '-')> \_( .")> <(._.)-` ****************************** dancee ♪┏(°.°)┛┗(°.°)┓┗(°.°)┛┏(°.°)┓ ♪ ****************************** dancing people ‎(/.__.)/ \(.__.\) ****************************** dead child '-=,o ****************************** dead eyes ¿ⓧ_ⓧﮌ ****************************** dead girl '==>x\9 ****************************** dead guy '==xx\0 ****************************** death star defense team |-o-| (-o-) |-o-| ****************************** decorate ▂▃▅▇█▓▒░۩۞۩ ۩۞۩░▒▓█▇▅▃▂ ****************************** depressed (︶︹︶) ****************************** derp ヘ(。□°)ヘ ****************************** devil ]:-> ****************************** devilish grin >:-D ****************************** devilish smile >:) ****************************** dgaf ┌∩┐(◣ _ ◢)┌∩┐ ****************************** diamond bold ♦ ****************************** diamond regular ♢ ****************************** dice [: :] ****************************** dick 8====D ****************************** dna sample ~ ****************************** dog ˁ˚ᴥ˚ˀ ****************************** domino [: :|:::] ****************************** don fuller ╭∩╮(Ο_Ο)╭∩╮ ****************************** don king ==8-) ****************************** drowning 人人人ヾ( ;×o×)〃 人人人 ****************************** druling1 :-... 
****************************** druling2 :-P~~~ ****************************** drunkenness ヽ(´ー`)┌ ****************************** dude glasses1 @[O],[O] ****************************** dude glasses2 @(o),(o) ****************************** dummy <-|-'_'-|-> ****************************** dunno ¯\(°_o)/¯ ****************************** eastbound fish ><((((> ****************************** eaten apple [===]-' ****************************** eds dick 8=D ****************************** eeriemob (-(-_-(-_(-_(-_-)_-)-_-)_-)_-)-) ****************************** electrocardiogram1 √v^√v^√v^√v^√♥ ****************************** electrocardiogram2 v^v^v^v^√\/♥ ****************************** electrocardiogram3 /\/\/\/\/\/\/\/\/\/\/\v^♥ ****************************** electrocardiogram4 √v^√v^♥√v^√v^√ ****************************** elephant °j°m ****************************** emo (///_ ;) ****************************** energy つ ◕_◕ ༽つ つ ◕_◕ ༽つ ****************************** envelope ✉ ****************************** epic gun ︻┳デ═— ****************************** equalizer ▇ ▅ █ ▅ ▇ ▂ ▃ ▁ ▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇ ****************************** eric >--) ) ) )*> ****************************** error (╯°□°)╯︵ ɹoɹɹƎ ****************************** exchange (╯°□°)╯︵ ǝƃuɐɥɔxǝ ****************************** eye closed (╯_╰) ****************************** eyes ℃ↂ_ↂↃ ****************************** face •|龴◡龴|• ****************************** facepalm (>ლ) ****************************** fail o(╥﹏╥)o ****************************** fart (ˆ⺫ˆ๑)<3 ****************************** fat ass (__!__) ****************************** faydre (U) [^_^] (U) ****************************** finger1 ╭∩╮(Ο_Ο)╭∩╮ ****************************** finger2 ┌∩┐(◣_◢)┌∩┐ ****************************** finn | (• ◡•)| ****************************** fish invasion ›(̠̄:̠̄c ›(̠̄:̠̄c (¦Ҝ (¦Ҝ ҉ - - - ¦̺͆¦ ▪▌ ****************************** fish skeleton1 >-}-}-}-> ****************************** fish skeleton2 >++('> ****************************** fish swim ¸.·´¯`·.´¯`·.¸¸.·´¯`·.¸><(((º> ****************************** fish1 ><(((('> ****************************** fish2 ><> ****************************** fish3 `·.¸¸ ><((((º>.·´¯`·><((((º> ****************************** fish4 ><> ><> ****************************** fish5 <>< ****************************** fish6 <`)))>< ****************************** flex ᕙ(⇀‸↼‶)ᕗ ****************************** fork ---= ****************************** formula1 car \ō͡≡o˞̶ ****************************** fox -^^,--,~ ****************************** french kiss :-XÞ ****************************** frown (ღ˘⌣˘ღ) ****************************** fu (ಠ_ಠ)┌∩┐ ****************************** fuck off t(-.-t) ****************************** fuck you nlm (-_-) mln ****************************** fuckall ╭∩╮(︶︿︶)╭∩╮ ****************************** full mouth :-I ****************************** fungry Σ_(꒪ཀ꒪」∠)_ ****************************** ghost ‹’’›(Ͼ˳Ͽ)‹’’› ****************************** gimme ༼ つ ◕_◕ ༽つ ****************************** glasses -@-@- ****************************** glasses2 ᒡ◯ᵔ◯ᒢ ****************************** glitter (*・‿・)ノ⌒*:・゚✧ ****************************** go away bear ╭∩╮ʕ•ᴥ•ʔ╭∩╮ ****************************** gotit (☞゚∀゚)☞ ****************************** gtalk fit (•̪̀●́)=ε/̵͇̿̿/'̿̿ ̿ ̿̿ N --------{---(@ ****************************** guitar c====(=#O| ) ~~ ♬·¯·♩¸¸♪·¯·♫¸ ****************************** gun1 ︻╦╤─ ****************************** gun2 ︻デ═一 ****************************** gun3 ╦̵̵̿╤─ ҉ ~ • ****************************** hacksaw 
[|^^^^^^^ ****************************** hairstyle ⨌⨀_⨀⨌ ****************************** hal @_'-' ****************************** hammer #== ****************************** happy ۜ\(סּںסּَ` )/ۜ ****************************** happy birthday 1 ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ ****************************** happy square 【ツ】 ****************************** happy2 ⎦˚◡˚⎣ ****************************** happy3 ㋡ ****************************** happy4 ^_^ ****************************** happy5 [^_^] ****************************** head shot ->~∑≥_≤) ****************************** headphone1 d[-_-]b ****************************** headphone2 d(-_-)b ****************************** headphone3 (W) ****************************** heart bold ♥ ****************************** heart regular ♡ ****************************** heart1 »-(¯`·.·´¯)-> ****************************** heart2 ♡ ****************************** heart3 <3 ****************************** hell yeah (òÓ,)_\,,/ ****************************** hello (ʘ‿ʘ)╯ ****************************** help ٩(͡๏̯͡๏)۶ ****************************** high five ( ⌒o⌒)人(⌒-⌒ )v ****************************** homer (_8(|) ****************************** homer simpson =(:o) ****************************** honeycute ❤◦.¸¸. ◦✿ ****************************** house __̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___ ****************************** hoxom h(o x o )m ****************************** hug me (っ◕‿◕)っ ****************************** huhu █▬█ █▄█ █▬█ █▄█ ****************************** human •͡˘㇁•͡˘ ****************************** hybrix ʕʘ̅͜ʘ̅ʔ ****************************** i dont care ╭∩╮(︶︿︶)╭∩╮ ****************************** i kill you ̿ ̿̿'̿̿\̵͇̿̿\=(•̪●)=/̵͇̿̿/'̿̿ ̿ ̿ ****************************** infinity (X) ****************************** inlove (✿ ♥‿♥) ****************************** jaymz (•̪●)==ε/̵͇̿​̿/’̿’̿ ̿ ̿̿ `(•.°)~ ****************************** jazz musician ヽ(⌐■_■)ノ♪♬ ****************************** john lennon ((ºjº)) ****************************** jokeranonimous ╭∩╮ (òÓ,) ╭∩╮ ****************************** jokeranonimous2 ╭∩╮(ô¿ô)╭∩╮ ****************************** joy n_n ****************************** kablewee ̿' ̿'\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/'̿'̿ ̿ ****************************** killer (⌐■_■)--︻╦╤─ - - - (╥﹏╥) ****************************** kilroy was here " Ü " ****************************** king -_- ****************************** kirby (つ -‘ _ ‘- )つ ****************************** kirby dance <(''<) <( ' ' )> (> '')> ****************************** kiss (o'3'o) ****************************** kiss my ass (_x_) ****************************** kitty1 =^..^= ****************************** kitty2 =^. 
.^= ****************************** knife1 )xxxxx[;;;;;;;;;> ****************************** knife2 )xxx[::::::::::> ****************************** koala @( * O * )@ ****************************** kokain ̿ ̿' ̿'\̵͇̿̿\з=(•̪●)=ε/̵͇̿̿/'̿''̿ ̿ ****************************** kyubey /人 ⌒ ‿‿ ⌒ 人\ ****************************** kyubey2 /人 ◕‿‿◕ 人\ ****************************** laughing (^▽^) ****************************** lenny ( ͡° ͜ʖ ͡°) ****************************** licking lips :-9 ****************************** line brack ●▬▬▬▬๑۩۩๑▬▬▬▬▬● ****************************** linqan :Q___ ****************************** loading1 █▒▒▒▒▒▒▒▒▒ ****************************** loading2 ███▒▒▒▒▒▒▒ ****************************** loading3 █████▒▒▒▒▒ ****************************** loading4 ███████▒▒▒ ****************************** loading5 █████████▒ ****************************** loading6 ██████████ ****************************** loch ness monster _mmmP ****************************** long rose ---------------------{{---<((@) ****************************** looking face ô¿ô ****************************** love ⓛⓞⓥⓔ ****************************** love in my eye1 (♥_♥) ****************************** love in my eye2 (。❤◡❤。) ****************************** love in my eye3 (❤◡❤) ****************************** love you »-(¯`·.·´¯)-><-(¯`·.·´¯)-« ****************************** love2 ~♡ⓛⓞⓥⓔ♡~ ****************************** machinegun ,==,-- ****************************** mail box |M|/ ****************************** man spider /╲/\༼ *ಠ 益 ಠ* ༽/\╱\ ****************************** man tears ಥ_ಥ ****************************** mango ) _ _ __/°°¬ ****************************** marge simpson ()()():| ****************************** marshmallows -()_)--()_)--- ****************************** med ب_ب ****************************** med man (⋗_⋖) ****************************** meditation ‿( ́ ̵ _-`)‿ ****************************** meep \(°^°)/ ****************************** melp1 (<>..<>) ****************************** melp2 (<(<>(<>.(<>..<>).<>)<>)>) ****************************** message1 (¯`·._.·(¯`·._.· ·._.·´¯)·._.·´¯) ****************************** message2 ,.-~*´¨¯¨`*·~-.¸-(-,.-~*´¨¯¨`*·~-.¸ ****************************** metal \m/_(>_<)_\m/ ****************************** mini penis =D ****************************** mis mujeres (-(-_(-_-)_-)-) ****************************** monkey @('_')@ ****************************** monocle (╭ರ_•́) ****************************** monster ٩(̾●̮̮̃̾•̃̾)۶ ****************************** monster2 ٩(- ̮̮̃-̃)۶ ****************************** mouse1 ----{,_,"> ****************************** mouse2 . ~~(__^·> ****************************** mouse3 <·^__)~~ . 
****************************** mouse4 —-{,_,”><",_,}---- ****************************** mouse5 <:3 )~~~~ ****************************** mouse6 <^__)~ ****************************** mouse7 ~(__^> ****************************** mtmtika :o + :p = 69 ****************************** musical ¸¸♬·¯·♩¸¸♪·¯·♫¸¸¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** myancat mmmyyyyy<⦿⽘⦿>aaaannn ****************************** nathan ♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪ ****************************** needle1 ┣▇▇▇═─ ****************************** needle2 |==|iiii|>----- ****************************** neo (⌐■_■)--︻╦╤─ - - - ****************************** nerd ::( ****************************** nope t(-_-t) ****************************** nose \˚ㄥ˚\ ****************************** nose2 |'L'| ****************************** oar ===========(8888) ****************************** old lady boobs |\o/\o/| ****************************** owlkin (ᾢȍˬȍ)ᾢ ļ ļ ļ ļ ļ ****************************** pac man ᗧ···ᗣ···ᗣ·· ****************************** palm tree 'T` ****************************** panda ヽ( ̄(エ) ̄)ノ ****************************** party time ┏(-_-)┛┗(-_- )┓┗(-_-)┛┏(-_-)┓ ****************************** peace yo! (‾⌣‾)♉ ****************************** penis 8===D ****************************** penis2 ○○)=======o) ****************************** perky ( ๏ Y ๏ ) ****************************** pictou |\_______(#*#)_______/| ****************************** pie fight ---=======[} ****************************** pig1 ^(*(oo)*)^ ****************************** pig2 ༼☉ɷ⊙༽ ****************************** piggy (∩◕(oo)◕∩) ****************************** ping pong ( •_•)O*¯`·.¸.·´¯`°Q(•_• ) ****************************** pipe ====\_/ ****************************** pirate ✌(◕‿-)✌ ****************************** pistols1 ¯¯̿̿¯̿̿'̿̿̿̿̿̿̿'̿̿'̿̿̿̿̿'̿̿̿)͇̿̿)̿̿̿̿ '̿̿̿̿̿̿\̵͇̿̿\=(•̪̀●́)=o/̵͇̿̿/'̿̿ ̿ ̿̿ ****************************** pistols2 ̿' ̿'\̵͇̿̿\з=(◕_◕)=ε/̵͇̿̿/'̿'̿ ̿ ****************************** pistols3 ̿̿ ̿̿ ̿’̿̿’̿\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/’̿̿’̿ ̿ ̿̿ ̿̿ ****************************** playing cards [♥]]] [♦]]] [♣]]] [♠]]] ****************************** playing cards clubs [♣]]] ****************************** playing cards diamonds [♦]]] ****************************** playing cards hearts [♥]]] ****************************** playing cards spades [♠]]] ****************************** playing in snow (╯^□^)╯︵ ❄☃❄ ****************************** point (☞゚ヮ゚)☞ ****************************** polar bear ˁ˚ᴥ˚ˀ ****************************** possessed <>_<> ****************************** power lines TTT ****************************** professor """⌐(ಠ۾ಠ)¬""" ****************************** puls ––•–√\/––√\/––•–– ****************************** punch O=('-'Q) ****************************** pursing lips :-" ****************************** rak /⦿L⦿\ ****************************** rare ┌ಠ_ಠ)┌∩┐ ᶠᶸᶜᵏ♥ᵧₒᵤ ****************************** ready to cry :-} ****************************** real face ( ͡° ͜ʖ ͡°) ****************************** really mad >:-I ****************************** really sad :-C ****************************** regular ass (_!_) ****************************** religious ☪ ✡ † ☨ ✞ ✝ ☥ ☦ ☓ ♁ ☩ ****************************** roadblock X+X+X+X+X ****************************** robber -╤╗_(◙◙)_╔╤- - - - \o/ \o/ \o/ ****************************** robot boy ◖(◣☩◢)◗ ****************************** robot1 d[ o_0 ]b ****************************** robot2 c[○┬●]כ ****************************** rock on1 \,,/(^_^)\,,/ ****************************** rock on2 
\m/(-_-)\m/ ****************************** rocket ∙∙∙∙∙·▫▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ☼)===> ****************************** roke _\m/ ****************************** rope ╚(▲_▲)╝ ****************************** rose1 --------{---(@ ****************************** rose2 @}}>----- ****************************** rose3 @-->--->--- ****************************** rose4 @}~}~~~ ****************************** rose5 @-}-- ****************************** rose6 @)}---^----- ****************************** rose7 @->-->--- ****************************** round bird ,(u°)> ****************************** round cat ~(^._.) ****************************** russian boobs [.][.] ****************************** ryans dick 8======D ****************************** sad1 ε(´סּ︵סּ`)з ****************************** sad2 (✖╭╮✖) ****************************** sad3 (◑﹏◐) ****************************** sad4 (◕_◕) ****************************** sat '(◣_◢)' ****************************** satan ↑_(ΦwΦ;)Ψ ****************************** scissors ✄ ****************************** screaming :-@ ****************************** sean the sheep <('--')> ****************************** sex symbol ◢♂◣◥♀◤◢♂◣◥♀◤ ****************************** shark ~~~~~~^~~~~~ ****************************** shark attack ~~~~~~\o/~~~~~/\~~~~~ ****************************** sheep °l°(,,,,); ****************************** shocked1 (∩╹□╹∩) ****************************** shocked2 :-O ****************************** shrug ¯\_(ツ)_/¯ ****************************** singing d(^o^)b¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** singing2 ♪└( ̄◇ ̄)┐♪ ****************************** sky free ѧѦ ѧ ︵͡︵ ̢ ̱ ̧̱ι̵̱̊ι̶̨̱ ̶̱ ︵ Ѧѧ ︵͡ ︵ ѧ Ѧ ̵̗̊o̵̖ ︵ ѦѦ ѧ ****************************** sleeping (-.-)Zzz... ****************************** sleeping baby [{-_-}] ZZZzz zz z... ****************************** sleepy coffee ( -_-)旦~ ****************************** slenderman ϟƖΣNd€RMαN ****************************** smile :-) ****************************** smirk :-, ****************************** smooth (づ  ̄ ³ ̄)づ ⓈⓂⓄⓄⓉⒽ ****************************** smug bastard (‾⌣‾) ****************************** snail1 '-'_@_ ****************************** snail2 '\Q___ ****************************** sniper rifle ︻デ┳═ー ****************************** sniperstars ✯╾━╤デ╦︻✯ ****************************** snowing ✲´*。.❄¨¯`*✲。❄。*。 ****************************** snowman1 ☃ ****************************** snowman2 { }( : ^ )( """" )( ) ****************************** sophie <XX""XX> ****************************** sorreh bro (◢_◣) ****************************** spade bold ♠ ****************************** spade regular ♤ ****************************** sparkling heart -`ღ´- ****************************** spear >>-;;;------;;--> ****************************** spell cast ╰( ⁰ ਊ ⁰ )━☆゚.*・。゚ ****************************** sperm ~~o ****************************** spider1 //O\ ****************************** spider2 /\oo/\ ****************************** spider3 ///\oo/\\\ ****************************** spot ( . Y . 
) ****************************** squee ヾ(◎o◎,,;)ノ ****************************** squid くコ:彡 ****************************** squigle with spirals 6\9 ****************************** srs face (ಠ_ಠ) ****************************** star in my eyes <*_*> ****************************** stars ✌⊂(✰‿✰)つ✌ ****************************** stars in my eyes <*_*> ****************************** stars2 ⋆ ✢ ✣ ✤ ✥ ✦ ✧ ✩ ✪ ✫ ✬ ✭ ✮ ✯ ✰ ★ ****************************** stealth fighter -^- ****************************** sunglasses1 (•_•)>⌐■-■ (⌐■_■) ****************************** sunglasses2 B-) ****************************** sunny day ☁ ▅▒░☼‿☼░▒▅ ☁ ****************************** superman -^mOm^- ****************************** superman logo /s\ ****************************** surprised1 =:-o ****************************** surprised2 (O_o) ****************************** sword1 (===||:::::::::::::::> ****************************** sword10 O==I======> ****************************** sword2 ▬▬ι═══════ﺤ -═══════ι▬▬ ****************************** sword3 ס₪₪₪₪§|(Ξ≥≤≥≤≥≤ΞΞΞΞΞΞΞΞΞΞ> ****************************** sword4 |O/////[{:;:;:;:;:;:;:;:;> ****************************** sword5 <%%%%|==========> ****************************** sword6 o()xxxx[{::::::::::::::::::::::::::::::::::> ****************************** sword7 o==[]::::::::::::::::> ****************************** sword8 ▬▬ι═══════> ****************************** sword9 <═══════ι▬▬ ****************************** table flip (╯°□°)╯︵ ┻━┻ ****************************** teddy ˁ(⦿ᴥ⦿)ˀ ****************************** teepee /|\ ****************************** telephone ε(๏_๏)з】 ****************************** tent1 //\ ****************************** tent2 /\\ ****************************** text decoration (¯`·._.··¸.-~*´¨¯¨`*·~-.,-(__)-,.-~*´¨¯¨`*·~-.¸··._.·´¯) ****************************** thanks \(^-^)/ ****************************** this guy (☞゚∀゚)☞ ****************************** this is areku d(^o^)b ****************************** tie-fighter |—O—| ****************************** toungue out1 :-Þ ****************************** toungue out2 :-P ****************************** train /˳˳_˳˳\!˳˳X˳˳!(˳˳_˳˳)[˳˳_˳˳] ****************************** tree stump J"l ****************************** tron (\/)(;,,;)(\/) ****************************** trumpet -=iii=<() ****************************** ufo1 .-=-. ****************************** ufo2 .-=o=-. ****************************** ukulele { o }==(::) ****************************** umadbro ¯\_(ツ)_/¯ ****************************** up (◔/‿\◔) ****************************** upsidedown ( ͜。 ͡ʖ ͜。) ****************************** vagina (:) ****************************** victory V(-.o)V ****************************** volcano1 /"\ ****************************** volcano2 /W\ ****************************** volcano3 /V\ ****************************** wat ಠ_ಠ ****************************** wat-wat Σ(‘◉⌓◉’) ****************************** waves °º¤ø,¸¸,ø¤º°`°º¤ø,¸,ø¤°º¤ø,¸¸,ø¤º°`°º¤ø,¸ ****************************** weather ☼ ☀ ☁ ☂ ☃ ☄ ☾ ☽ ❄ ☇ ☈ ⊙ ☉ ℃ ℉ ° ❅ ✺ ϟ ****************************** westbound fish < )))) >< ****************************** what? ة_ة ****************************** what?? (Ͼ˳Ͽ)..!!! ****************************** why ლ( `Д’ ლ) ****************************** wink ;-) ****************************** wizard (∩ ͡° ͜ʖ ͡°)⊃━☆゚. 
* ****************************** woman ▓⚗_⚗▓ ****************************** woops :-* ****************************** worm _/\__/\__0> ****************************** worm2 ~ ****************************** wtf dude? \(◑д◐)>∠(◑д◐) ****************************** yessir ∠(・`_´・ ) ****************************** yo __o000o__(o)(o)__o000o__ ****************************** yolo Yᵒᵘ Oᶰˡʸ Lᶤᵛᵉ Oᶰᶜᵉ ****************************** zable ಠ_ರೃ ****************************** zoidberg (\/)(Ö,,,,Ö)(\/) ****************************** zombie 'º_º' ****************************** ###Markdown ART Version : 5.4 ###Code from art import * ###Output _____no_output_____ ###Markdown Art Counter ###Code ART_COUNTER ###Output _____no_output_____ ###Markdown Art List ###Code art_list() ###Output 3 ᕙ༼ ,,ԾܫԾ,, ༽ᕗ ****************************** 5 ᕙ༼ ,,இܫஇ,, ༽ᕗ ****************************** 9/11 truth ✈__✈ █ █ ▄ ****************************** acid ⊂(◉‿◉)つ ****************************** afraid ( ゚ Д゚) ****************************** airplane1 ‛¯¯٭٭¯¯(▫▫)¯¯٭٭¯¯’ ****************************** airplane2 ✈ ****************************** airplane3 ✈ ✈ ———- ♒✈ ****************************** ak-47 ︻┳デ═— ****************************** alien ::) ****************************** almost cared ╰╏ ◉ 〜 ◉ ╏╯ ****************************** american money1 [($)] ****************************** american money2 [̲̅$̲̅(̲̅1̲̅)̲̅$̲̅] ****************************** american money3 [̲̅$̲̅(̲̅5̲̅)̲̅$̲̅] ****************************** american money4 [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** american money5 [̲̅$̲̅(̲̅2οο̲̅)̲̅$̲̅] ****************************** angel1 ^i^ ****************************** angel2 O:-) ****************************** angry ლ(ಠ益ಠ)ლ ****************************** angry birds ( ఠൠఠ )ノ ****************************** angry face (⋟﹏⋞) ****************************** angry face2 (╬ ಠ益ಠ) ****************************** angry troll ヽ༼ ಠ益ಠ ༽ノ ****************************** angry2 ( ͠° ͟ʖ ͡°) ****************************** ankush ︻デ┳═ー*----* ****************************** arrow1 »»---------------------► ****************************** arrow2 XXX--------> ****************************** arrowhead ⤜(ⱺ ʖ̯ⱺ)⤏ ****************************** at what cost ლ(ಠ益ಠლ) ****************************** atish (| - _ - |) ****************************** awesome <:3 )~~~ ****************************** awkward •͡˘㇁•͡˘ ****************************** bad hair1 =:-) ****************************** bad hair2 =:-( ****************************** bagel nln >_< nln ****************************** band aid (̲̅:̲̅:̲̅:̲̅[̲̅ ̲̅]̲̅:̲̅:̲̅:̲̅ ) ****************************** barbell ▐━━━━━▌ ****************************** barcode1 █║▌│ █│║▌ ║││█║▌ │║║█║ │║║█║ ****************************** barcode2 ║█║▌║█║▌│║▌║▌█║ ****************************** barf (´ж`ς) ****************************** baseball fan q:o) ****************************** basking in glory ヽ(´ー`)ノ ****************************** bat1 ^O^ ****************************** bat2 ^v^ ****************************** bautista (╯°_°)╯︵ ━━━ ****************************** bear ʕ•ᴥ•ʔ ****************************** bear GTFO ʕ •`ᴥ•´ʔ ****************************** bear squiting ʕᵔᴥᵔʔ ****************************** bear2 (ʳ ´º㉨º) ****************************** because ∵ ****************************** bee ¸.·´¯`·¸¸.·´¯`·.¸.-<\^}0=: ****************************** being draged ╰(◣﹏◢)╯ ****************************** bender ¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦ ****************************** big eyes ⺌∅‿∅⺌ 
****************************** big kiss :-X ****************************** big nose ˚∆˚ ****************************** big smile :-D ****************************** bird (⌒▽⌒) ****************************** birds ~(‾▿‾)~ ****************************** blackeye 0__# ****************************** bomb !!( ’ ‘)ノノ⌒●~* ****************************** boobies (. )( .) ****************************** boobs (.)(.) ****************************** boobs2 (·人·) ****************************** boombox1 ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♫♪ ****************************** boombox2 ♫♪ |̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅| ♫♪ ****************************** boxing ლ(•́•́ლ) ****************************** breakdown ಥ﹏ಥ ****************************** british money [£::] ****************************** buck teeth :-B ****************************** bugs bunny E=B ****************************** bullshit |3ᵕᶦᶦᶳᶣᶨᶵ ****************************** bunny (\_/) ****************************** butt (‿|‿) ****************************** butterfly Ƹ̵̡Ӝ̵̨̄Ʒ ****************************** camera [◉"] ****************************** canoe .,.,\______/,..,., ****************************** car `o##o> ****************************** car race ∙،°. ˘Ô≈ôﺣ » » » ****************************** care crowd (-(-_(-_-)_-)-) ****************************** careless ◔_◔ ****************************** carpet roll @__ ****************************** cassette1 |[●▪▪●]| ****************************** cassette2 [¯ↂ■■ↂ¯] ****************************** cat face ⦿⽘⦿ ****************************** cat smile ≧◔◡◔≦ ****************************** cat1 =^..^= ****************************** cat2 龴ↀ◡ↀ龴 ****************************** cat3 ^.--.^ ****************************** cat4 (=^ェ^=) ****************************** caterpillar ,/\,/\,/\,/\,/\,/\,o ****************************** catlenny ( ͡° ᴥ ͡°) ****************************** chair ╦╣ ****************************** charly +:) ****************************** chasing ''⌐(ಠ۾ಠ)¬'' ****************************** cheer ^(¤o¤)^ ****************************** cheers ( ^_^)o自自o(^_^ ) ****************************** chess ♞▀▄▀▄♝▀▄ ****************************** chess pieces ♚ ♛ ♜ ♝ ♞ ♟ ♔ ♕ ♖ ♗ ♘ ♙ ****************************** chicken ʚ(•` ****************************** chu (´ε` ) ****************************** cigarette1 (̅_̅_̅_̅(̅_̅_̅_̅_̅_̅_̅_̅_̅̅_̅()ڪے ****************************** cigarette2 (____((____________()~~~ ****************************** cigarette3 ()___)____________) ****************************** clowning *:o) ****************************** club bold ♣ ****************************** club regular ♧ ****************************** coffee now {zzz}°°°( -_-)>c[_] ****************************** coffee1 c[_] ****************************** coffee2 l_D ****************************** coffee3 l_P ****************************** coffee4 l_B ****************************** computer mouse [E} ****************************** concerned (@_@) ****************************** confused scratch (⊙.☉)7 ****************************** confused1 :-/ ****************************** confused10 (*'__'*) ****************************** confused2 :-\ ****************************** confused3 (°~°) ****************************** confused4 ^^' ****************************** confused5 é_è ****************************** confused6 (˚ㄥ_˚) ****************************** confused7 (; ͡°_ʖ ͡°) ****************************** confused8 (´•_•`) ****************************** confused9 (ˇ_ˇ’!l) ****************************** crab (\|) ._. 
(|/) ****************************** crayons ((̲̅ ̲̅(̲̅C̲̅r̲̅a̲̅y̲̅o̲̅l̲̲̅̅a̲̅( ̲̅((> ****************************** crazy ミ●﹏☉ミ ****************************** creeper ƪ(ړײ)‎ƪ​​ ****************************** crotch shot \*/ ****************************** cry (╯︵╰,) ****************************** cry face 。゚( ゚இ‸இ゚)゚。 ****************************** cry troll ༼ ༎ຶ ෴ ༎ຶ༽ ****************************** crying1 Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰_Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰ ****************************** crying2 :~( ****************************** cthulhu ^(;,;)^ ****************************** cthulhu2 ( ;,;) ****************************** cup1 (▓ ****************************** cup2 \̅_̅/̷̚ʾ ****************************** cussing :-# ****************************** cute cat ^⨀ᴥ⨀^ ****************************** cute face (。◕‿◕。) ****************************** cute face2 (ღ˘◡˘ღ) ****************************** cute face3 ✿◕ ‿ ◕✿ ****************************** cute face4 ❀◕ ‿ ◕❀ ****************************** cute face5 (✿◠‿◠) ****************************** cute face6 (◕‿◕✿) ****************************** cute face7 ☾˙❀‿❀˙☽ ****************************** cute face8 (◡‿◡✿) ****************************** cute face9 ლ(╹◡╹ლ) ****************************** dab ヽ( •_)ᕗ ****************************** dagger cxxx|;:;:;:;:;:;:;:;> ****************************** dalek ̵̄/͇̐\ ****************************** damnyou (ᕗ ͠° ਊ ͠° )ᕗ ****************************** dance (>'-')> <('_'<) ^('_')\- \m/(-_-)\m/ <( '-')> \_( .")> <(._.)-` ****************************** dance2 ♪♪ ヽ(ˇ∀ˇ )ゞ ****************************** dancee ♪┏(°.°)┛┗(°.°)┓┗(°.°)┛┏(°.°)┓ ♪ ****************************** dancing ┌(ㆆ㉨ㆆ)ʃ ****************************** dancing people ‎(/.__.)/ \(.__.\) ****************************** dead child '-=,o ****************************** dead eyes ¿ⓧ_ⓧﮌ ****************************** dead girl '==>x\9 ****************************** dead guy '==xx\0 ****************************** dear god why щ(゚Д゚щ) ****************************** death star defense team |-o-| (-o-) |-o-| ****************************** decorate ▂▃▅▇█▓▒░۩۞۩ ۩۞۩░▒▓█▇▅▃▂ ****************************** depressed (︶︹︶) ****************************** derp ヘ(。□°)ヘ ****************************** devil ]:-> ****************************** devilish grin >:-D ****************************** devilish smile >:) ****************************** devious smile ಠ‿ಠ ****************************** dgaf ┌∩┐(◣ _ ◢)┌∩┐ ****************************** diamond bold ♦ ****************************** diamond regular ♢ ****************************** dice [: :] ****************************** dick 8====D ****************************** disagree ٩◔̯◔۶ ****************************** discombobulated ⊙﹏⊙ ****************************** dislike1 (Ծ‸ Ծ) ****************************** dislike2 ( ಠ ʖ̯ ಠ) ****************************** dna sample ~ ****************************** do you even lift bro? 
ᕦ(ò_óˇ)ᕤ ****************************** domino [: :|:::] ****************************** don king ==8-) ****************************** double flip ┻━┻ ︵ヽ(`Д´)ノ︵ ┻━┻ ****************************** drowning 人人人ヾ( ;×o×)〃 人人人 ****************************** druling1 :-... ****************************** druling2 :-P~~~ ****************************** drunkenness ヽ(´ー`)┌ ****************************** dude glasses1 @[O],[O] ****************************** dude glasses2 @(o),(o) ****************************** dummy <-|-'_'-|-> ****************************** dunno ¯\(°_o)/¯ ****************************** dunno2 \_(シ)_/ ****************************** dunno3 └㋡┘ ****************************** dunno4 ╘㋡╛ ****************************** dunno5 ٩㋡۶ ****************************** eastbound fish ><((((> ****************************** eaten apple [===]-' ****************************** eds dick 8=D ****************************** eeriemob (-(-_-(-_(-_(-_-)_-)-_-)_-)_-)-) ****************************** electrocardiogram1 √v^√v^√v^√v^√♥ ****************************** electrocardiogram2 v^v^v^v^√\/♥ ****************************** electrocardiogram3 /\/\/\/\/\/\/\/\/\/\/\v^♥ ****************************** electrocardiogram4 √v^√v^♥√v^√v^√ ****************************** elephant °j°m ****************************** emo (///_ ;) ****************************** emo dance ヾ(-_- )ゞ ****************************** energy つ ◕_◕ ༽つ つ ◕_◕ ༽つ ****************************** envelope ✉ ****************************** epic gun ︻┳デ═— ****************************** equalizer ▇ ▅ █ ▅ ▇ ▂ ▃ ▁ ▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇ ****************************** eric >--) ) ) )*> ****************************** error (╯°□°)╯︵ ɹoɹɹƎ ****************************** exchange (╯°□°)╯︵ ǝƃuɐɥɔxǝ ****************************** excited ☜(⌒▽⌒)☞ ****************************** exorcism ح(•̀ж•́)ง † ****************************** eye closed (╯_╰) ****************************** eye roll ⥀.⥀ ****************************** eyes ℃ↂ_ↂↃ ****************************** face •|龴◡龴|• ****************************** facepalm (>ლ) ****************************** fail o(╥﹏╥)o ****************************** fart (ˆ⺫ˆ๑)<3 ****************************** fat ass (__!__) ****************************** faydre (U) [^_^] (U) ****************************** feel perky (`・ω・´) ****************************** fido V•ᴥ•V ****************************** fight (ง'̀-'́)ง ****************************** finger1 ╭∩╮(Ο_Ο)╭∩╮ ****************************** finger2 ┌∩┐(◣_◢)┌∩┐ ****************************** finger3 ಠ︵ಠ凸 ****************************** finger4 ┌∩┐(>_<)┌∩┐ ****************************** finn | (• ◡•)| ****************************** fish invasion ›(̠̄:̠̄c ›(̠̄:̠̄c (¦Ҝ (¦Ҝ ҉ - - - ¦̺͆¦ ▪▌ ****************************** fish skeleton1 >-}-}-}-> ****************************** fish skeleton2 >++('> ****************************** fish swim ¸.·´¯`·.´¯`·.¸¸.·´¯`·.¸><(((º> ****************************** fish1 ><(((('> ****************************** fish2 ><> ****************************** fish3 `·.¸¸ ><((((º>.·´¯`·><((((º> ****************************** fish4 ><> ><> ****************************** fish5 <>< ****************************** fish6 <`)))>< ****************************** fisticuffs ლ(`ー´ლ) ****************************** flex ᕙ(⇀‸↼‶)ᕗ ****************************** flip friend (ノಠ ∩ಠ)ノ彡( \o°o)\ ****************************** fly away ⁽⁽ଘ( ˊᵕˋ )ଓ⁾⁾ ****************************** flying ح˚௰˚づ ****************************** fork ---= ****************************** formula1 car \ō͡≡o˞̶ 
****************************** fox -^^,--,~ ****************************** french kiss :-XÞ ****************************** frown (ღ˘⌣˘ღ) ****************************** fu (ಠ_ಠ)┌∩┐ ****************************** fuck off t(-.-t) ****************************** fuck you nlm (-_-) mln ****************************** fuck you2 (° ͜ʖ͡°)╭∩╮ ****************************** fuckall ╭∩╮(︶︿︶)╭∩╮ ****************************** full mouth :-I ****************************** fungry Σ_(꒪ཀ꒪」∠)_ ****************************** ghost ‹’’›(Ͼ˳Ͽ)‹’’› ****************************** gimme ༼ つ ◕_◕ ༽つ ****************************** glasses -@-@- ****************************** glasses2 ᒡ◯ᵔ◯ᒢ ****************************** glitter (*・‿・)ノ⌒*:・゚✧ ****************************** go away bear ╭∩╮ʕ•ᴥ•ʔ╭∩╮ ****************************** gotit (☞゚∀゚)☞ ****************************** gtalk fit (•̪̀●́)=ε/̵͇̿̿/'̿̿ ̿ ̿̿ N --------{---(@ ****************************** guitar c====(=#O| ) ~~ ♬·¯·♩¸¸♪·¯·♫¸ ****************************** gun1 ︻╦╤─ ****************************** gun2 ︻デ═一 ****************************** gun3 ╦̵̵̿╤─ ҉ ~ • ****************************** gun4 ︻╦̵̵͇̿̿̿̿╤── ****************************** hacksaw [|^^^^^^^ ****************************** hairstyle ⨌⨀_⨀⨌ ****************************** hal @_'-' ****************************** hammer #== ****************************** happy ۜ\(סּںסּَ` )/ۜ ****************************** happy birthday 1 ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ ****************************** happy face ヽ(´▽`)/ ****************************** happy hug \(ᵔᵕᵔ)/ ****************************** happy square 【ツ】 ****************************** happy10 (´ツ`) ****************************** happy11 ( ^◡^)っ ****************************** happy12 ┏(^0^)┛┗(^0^)┓ ****************************** happy13 (°⌣°) ****************************** happy14 ٩(^‿^)۶ ****************************** happy15 (•‿•) ****************************** happy16 ó‿ó ****************************** happy17 ٩◔‿◔۶ ****************************** happy18 ಠ◡ಠ ****************************** happy19 ●‿● ****************************** happy2 ⎦˚◡˚⎣ ****************************** happy20 ( '‿' ) ****************************** happy21 ^‿^ ****************************** happy22 ┌( ಠ‿ಠ)┘ ****************************** happy23 (˘◡˘) ****************************** happy24 ☯‿☯ ****************************** happy25 \(• ◡ •)/ ****************************** happy26 ( ͡ʘ ͜ʖ ͡ʘ) ****************************** happy27 ( ͡• ͜ʖ ͡• ) ****************************** happy3 ㋡ ****************************** happy4 ^_^ ****************************** happy5 [^_^] ****************************** happy6 (ツ) ****************************** happy7 【シ】 ****************************** happy8 ㋛ ****************************** happy9 (シ) ****************************** head shot ->~∑≥_≤) ****************************** headphone1 d[-_-]b ****************************** headphone2 d(-_-)b ****************************** headphone3 (W) ****************************** heart bold ♥ ****************************** heart regular ♡ ****************************** heart1 »-(¯`·.·´¯)-> ****************************** heart2 ♡♡ ****************************** heart3 <3 ****************************** hell yeah (òÓ,)_\,,/ ****************************** hello (ʘ‿ʘ)╯ ****************************** hello2 (ツ)ノ ****************************** help ٩(͡๏̯͡๏)۶ ****************************** high five ( ⌒o⌒)人(⌒-⌒ )v ****************************** hitchhicking (งツ)ว ****************************** homer (_8(|) ****************************** 
homer simpson =(:o) ****************************** honeycute ❤◦.¸¸. ◦✿ ****************************** house __̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___ ****************************** hoxom h(o x o )m ****************************** hug me (っ◕‿◕)っ ****************************** hugger (づ ̄ ³ ̄)づ ****************************** huhu █▬█ █▄█ █▬█ █▄█ ****************************** hybrix ʕʘ̅͜ʘ̅ʔ ****************************** i dont care ╭∩╮(︶︿︶)╭∩╮ ****************************** i kill you ̿ ̿̿'̿̿\̵͇̿̿\=(•̪●)=/̵͇̿̿/'̿̿ ̿ ̿ ****************************** im a hugger (⊃。•́‿•̀。)⊃ ****************************** infinity (X) ****************************** injured (҂◡_◡) ****************************** inlove (✿ ♥‿♥) ****************************** innocent face ʘ‿ʘ ****************************** japanese lion face °‿‿° ****************************** jaymz (•̪●)==ε/̵͇̿​̿/’̿’̿ ̿ ̿̿ `(•.°)~ ****************************** jazz musician ヽ(⌐■_■)ノ♪♬ ****************************** john lennon ((ºjº)) ****************************** joker1 🂠 ****************************** joker2 🃟 ****************************** joker3 🃏 ****************************** joker4 🃠 ****************************** jokeranonimous ╭∩╮ (òÓ,) ╭∩╮ ****************************** jokeranonimous2 ╭∩╮(ô¿ô)╭∩╮ ****************************** joy n_n ****************************** judgemental \{ಠʖಠ\} ****************************** judging ( ఠ ͟ʖ ఠ) ****************************** kablewee ̿' ̿'\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/'̿'̿ ̿ ****************************** killer (⌐■_■)--︻╦╤─ - - - (╥﹏╥) ****************************** kilroy was here " Ü " ****************************** king -_- ****************************** kirby (つ -‘ _ ‘- )つ ****************************** kirby dance <(''<) <( ' ' )> (> '')> ****************************** kiss (o'3'o) ****************************** kiss my ass (_x_) ****************************** kiss2 (︶ε︶メ) ****************************** kiss3 ╮(︶ε︶メ)╭ ****************************** kissing ( ˘ ³˘)♥ ****************************** kissing2 ( ˘з˘)ε˘`) ****************************** kissing3 (~˘з˘)~~(˘ε˘~) ****************************** kissing4 (っ˘з(O.O )♥ ****************************** kissing5 (`˘з(•˘⌣˘•) ****************************** kissing6 (っ˘з(˘⌣˘ ) ****************************** kitty =^. .^= ****************************** kitty emote ᵒᴥᵒ# ****************************** knife1 )xxxxx[;;;;;;;;;> ****************************** knife2 )xxx[::::::::::> ****************************** koala @( * O * )@ ****************************** kokain ̿ ̿' ̿'\̵͇̿̿\з=(•̪●)=ε/̵͇̿̿/'̿''̿ ̿ ****************************** kyubey /人 ⌒ ‿‿ ⌒ 人\ ****************************** kyubey2 /人 ◕‿‿◕ 人\ ****************************** laughing (^▽^) ****************************** lenny ( ͡° ͜ʖ ͡°) ****************************** licking lips :-9 ****************************** line brack ●▬▬▬▬๑۩۩๑▬▬▬▬▬● ****************************** linqan :Q___ ****************************** listening to headphones ◖ᵔᴥᵔ◗ ♪ ♫ ****************************** loading1 █▒▒▒▒▒▒▒▒▒ ****************************** loading2 ███▒▒▒▒▒▒▒ ****************************** loading3 █████▒▒▒▒▒ ****************************** loading4 ███████▒▒▒ ****************************** loading5 █████████▒ ****************************** loading6 ██████████ ****************************** loch ness monster _mmmP ****************************** long rose ---------------------{{---<((@) ****************************** looking down (._.) 
****************************** looking face ô¿ô ****************************** love ⓛⓞⓥⓔ ****************************** love in my eye1 (♥_♥) ****************************** love in my eye2 (。❤◡❤。) ****************************** love in my eye3 (❤◡❤) ****************************** love2 ~♡ⓛⓞⓥⓔ♡~ ****************************** love3 ♥‿♥ ****************************** love4 (Ɔ ˘⌣˘)♥(˘⌣˘ C) ****************************** machinegun ,==,-- ****************************** mad òÓ ****************************** mad10 (•ˋ _ ˊ•) ****************************** mad2 (ノ`Д ́)ノ ****************************** mad3 >_< ****************************** mad4 ~_~ ****************************** mad5 Ծ_Ծ ****************************** mad6 ⋋_⋌ ****************************** mad7 (ノ≥∇≤)ノ ****************************** mad8 {{{(>_<)}}} ****************************** mad9 ƪ(`▿▿▿▿´ƪ) ****************************** mail box |M|/ ****************************** man spider /╲/\༼ *ಠ 益 ಠ* ༽/\╱\ ****************************** man tears ಥ_ಥ ****************************** mango ) _ _ __/°°¬ ****************************** marge simpson ()()():| ****************************** marshmallows -()_)--()_)--- ****************************** med ب_ب ****************************** med man (⋗_⋖) ****************************** meditation ‿( ́ ̵ _-`)‿ ****************************** meep \(°^°)/ ****************************** melp1 (<>..<>) ****************************** melp2 (<(<>(<>.(<>..<>).<>)<>)>) ****************************** meow ฅ^•ﻌ•^ฅ ****************************** metal \m/_(>_<)_\m/ ****************************** mini penis =D ****************************** monkey @('_')@ ****************************** monocle (╭ರ_•́) ****************************** monster ٩(̾●̮̮̃̾•̃̾)۶ ****************************** monster2 ٩(- ̮̮̃-̃)۶ ****************************** mouse1 ----{,_,"> ****************************** mouse2 . ~~(__^·> ****************************** mouse3 <·^__)~~ . ****************************** mouse4 —-{,_,”><",_,}---- ****************************** mouse5 <:3 )~~~~ ****************************** mouse6 <^__)~ ****************************** mouse7 ~(__^> ****************************** mtmtika :o + :p = 69 ****************************** myancat mmmyyyyy<⦿⽘⦿>aaaannn ****************************** nathan ♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪ ****************************** needle1 ┣▇▇▇═─ ****************************** needle2 |==|iiii|>----- ****************************** neo (⌐■_■)--︻╦╤─ - - - ****************************** nerd ::( ****************************** no support 乁( ◔ ౪◔)「 ┑( ̄Д  ̄)┍ ****************************** nope t(-_-t) ****************************** nose \˚ㄥ˚\ ****************************** nose2 |'L'| ****************************** oar ===========(8888) ****************************** old lady boobs |\o/\o/| ****************************** opera ヾ(´〇`)ノ♪♪♪ ****************************** owlkin (ᾢȍˬȍ)ᾢ ļ ļ ļ ļ ļ ****************************** pac man ᗧ···ᗣ···ᗣ·· ****************************** palm tree 'T` ****************************** panda ヽ( ̄(エ) ̄)ノ ****************************** party time ┏(-_-)┛┗(-_- )┓┗(-_-)┛┏(-_-)┓ ****************************** peace yo! 
(‾⌣‾)♉ ****************************** peepers ಠಠ ****************************** penis 8===D ****************************** penis2 ○○)=======o) ****************************** perky ( ๏ Y ๏ ) ****************************** pictou |\_______(#*#)_______/| ****************************** pie fight ---=======[} ****************************** pig1 ^(*(oo)*)^ ****************************** pig2 ༼☉ɷ⊙༽ ****************************** piggy (∩◕(oo)◕∩) ****************************** ping pong ( •_•)O*¯`·.¸.·´¯`°Q(•_• ) ****************************** pipe ====\_/ ****************************** pirate ✌(◕‿-)✌ ****************************** pistols1 ¯¯̿̿¯̿̿'̿̿̿̿̿̿̿'̿̿'̿̿̿̿̿'̿̿̿)͇̿̿)̿̿̿̿ '̿̿̿̿̿̿\̵͇̿̿\=(•̪̀●́)=o/̵͇̿̿/'̿̿ ̿ ̿̿ ****************************** pistols2 ̿' ̿'\̵͇̿̿\з=(◕_◕)=ε/̵͇̿̿/'̿'̿ ̿ ****************************** pistols3 ̿̿ ̿̿ ̿’̿̿’̿\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/’̿̿’̿ ̿ ̿̿ ̿̿ ****************************** pistols4 (•̪●)=ε/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿ ̿”” ****************************** pistols5 ̿̿ ̿̿ ̿̿ ̿'̿'\̵͇̿̿\з=( ͠° ͟ʖ ͡°)=ε/̵͇̿̿/'̿̿ ̿ ̿ ̿ ̿ ̿ ****************************** playing cards [♥]]] [♦]]] [♣]]] [♠]]] ****************************** playing cards clubs [♣]]] ****************************** playing cards clubs waterfall 🃏🃑🃒🃓🃔🃕🃖🃗🃘🃙🃚🃛🃜🃝🃞 ****************************** playing cards diamonds [♦]]] ****************************** playing cards diamonds waterfall 🃟🃁🃂🃃🃄🃅🃆🃇🃈🃉🃊🃋🃌🃍🃎 ****************************** playing cards hearts [♥]]] ****************************** playing cards hearts waterfall 🂠🂱🂲🂳🂴🂵🂶🂷🂸🂹🂺🂻🂼🂽🂾 ****************************** playing cards spades [♠]]] ****************************** playing cards spades waterfall 🃠🂡🂢🂣🂤🂥🂦🂧🂨🂩🂪🂫🂬🂭🂮 ****************************** playing cards waterfall 🂱🂲🂳🂴🂵🂶🂷🂸🂹🂺🂻🂼🂽🂾🃁🃂🃃🃄🃅🃆🃇🃈🃉🃊🃋🃌🃍🃎🃑🃒🃓🃔🃕🃖🃗🃘🃙🃚🃛🃜🃝🃞🂡🂢🂣🂤🂥🂦🂧🂨🂩🂪🂫🂬🂭🂮🂠🃏🃟 ****************************** playing cards waterfall (trump) 🃠🃡🃢🃣🃤🃥🃦🃧🃨🃩🃪🃫🃬🃭🃮🃯🃰🃱🃲🃳🃴🃵 ****************************** playing in snow (╯^□^)╯︵ ❄☃❄ ****************************** point (☞゚ヮ゚)☞ ****************************** polar bear ˁ˚ᴥ˚ˀ ****************************** possessed <>_<> ****************************** power lines TTT ****************************** pretty eyes ఠ_ఠ ****************************** professor """⌐(ಠ۾ಠ)¬""" ****************************** puls ––•–√\/––√\/––•–– ****************************** punch O=('-'Q) ****************************** pursing lips :-" ****************************** put the table back ┬─┬ ノ( ゜-゜ノ) ****************************** rak /⦿L⦿\ ****************************** rare ┌ಠ_ಠ)┌∩┐ ᶠᶸᶜᵏ♥ᵧₒᵤ ****************************** ready to cry :-} ****************************** real face ( ͡° ͜ʖ ͡°) ****************************** really mad >:-I ****************************** really sad :-C ****************************** regular ass (_!_) ****************************** religious ☪ ✡ † ☨ ✞ ✝ ☥ ☦ ☓ ♁ ☩ ****************************** resting my eyes ᴖ̮ ̮ᴖ ****************************** roadblock X+X+X+X+X ****************************** robber -╤╗_(◙◙)_╔╤- - - - \o/ \o/ \o/ ****************************** robot boy ◖(◣☩◢)◗ ****************************** robot1 d[ o_0 ]b ****************************** robot2 c[○┬●]כ ****************************** robot3 {•̃_•̃} ****************************** rock on1 \,,/(^_^)\,,/ ****************************** rock on2 \m/(-_-)\m/ ****************************** rocket ∙∙∙∙∙·▫▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ☼)===> ****************************** roke _\m/ ****************************** rope ╚(▲_▲)╝ ****************************** rose1 --------{---(@ ****************************** rose2 @}}>----- 
****************************** rose3 @-->--->--- ****************************** rose4 @}~}~~~ ****************************** rose5 @-}-- ****************************** rose6 @)}---^----- ****************************** rose7 @->-->--- ****************************** round bird ,(u°)> ****************************** round cat ~(^._.) ****************************** running ε=ε=ε=┌(;*´Д`)ノ ****************************** russian boobs [.][.] ****************************** ryans dick 8======D ****************************** sad and confused ¯\_(⊙︿⊙)_/¯ ****************************** sad and crying (ᵟຶ︵ ᵟຶ) ****************************** sad face (ಥ⌣ಥ) ****************************** sad1 ε(´סּ︵סּ`)з ****************************** sad2 (✖╭╮✖) ****************************** sad3 (◑﹏◐) ****************************** sad4 (◕_◕) ****************************** sad5 (´ᗣ`) ****************************** sad6 Y_Y ****************************** sat '(◣_◢)' ****************************** satan ↑_(ΦwΦ;)Ψ ****************************** satisfied (◠﹏◠) ****************************** scissors ✄ ****************************** screaming :-@ ****************************** seal (ᵔᴥᵔ) ****************************** sean the sheep <('--')> ****************************** sex symbol ◢♂◣◥♀◤◢♂◣◥♀◤ ****************************** shark ~~~~~~^~~~~~ ****************************** shark attack ~~~~~~\o/~~~~~/\~~~~~ ****************************** shark face ( ˇ෴ˇ ) ****************************** sheep °l°(,,,,); ****************************** shocked1 (∩╹□╹∩) ****************************** shocked2 :-O ****************************** shrug ¯\_(ツ)_/¯ ****************************** shy (๑•́ ₃ •̀๑) ****************************** singing d(^o^)b¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** singing2 ♪└( ̄◇ ̄)┐♪ ****************************** sky free ѧѦ ѧ ︵͡︵ ̢ ̱ ̧̱ι̵̱̊ι̶̨̱ ̶̱ ︵ Ѧѧ ︵͡ ︵ ѧ Ѧ ̵̗̊o̵̖ ︵ ѦѦ ѧ ****************************** sleeping (-.-)Zzz... ****************************** sleeping baby [{-_-}] ZZZzz zz z... ****************************** sleepy 눈_눈 ****************************** sleepy coffee ( -_-)旦~ ****************************** slenderman ϟƖΣNd€RMαN ****************************** smile :-) ****************************** smirk :-, ****************************** smooth (づ  ̄ ³ ̄)づ ⓈⓂⓄⓄⓉⒽ ****************************** smug bastard (‾⌣‾) ****************************** snail1 '-'_@_ ****************************** snail2 '\Q___ ****************************** sniper rifle ︻デ┳═ー ****************************** sniperstars ✯╾━╤デ╦︻✯ ****************************** snowing ✲´*。.❄¨¯`*✲。❄。*。 ****************************** snowman1 ☃ ****************************** snowman2 { }( : ^ )( """" )( ) ****************************** sophie <XX""XX> ****************************** sorreh bro (◢_◣) ****************************** spade bold ♠ ****************************** spade regular ♤ ****************************** sparkling heart -`ღ´- ****************************** spear >>-;;;------;;--> ****************************** spell cast ╰( ⁰ ਊ ⁰ )━☆゚.*・。゚ ****************************** sperm ~~o ****************************** spider1 //O\ ****************************** spider2 /\oo/\ ****************************** spider3 ///\oo/\\\ ****************************** spider4 /╲/\╭ºoꍘoº╮/\╱\ ****************************** spot ( . Y . 
) ****************************** squee ヾ(◎o◎,,;)ノ ****************************** squid くコ:彡 ****************************** squigle with spirals 6\9 ****************************** srs face (ಠ_ಠ) ****************************** star in my eyes <*_*> ****************************** staring ٩(๏_๏)۶ ****************************** stars ✌⊂(✰‿✰)つ✌ ****************************** stars2 ⋆ ✢ ✣ ✤ ✥ ✦ ✧ ✩ ✪ ✫ ✬ ✭ ✮ ✯ ✰ ★ ****************************** stealth fighter -^- ****************************** stranger danger (づ。◕‿‿◕。)づ ****************************** strut ᕕ( ᐛ )ᕗ ****************************** stunna shades (っ▀¯▀)つ ****************************** sunglasses1 (•_•)>⌐■-■ (⌐■_■) ****************************** sunglasses2 B-) ****************************** sunny day ☁ ▅▒░☼‿☼░▒▅ ☁ ****************************** superman -^mOm^- ****************************** superman logo /s\ ****************************** surprised1 =:-o ****************************** surprised10 (⊙.◎) ****************************** surprised11 ๏_๏ ****************************** surprised12 (˚-˚) ****************************** surprised13 ˚o˚ ****************************** surprised14 (O.O) ****************************** surprised15 ( ゚o゚) ****************************** surprised16 ◉_◉ ****************************** surprised17 【•】_【•】 ****************************** surprised18 (•ิ_•) ****************************** surprised19 ⊙⊙ ****************************** surprised2 ( ゚Д゚) ****************************** surprised20 ͡๏_͡๏ ****************************** surprised3 (O_o) ****************************** surprised4 (º_•) ****************************** surprised5 (º.º) ****************************** surprised6 ⊙▃⊙ ****************************** surprised7 O.o ****************************** surprised8 ●_● ****************************** surprised9 (⊙̃.o) ****************************** swim _(ッ)>_// ****************************** swim2 ー(ッ)」 ****************************** swim3 _(ッ)へ ****************************** sword1 (===||:::::::::::::::> ****************************** sword10 O==I======> ****************************** sword2 ▬▬ι═══════ﺤ -═══════ι▬▬ ****************************** sword3 ס₪₪₪₪§|(Ξ≥≤≥≤≥≤ΞΞΞΞΞΞΞΞΞΞ> ****************************** sword4 |O/////[{:;:;:;:;:;:;:;:;> ****************************** sword5 <%%%%|==========> ****************************** sword6 o()xxxx[{::::::::::::::::::::::::::::::::::> ****************************** sword7 o==[]::::::::::::::::> ****************************** sword8 ▬▬ι═══════> ****************************** sword9 <═══════ι▬▬ ****************************** table flip (╯°□°)╯︵ ┻━┻ ****************************** table flip10 (/ .□.)\ ︵╰(゜Д゜)╯︵ /(.□. \) ****************************** table flip2 (ノಥ益ಥ)ノ ┻━┻ ****************************** table flip3 ┬─┬ノ( º _ ºノ) ****************************** table flip4 (ノಠ益ಠ)ノ彡┻━┻ ****************************** table flip5 ┬──┬ ¯\_(ツ) ****************************** table flip6 ┻━┻ ︵ ¯\(ツ)/¯ ︵ ┻━┻ ****************************** table flip7 (╯°□°)╯︵ ┻━┻ ︵ ╯(°□° ╯) ****************************** table flip8 (╯°Д°)╯︵ /(.□ . 
\) ****************************** table flip9 (ノ^_^)ノ┻━┻ ┬─┬ ノ( ^_^ノ) ****************************** taking a dump (⩾﹏⩽) ****************************** teddy ˁ(⦿ᴥ⦿)ˀ ****************************** teepee /|\ ****************************** telephone ε(๏_๏)з】 ****************************** tent1 //\ ****************************** tent2 /\\ ****************************** tgif “ヽ(´▽`)ノ” ****************************** thanks \(^-^)/ ****************************** things that can_t be unseen ♨_♨ ****************************** this is areku d(^o^)b ****************************** tidy up ┬─┬⃰͡ (ᵔᵕᵔ͜ ) ****************************** tie-fighter |—O—| ****************************** tired ###Markdown ART Version : 5.1 ###Code from art import * ###Output _____no_output_____ ###Markdown Art Counter ###Code ART_COUNTER ###Output _____no_output_____ ###Markdown Art List ###Code art_list() ###Output 3 ᕙ༼ ,,ԾܫԾ,, ༽ᕗ ****************************** 5 ᕙ༼ ,,இܫஇ,, ༽ᕗ ****************************** 9/11 truth ✈__✈ █ █ ▄ ****************************** acid ⊂(◉‿◉)つ ****************************** afraid ( ゚ Д゚) ****************************** airplane1 ‛¯¯٭٭¯¯(▫▫)¯¯٭٭¯¯’ ****************************** airplane2 ✈ ****************************** airplane3 ✈ ✈ ———- ♒✈ ****************************** ak-47 ︻┳デ═— ****************************** alien ::) ****************************** almost cared ╰╏ ◉ 〜 ◉ ╏╯ ****************************** american money1 [($)] ****************************** american money2 [̲̅$̲̅(̲̅1̲̅)̲̅$̲̅] ****************************** american money3 [̲̅$̲̅(̲̅5̲̅)̲̅$̲̅] ****************************** american money4 [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** american money5 [̲̅$̲̅(̲̅2οο̲̅)̲̅$̲̅] ****************************** angel1 ^i^ ****************************** angel2 O:-) ****************************** angry ლ(ಠ益ಠ)ლ ****************************** angry birds ( ఠൠఠ )ノ ****************************** angry face (⋟﹏⋞) ****************************** angry face2 (╬ ಠ益ಠ) ****************************** angry troll ヽ༼ ಠ益ಠ ༽ノ ****************************** angry2 ( ͠° ͟ʖ ͡°) ****************************** ankush ︻デ┳═ー*----* ****************************** arrow1 »»---------------------► ****************************** arrow2 XXX--------> ****************************** arrowhead ⤜(ⱺ ʖ̯ⱺ)⤏ ****************************** at what cost ლ(ಠ益ಠლ) ****************************** atish (| - _ - |) ****************************** awesome <:3 )~~~ ****************************** awkward •͡˘㇁•͡˘ ****************************** bad hair1 =:-) ****************************** bad hair2 =:-( ****************************** bagel nln >_< nln ****************************** band aid (̲̅:̲̅:̲̅:̲̅[̲̅ ̲̅]̲̅:̲̅:̲̅:̲̅ ) ****************************** barbell ▐━━━━━▌ ****************************** barcode1 █║▌│ █│║▌ ║││█║▌ │║║█║ │║║█║ ****************************** barcode2 ║█║▌║█║▌│║▌║▌█║ ****************************** barf (´ж`ς) ****************************** baseball fan q:o) ****************************** basking in glory ヽ(´ー`)ノ ****************************** bat1 ^O^ ****************************** bat2 ^v^ ****************************** bautista (╯°_°)╯︵ ━━━ ****************************** bear ʕ•ᴥ•ʔ ****************************** bear GTFO ʕ •`ᴥ•´ʔ ****************************** bear squiting ʕᵔᴥᵔʔ ****************************** bear2 (ʳ ´º㉨º) ****************************** because ∵ ****************************** bee ¸.·´¯`·¸¸.·´¯`·.¸.-<\^}0=: ****************************** being draged ╰(◣﹏◢)╯ 
****************************** bender ¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦ ****************************** big eyes ⺌∅‿∅⺌ ****************************** big kiss :-X ****************************** big nose ˚∆˚ ****************************** big smile :-D ****************************** bird (⌒▽⌒) ****************************** birds ~(‾▿‾)~ ****************************** blackeye 0__# ****************************** bomb !!( ’ ‘)ノノ⌒●~* ****************************** boobies (. )( .) ****************************** boobs (.)(.) ****************************** boobs2 (·人·) ****************************** boombox1 ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♫♪ ****************************** boombox2 ♫♪ |̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅| ♫♪ ****************************** boxing ლ(•́•́ლ) ****************************** breakdown ಥ﹏ಥ ****************************** british money [£::] ****************************** buck teeth :-B ****************************** bugs bunny E=B ****************************** bullshit |3ᵕᶦᶦᶳᶣᶨᶵ ****************************** bunny (\_/) ****************************** butt (‿|‿) ****************************** butterfly Ƹ̵̡Ӝ̵̨̄Ʒ ****************************** camera [◉"] ****************************** canoe .,.,\______/,..,., ****************************** car `o##o> ****************************** car race ∙،°. ˘Ô≈ôﺣ » » » ****************************** care crowd (-(-_(-_-)_-)-) ****************************** careless ◔_◔ ****************************** carpet roll @__ ****************************** cassette1 |[●▪▪●]| ****************************** cassette2 [¯ↂ■■ↂ¯] ****************************** cat face ⦿⽘⦿ ****************************** cat smile ≧◔◡◔≦ ****************************** cat1 =^..^= ****************************** cat2 龴ↀ◡ↀ龴 ****************************** cat3 ^.--.^ ****************************** cat4 (=^ェ^=) ****************************** caterpillar ,/\,/\,/\,/\,/\,/\,o ****************************** catlenny ( ͡° ᴥ ͡°) ****************************** chair ╦╣ ****************************** charly +:) ****************************** chasing ''⌐(ಠ۾ಠ)¬'' ****************************** cheer ^(¤o¤)^ ****************************** cheers ( ^_^)o自自o(^_^ ) ****************************** chess ♞▀▄▀▄♝▀▄ ****************************** chess pieces ♚ ♛ ♜ ♝ ♞ ♟ ♔ ♕ ♖ ♗ ♘ ♙ ****************************** chicken ʚ(•` ****************************** chu (´ε` ) ****************************** cigarette1 (̅_̅_̅_̅(̅_̅_̅_̅_̅_̅_̅_̅_̅̅_̅()ڪے ****************************** cigarette2 (____((____________()~~~ ****************************** cigarette3 ()___)____________) ****************************** clowning *:o) ****************************** club bold ♣ ****************************** club regular ♧ ****************************** coffee now {zzz}°°°( -_-)>c[_] ****************************** coffee1 c[_] ****************************** coffee2 l_D ****************************** coffee3 l_P ****************************** coffee4 l_B ****************************** computer mouse [E} ****************************** concerned (@_@) ****************************** confused scratch (⊙.☉)7 ****************************** confused1 :-/ ****************************** confused10 (*'__'*) ****************************** confused2 :-\ ****************************** confused3 (°~°) ****************************** confused4 ^^' ****************************** confused5 é_è ****************************** confused6 (˚ㄥ_˚) ****************************** confused7 (; ͡°_ʖ ͡°) ****************************** confused8 
(´•_•`) ****************************** confused9 (ˇ_ˇ’!l) ****************************** crab (\|) ._. (|/) ****************************** crayons ((̲̅ ̲̅(̲̅C̲̅r̲̅a̲̅y̲̅o̲̅l̲̲̅̅a̲̅( ̲̅((> ****************************** crazy ミ●﹏☉ミ ****************************** creeper ƪ(ړײ)‎ƪ​​ ****************************** crotch shot \*/ ****************************** cry (╯︵╰,) ****************************** cry face 。゚( ゚இ‸இ゚)゚。 ****************************** cry troll ༼ ༎ຶ ෴ ༎ຶ༽ ****************************** crying1 Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰_Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰ ****************************** crying2 :~( ****************************** cthulhu ^(;,;)^ ****************************** cthulhu2 ( ;,;) ****************************** cup1 (▓ ****************************** cup2 \̅_̅/̷̚ʾ ****************************** cussing :-# ****************************** cute cat ^⨀ᴥ⨀^ ****************************** cute face (。◕‿◕。) ****************************** cute face2 (ღ˘◡˘ღ) ****************************** cute face3 ✿◕ ‿ ◕✿ ****************************** cute face4 ❀◕ ‿ ◕❀ ****************************** cute face5 (✿◠‿◠) ****************************** cute face6 (◕‿◕✿) ****************************** cute face7 ☾˙❀‿❀˙☽ ****************************** cute face8 (◡‿◡✿) ****************************** cute face9 ლ(╹◡╹ლ) ****************************** dab ヽ( •_)ᕗ ****************************** dagger cxxx|;:;:;:;:;:;:;:;> ****************************** dalek ̵̄/͇̐\ ****************************** damnyou (ᕗ ͠° ਊ ͠° )ᕗ ****************************** dance (>'-')> <('_'<) ^('_')\- \m/(-_-)\m/ <( '-')> \_( .")> <(._.)-` ****************************** dance2 ♪♪ ヽ(ˇ∀ˇ )ゞ ****************************** dancee ♪┏(°.°)┛┗(°.°)┓┗(°.°)┛┏(°.°)┓ ♪ ****************************** dancing ┌(ㆆ㉨ㆆ)ʃ ****************************** dancing people ‎(/.__.)/ \(.__.\) ****************************** dead child '-=,o ****************************** dead eyes ¿ⓧ_ⓧﮌ ****************************** dead girl '==>x\9 ****************************** dead guy '==xx\0 ****************************** dear god why щ(゚Д゚щ) ****************************** death star defense team |-o-| (-o-) |-o-| ****************************** decorate ▂▃▅▇█▓▒░۩۞۩ ۩۞۩░▒▓█▇▅▃▂ ****************************** depressed (︶︹︶) ****************************** derp ヘ(。□°)ヘ ****************************** devil ]:-> ****************************** devilish grin >:-D ****************************** devilish smile >:) ****************************** devious smile ಠ‿ಠ ****************************** dgaf ┌∩┐(◣ _ ◢)┌∩┐ ****************************** diamond bold ♦ ****************************** diamond regular ♢ ****************************** dice [: :] ****************************** dick 8====D ****************************** disagree ٩◔̯◔۶ ****************************** discombobulated ⊙﹏⊙ ****************************** dislike1 (Ծ‸ Ծ) ****************************** dislike2 ( ಠ ʖ̯ ಠ) ****************************** dna sample ~ ****************************** do you even lift bro? 
ᕦ(ò_óˇ)ᕤ ****************************** domino [: :|:::] ****************************** don king ==8-) ****************************** double flip ┻━┻ ︵ヽ(`Д´)ノ︵ ┻━┻ ****************************** drowning 人人人ヾ( ;×o×)〃 人人人 ****************************** druling1 :-... ****************************** druling2 :-P~~~ ****************************** drunkenness ヽ(´ー`)┌ ****************************** dude glasses1 @[O],[O] ****************************** dude glasses2 @(o),(o) ****************************** dummy <-|-'_'-|-> ****************************** dunno ¯\(°_o)/¯ ****************************** dunno2 \_(シ)_/ ****************************** dunno3 └㋡┘ ****************************** dunno4 ╘㋡╛ ****************************** dunno5 ٩㋡۶ ****************************** eastbound fish ><((((> ****************************** eaten apple [===]-' ****************************** eds dick 8=D ****************************** eeriemob (-(-_-(-_(-_(-_-)_-)-_-)_-)_-)-) ****************************** electrocardiogram1 √v^√v^√v^√v^√♥ ****************************** electrocardiogram2 v^v^v^v^√\/♥ ****************************** electrocardiogram3 /\/\/\/\/\/\/\/\/\/\/\v^♥ ****************************** electrocardiogram4 √v^√v^♥√v^√v^√ ****************************** elephant °j°m ****************************** emo (///_ ;) ****************************** emo dance ヾ(-_- )ゞ ****************************** energy つ ◕_◕ ༽つ つ ◕_◕ ༽つ ****************************** envelope ✉ ****************************** epic gun ︻┳デ═— ****************************** equalizer ▇ ▅ █ ▅ ▇ ▂ ▃ ▁ ▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇ ****************************** eric >--) ) ) )*> ****************************** error (╯°□°)╯︵ ɹoɹɹƎ ****************************** exchange (╯°□°)╯︵ ǝƃuɐɥɔxǝ ****************************** excited ☜(⌒▽⌒)☞ ****************************** exorcism ح(•̀ж•́)ง † ****************************** eye closed (╯_╰) ****************************** eye roll ⥀.⥀ ****************************** eyes ℃ↂ_ↂↃ ****************************** face •|龴◡龴|• ****************************** facepalm (>ლ) ****************************** fail o(╥﹏╥)o ****************************** fart (ˆ⺫ˆ๑)<3 ****************************** fat ass (__!__) ****************************** faydre (U) [^_^] (U) ****************************** feel perky (`・ω・´) ****************************** fido V•ᴥ•V ****************************** fight (ง'̀-'́)ง ****************************** finger1 ╭∩╮(Ο_Ο)╭∩╮ ****************************** finger2 ┌∩┐(◣_◢)┌∩┐ ****************************** finger3 ಠ︵ಠ凸 ****************************** finger4 ┌∩┐(>_<)┌∩┐ ****************************** finn | (• ◡•)| ****************************** fish invasion ›(̠̄:̠̄c ›(̠̄:̠̄c (¦Ҝ (¦Ҝ ҉ - - - ¦̺͆¦ ▪▌ ****************************** fish skeleton1 >-}-}-}-> ****************************** fish skeleton2 >++('> ****************************** fish swim ¸.·´¯`·.´¯`·.¸¸.·´¯`·.¸><(((º> ****************************** fish1 ><(((('> ****************************** fish2 ><> ****************************** fish3 `·.¸¸ ><((((º>.·´¯`·><((((º> ****************************** fish4 ><> ><> ****************************** fish5 <>< ****************************** fish6 <`)))>< ****************************** fisticuffs ლ(`ー´ლ) ****************************** flex ᕙ(⇀‸↼‶)ᕗ ****************************** flip friend (ノಠ ∩ಠ)ノ彡( \o°o)\ ****************************** fly away ⁽⁽ଘ( ˊᵕˋ )ଓ⁾⁾ ****************************** flying ح˚௰˚づ ****************************** fork ---= ****************************** formula1 car \ō͡≡o˞̶ 
****************************** fox -^^,--,~ ****************************** french kiss :-XÞ ****************************** frown (ღ˘⌣˘ღ) ****************************** fu (ಠ_ಠ)┌∩┐ ****************************** fuck off t(-.-t) ****************************** fuck you nlm (-_-) mln ****************************** fuck you2 (° ͜ʖ͡°)╭∩╮ ****************************** fuckall ╭∩╮(︶︿︶)╭∩╮ ****************************** full mouth :-I ****************************** fungry Σ_(꒪ཀ꒪」∠)_ ****************************** ghost ‹’’›(Ͼ˳Ͽ)‹’’› ****************************** gimme ༼ つ ◕_◕ ༽つ ****************************** glasses -@-@- ****************************** glasses2 ᒡ◯ᵔ◯ᒢ ****************************** glitter (*・‿・)ノ⌒*:・゚✧ ****************************** go away bear ╭∩╮ʕ•ᴥ•ʔ╭∩╮ ****************************** gotit (☞゚∀゚)☞ ****************************** gtalk fit (•̪̀●́)=ε/̵͇̿̿/'̿̿ ̿ ̿̿ N --------{---(@ ****************************** guitar c====(=#O| ) ~~ ♬·¯·♩¸¸♪·¯·♫¸ ****************************** gun1 ︻╦╤─ ****************************** gun2 ︻デ═一 ****************************** gun3 ╦̵̵̿╤─ ҉ ~ • ****************************** gun4 ︻╦̵̵͇̿̿̿̿╤── ****************************** hacksaw [|^^^^^^^ ****************************** hairstyle ⨌⨀_⨀⨌ ****************************** hal @_'-' ****************************** hammer #== ****************************** happy ۜ\(סּںסּَ` )/ۜ ****************************** happy birthday 1 ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ ****************************** happy face ヽ(´▽`)/ ****************************** happy hug \(ᵔᵕᵔ)/ ****************************** happy square 【ツ】 ****************************** happy10 (´ツ`) ****************************** happy11 ( ^◡^)っ ****************************** happy12 ┏(^0^)┛┗(^0^)┓ ****************************** happy13 (°⌣°) ****************************** happy14 ٩(^‿^)۶ ****************************** happy15 (•‿•) ****************************** happy16 ó‿ó ****************************** happy17 ٩◔‿◔۶ ****************************** happy18 ಠ◡ಠ ****************************** happy19 ●‿● ****************************** happy2 ⎦˚◡˚⎣ ****************************** happy20 ( '‿' ) ****************************** happy21 ^‿^ ****************************** happy22 ┌( ಠ‿ಠ)┘ ****************************** happy23 (˘◡˘) ****************************** happy24 ☯‿☯ ****************************** happy25 \(• ◡ •)/ ****************************** happy26 ( ͡ʘ ͜ʖ ͡ʘ) ****************************** happy27 ( ͡• ͜ʖ ͡• ) ****************************** happy3 ㋡ ****************************** happy4 ^_^ ****************************** happy5 [^_^] ****************************** happy6 (ツ) ****************************** happy7 【シ】 ****************************** happy8 ㋛ ****************************** happy9 (シ) ****************************** head shot ->~∑≥_≤) ****************************** headphone1 d[-_-]b ****************************** headphone2 d(-_-)b ****************************** headphone3 (W) ****************************** heart bold ♥ ****************************** heart regular ♡ ****************************** heart1 »-(¯`·.·´¯)-> ****************************** heart2 ♡♡ ****************************** heart3 <3 ****************************** hell yeah (òÓ,)_\,,/ ****************************** hello (ʘ‿ʘ)╯ ****************************** hello2 (ツ)ノ ****************************** help ٩(͡๏̯͡๏)۶ ****************************** high five ( ⌒o⌒)人(⌒-⌒ )v ****************************** hitchhicking (งツ)ว ****************************** homer (_8(|) ****************************** 
homer simpson =(:o) ****************************** honeycute ❤◦.¸¸. ◦✿ ****************************** house __̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___ ****************************** hoxom h(o x o )m ****************************** hug me (っ◕‿◕)っ ****************************** hugger (づ ̄ ³ ̄)づ ****************************** huhu █▬█ █▄█ █▬█ █▄█ ****************************** hybrix ʕʘ̅͜ʘ̅ʔ ****************************** i dont care ╭∩╮(︶︿︶)╭∩╮ ****************************** i kill you ̿ ̿̿'̿̿\̵͇̿̿\=(•̪●)=/̵͇̿̿/'̿̿ ̿ ̿ ****************************** im a hugger (⊃。•́‿•̀。)⊃ ****************************** infinity (X) ****************************** injured (҂◡_◡) ****************************** inlove (✿ ♥‿♥) ****************************** innocent face ʘ‿ʘ ****************************** japanese lion face °‿‿° ****************************** jaymz (•̪●)==ε/̵͇̿​̿/’̿’̿ ̿ ̿̿ `(•.°)~ ****************************** jazz musician ヽ(⌐■_■)ノ♪♬ ****************************** john lennon ((ºjº)) ****************************** jokeranonimous ╭∩╮ (òÓ,) ╭∩╮ ****************************** jokeranonimous2 ╭∩╮(ô¿ô)╭∩╮ ****************************** joy n_n ****************************** judgemental \{ಠʖಠ\} ****************************** judging ( ఠ ͟ʖ ఠ) ****************************** kablewee ̿' ̿'\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/'̿'̿ ̿ ****************************** killer (⌐■_■)--︻╦╤─ - - - (╥﹏╥) ****************************** kilroy was here " Ü " ****************************** king -_- ****************************** kirby (つ -‘ _ ‘- )つ ****************************** kirby dance <(''<) <( ' ' )> (> '')> ****************************** kiss (o'3'o) ****************************** kiss my ass (_x_) ****************************** kiss2 (︶ε︶メ) ****************************** kiss3 ╮(︶ε︶メ)╭ ****************************** kissing ( ˘ ³˘)♥ ****************************** kissing2 ( ˘з˘)ε˘`) ****************************** kissing3 (~˘з˘)~~(˘ε˘~) ****************************** kissing4 (っ˘з(O.O )♥ ****************************** kissing5 (`˘з(•˘⌣˘•) ****************************** kissing6 (っ˘з(˘⌣˘ ) ****************************** kitty =^. .^= ****************************** kitty emote ᵒᴥᵒ# ****************************** knife1 )xxxxx[;;;;;;;;;> ****************************** knife2 )xxx[::::::::::> ****************************** koala @( * O * )@ ****************************** kokain ̿ ̿' ̿'\̵͇̿̿\з=(•̪●)=ε/̵͇̿̿/'̿''̿ ̿ ****************************** kyubey /人 ⌒ ‿‿ ⌒ 人\ ****************************** kyubey2 /人 ◕‿‿◕ 人\ ****************************** laughing (^▽^) ****************************** lenny ( ͡° ͜ʖ ͡°) ****************************** licking lips :-9 ****************************** line brack ●▬▬▬▬๑۩۩๑▬▬▬▬▬● ****************************** linqan :Q___ ****************************** listening to headphones ◖ᵔᴥᵔ◗ ♪ ♫ ****************************** loading1 █▒▒▒▒▒▒▒▒▒ ****************************** loading2 ███▒▒▒▒▒▒▒ ****************************** loading3 █████▒▒▒▒▒ ****************************** loading4 ███████▒▒▒ ****************************** loading5 █████████▒ ****************************** loading6 ██████████ ****************************** loch ness monster _mmmP ****************************** long rose ---------------------{{---<((@) ****************************** looking down (._.) 
****************************** looking face ô¿ô ****************************** love ⓛⓞⓥⓔ ****************************** love in my eye1 (♥_♥) ****************************** love in my eye2 (。❤◡❤。) ****************************** love in my eye3 (❤◡❤) ****************************** love2 ~♡ⓛⓞⓥⓔ♡~ ****************************** love3 ♥‿♥ ****************************** love4 (Ɔ ˘⌣˘)♥(˘⌣˘ C) ****************************** machinegun ,==,-- ****************************** mad òÓ ****************************** mad10 (•ˋ _ ˊ•) ****************************** mad2 (ノ`Д ́)ノ ****************************** mad3 >_< ****************************** mad4 ~_~ ****************************** mad5 Ծ_Ծ ****************************** mad6 ⋋_⋌ ****************************** mad7 (ノ≥∇≤)ノ ****************************** mad8 {{{(>_<)}}} ****************************** mad9 ƪ(`▿▿▿▿´ƪ) ****************************** mail box |M|/ ****************************** man spider /╲/\༼ *ಠ 益 ಠ* ༽/\╱\ ****************************** man tears ಥ_ಥ ****************************** mango ) _ _ __/°°¬ ****************************** marge simpson ()()():| ****************************** marshmallows -()_)--()_)--- ****************************** med ب_ب ****************************** med man (⋗_⋖) ****************************** meditation ‿( ́ ̵ _-`)‿ ****************************** meep \(°^°)/ ****************************** melp1 (<>..<>) ****************************** melp2 (<(<>(<>.(<>..<>).<>)<>)>) ****************************** meow ฅ^•ﻌ•^ฅ ****************************** metal \m/_(>_<)_\m/ ****************************** mini penis =D ****************************** monkey @('_')@ ****************************** monocle (╭ರ_•́) ****************************** monster ٩(̾●̮̮̃̾•̃̾)۶ ****************************** monster2 ٩(- ̮̮̃-̃)۶ ****************************** mouse1 ----{,_,"> ****************************** mouse2 . ~~(__^·> ****************************** mouse3 <·^__)~~ . ****************************** mouse4 —-{,_,”><",_,}---- ****************************** mouse5 <:3 )~~~~ ****************************** mouse6 <^__)~ ****************************** mouse7 ~(__^> ****************************** mtmtika :o + :p = 69 ****************************** myancat mmmyyyyy<⦿⽘⦿>aaaannn ****************************** nathan ♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪ ****************************** needle1 ┣▇▇▇═─ ****************************** needle2 |==|iiii|>----- ****************************** neo (⌐■_■)--︻╦╤─ - - - ****************************** nerd ::( ****************************** no support 乁( ◔ ౪◔)「 ┑( ̄Д  ̄)┍ ****************************** nope t(-_-t) ****************************** nose \˚ㄥ˚\ ****************************** nose2 |'L'| ****************************** oar ===========(8888) ****************************** old lady boobs |\o/\o/| ****************************** opera ヾ(´〇`)ノ♪♪♪ ****************************** owlkin (ᾢȍˬȍ)ᾢ ļ ļ ļ ļ ļ ****************************** pac man ᗧ···ᗣ···ᗣ·· ****************************** palm tree 'T` ****************************** panda ヽ( ̄(エ) ̄)ノ ****************************** party time ┏(-_-)┛┗(-_- )┓┗(-_-)┛┏(-_-)┓ ****************************** peace yo! 
(‾⌣‾)♉ ****************************** peepers ಠಠ ****************************** penis 8===D ****************************** penis2 ○○)=======o) ****************************** perky ( ๏ Y ๏ ) ****************************** pictou |\_______(#*#)_______/| ****************************** pie fight ---=======[} ****************************** pig1 ^(*(oo)*)^ ****************************** pig2 ༼☉ɷ⊙༽ ****************************** piggy (∩◕(oo)◕∩) ****************************** ping pong ( •_•)O*¯`·.¸.·´¯`°Q(•_• ) ****************************** pipe ====\_/ ****************************** pirate ✌(◕‿-)✌ ****************************** pistols1 ¯¯̿̿¯̿̿'̿̿̿̿̿̿̿'̿̿'̿̿̿̿̿'̿̿̿)͇̿̿)̿̿̿̿ '̿̿̿̿̿̿\̵͇̿̿\=(•̪̀●́)=o/̵͇̿̿/'̿̿ ̿ ̿̿ ****************************** pistols2 ̿' ̿'\̵͇̿̿\з=(◕_◕)=ε/̵͇̿̿/'̿'̿ ̿ ****************************** pistols3 ̿̿ ̿̿ ̿’̿̿’̿\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/’̿̿’̿ ̿ ̿̿ ̿̿ ****************************** pistols4 (•̪●)=ε/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿ ̿”” ****************************** pistols5 ̿̿ ̿̿ ̿̿ ̿'̿'\̵͇̿̿\з=( ͠° ͟ʖ ͡°)=ε/̵͇̿̿/'̿̿ ̿ ̿ ̿ ̿ ̿ ****************************** playing cards [♥]]] [♦]]] [♣]]] [♠]]] ****************************** playing cards clubs [♣]]] ****************************** playing cards diamonds [♦]]] ****************************** playing cards hearts [♥]]] ****************************** playing cards spades [♠]]] ****************************** playing in snow (╯^□^)╯︵ ❄☃❄ ****************************** point (☞゚ヮ゚)☞ ****************************** polar bear ˁ˚ᴥ˚ˀ ****************************** possessed <>_<> ****************************** power lines TTT ****************************** pretty eyes ఠ_ఠ ****************************** professor """⌐(ಠ۾ಠ)¬""" ****************************** puls ––•–√\/––√\/––•–– ****************************** punch O=('-'Q) ****************************** pursing lips :-" ****************************** put the table back ┬─┬ ノ( ゜-゜ノ) ****************************** rak /⦿L⦿\ ****************************** rare ┌ಠ_ಠ)┌∩┐ ᶠᶸᶜᵏ♥ᵧₒᵤ ****************************** ready to cry :-} ****************************** real face ( ͡° ͜ʖ ͡°) ****************************** really mad >:-I ****************************** really sad :-C ****************************** regular ass (_!_) ****************************** religious ☪ ✡ † ☨ ✞ ✝ ☥ ☦ ☓ ♁ ☩ ****************************** resting my eyes ᴖ̮ ̮ᴖ ****************************** roadblock X+X+X+X+X ****************************** robber -╤╗_(◙◙)_╔╤- - - - \o/ \o/ \o/ ****************************** robot boy ◖(◣☩◢)◗ ****************************** robot1 d[ o_0 ]b ****************************** robot2 c[○┬●]כ ****************************** robot3 {•̃_•̃} ****************************** rock on1 \,,/(^_^)\,,/ ****************************** rock on2 \m/(-_-)\m/ ****************************** rocket ∙∙∙∙∙·▫▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ☼)===> ****************************** roke _\m/ ****************************** rope ╚(▲_▲)╝ ****************************** rose1 --------{---(@ ****************************** rose2 @}}>----- ****************************** rose3 @-->--->--- ****************************** rose4 @}~}~~~ ****************************** rose5 @-}-- ****************************** rose6 @)}---^----- ****************************** rose7 @->-->--- ****************************** round bird ,(u°)> ****************************** round cat ~(^._.) ****************************** running ε=ε=ε=┌(;*´Д`)ノ ****************************** russian boobs [.][.] 
****************************** ryans dick 8======D ****************************** sad and confused ¯\_(⊙︿⊙)_/¯ ****************************** sad and crying (ᵟຶ︵ ᵟຶ) ****************************** sad face (ಥ⌣ಥ) ****************************** sad1 ε(´סּ︵סּ`)з ****************************** sad2 (✖╭╮✖) ****************************** sad3 (◑﹏◐) ****************************** sad4 (◕_◕) ****************************** sad5 (´ᗣ`) ****************************** sad6 Y_Y ****************************** sat '(◣_◢)' ****************************** satan ↑_(ΦwΦ;)Ψ ****************************** satisfied (◠﹏◠) ****************************** scissors ✄ ****************************** screaming :-@ ****************************** seal (ᵔᴥᵔ) ****************************** sean the sheep <('--')> ****************************** sex symbol ◢♂◣◥♀◤◢♂◣◥♀◤ ****************************** shark ~~~~~~^~~~~~ ****************************** shark attack ~~~~~~\o/~~~~~/\~~~~~ ****************************** shark face ( ˇ෴ˇ ) ****************************** sheep °l°(,,,,); ****************************** shocked1 (∩╹□╹∩) ****************************** shocked2 :-O ****************************** shrug ¯\_(ツ)_/¯ ****************************** shy (๑•́ ₃ •̀๑) ****************************** singing d(^o^)b¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** singing2 ♪└( ̄◇ ̄)┐♪ ****************************** sky free ѧѦ ѧ ︵͡︵ ̢ ̱ ̧̱ι̵̱̊ι̶̨̱ ̶̱ ︵ Ѧѧ ︵͡ ︵ ѧ Ѧ ̵̗̊o̵̖ ︵ ѦѦ ѧ ****************************** sleeping (-.-)Zzz... ****************************** sleeping baby [{-_-}] ZZZzz zz z... ****************************** sleepy 눈_눈 ****************************** sleepy coffee ( -_-)旦~ ****************************** slenderman ϟƖΣNd€RMαN ****************************** smile :-) ****************************** smirk :-, ****************************** smooth (づ  ̄ ³ ̄)づ ⓈⓂⓄⓄⓉⒽ ****************************** smug bastard (‾⌣‾) ****************************** snail1 '-'_@_ ****************************** snail2 '\Q___ ****************************** sniper rifle ︻デ┳═ー ****************************** sniperstars ✯╾━╤デ╦︻✯ ****************************** snowing ✲´*。.❄¨¯`*✲。❄。*。 ****************************** snowman1 ☃ ****************************** snowman2 { }( : ^ )( """" )( ) ****************************** sophie <XX""XX> ****************************** sorreh bro (◢_◣) ****************************** spade bold ♠ ****************************** spade regular ♤ ****************************** sparkling heart -`ღ´- ****************************** spear >>-;;;------;;--> ****************************** spell cast ╰( ⁰ ਊ ⁰ )━☆゚.*・。゚ ****************************** sperm ~~o ****************************** spider1 //O\ ****************************** spider2 /\oo/\ ****************************** spider3 ///\oo/\\\ ****************************** spider4 /╲/\╭ºoꍘoº╮/\╱\ ****************************** spot ( . Y . 
) ****************************** squee ヾ(◎o◎,,;)ノ ****************************** squid くコ:彡 ****************************** squigle with spirals 6\9 ****************************** srs face (ಠ_ಠ) ****************************** star in my eyes <*_*> ****************************** staring ٩(๏_๏)۶ ****************************** stars ✌⊂(✰‿✰)つ✌ ****************************** stars2 ⋆ ✢ ✣ ✤ ✥ ✦ ✧ ✩ ✪ ✫ ✬ ✭ ✮ ✯ ✰ ★ ****************************** stealth fighter -^- ****************************** stranger danger (づ。◕‿‿◕。)づ ****************************** strut ᕕ( ᐛ )ᕗ ****************************** stunna shades (っ▀¯▀)つ ****************************** sunglasses1 (•_•)>⌐■-■ (⌐■_■) ****************************** sunglasses2 B-) ****************************** sunny day ☁ ▅▒░☼‿☼░▒▅ ☁ ****************************** superman -^mOm^- ****************************** superman logo /s\ ****************************** surprised1 =:-o ****************************** surprised10 (⊙.◎) ****************************** surprised11 ๏_๏ ****************************** surprised12 (˚-˚) ****************************** surprised13 ˚o˚ ****************************** surprised14 (O.O) ****************************** surprised15 ( ゚o゚) ****************************** surprised16 ◉_◉ ****************************** surprised17 【•】_【•】 ****************************** surprised18 (•ิ_•) ****************************** surprised19 ⊙⊙ ****************************** surprised2 ( ゚Д゚) ****************************** surprised20 ͡๏_͡๏ ****************************** surprised3 (O_o) ****************************** surprised4 (º_•) ****************************** surprised5 (º.º) ****************************** surprised6 ⊙▃⊙ ****************************** surprised7 O.o ****************************** surprised8 ●_● ****************************** surprised9 (⊙̃.o) ****************************** swim _(ッ)>_// ****************************** swim2 ー(ッ)」 ****************************** swim3 _(ッ)へ ****************************** sword1 (===||:::::::::::::::> ****************************** sword10 O==I======> ****************************** sword2 ▬▬ι═══════ﺤ -═══════ι▬▬ ****************************** sword3 ס₪₪₪₪§|(Ξ≥≤≥≤≥≤ΞΞΞΞΞΞΞΞΞΞ> ****************************** sword4 |O/////[{:;:;:;:;:;:;:;:;> ****************************** sword5 <%%%%|==========> ****************************** sword6 o()xxxx[{::::::::::::::::::::::::::::::::::> ****************************** sword7 o==[]::::::::::::::::> ****************************** sword8 ▬▬ι═══════> ****************************** sword9 <═══════ι▬▬ ****************************** table flip (╯°□°)╯︵ ┻━┻ ****************************** table flip10 (/ .□.)\ ︵╰(゜Д゜)╯︵ /(.□. \) ****************************** table flip2 (ノಥ益ಥ)ノ ┻━┻ ****************************** table flip3 ┬─┬ノ( º _ ºノ) ****************************** table flip4 (ノಠ益ಠ)ノ彡┻━┻ ****************************** table flip5 ┬──┬ ¯\_(ツ) ****************************** table flip6 ┻━┻ ︵ ¯\(ツ)/¯ ︵ ┻━┻ ****************************** table flip7 (╯°□°)╯︵ ┻━┻ ︵ ╯(°□° ╯) ****************************** table flip8 (╯°Д°)╯︵ /(.□ . 
\) ****************************** table flip9 (ノ^_^)ノ┻━┻ ┬─┬ ノ( ^_^ノ) ****************************** taking a dump (⩾﹏⩽) ****************************** teddy ˁ(⦿ᴥ⦿)ˀ ****************************** teepee /|\ ****************************** telephone ε(๏_๏)з】 ****************************** tent1 //\ ****************************** tent2 /\\ ****************************** tgif “ヽ(´▽`)ノ” ****************************** thanks \(^-^)/ ****************************** things that can_t be unseen ♨_♨ ****************************** this is areku d(^o^)b ****************************** tidy up ┬─┬⃰͡ (ᵔᵕᵔ͜ ) ****************************** tie-fighter |—O—| ****************************** tired ( ͡ಠ ʖ̯ ͡ಠ) ****************************** touchy feely ԅ(≖‿≖ԅ) ****************************** toungue out1 :-Þ ****************************** toungue out2 :-P ****************************** train /˳˳_˳˳\!˳˳X˳˳!(˳˳_˳˳)[˳˳_˳˳] ****************************** tree stump J"l ****************************** tripping out q(❂‿❂)p ****************************** trolling ༼∵༽ ༼⍨༽ ༼⍢༽ ༼⍤༽ ****************************** tron (\/)(;,,;)(\/) ****************************** trumpet -=iii=<() ****************************** ufo1 .-=-. ****************************** ufo2 .-=o=-. ****************************** ukulele { o }==(::) ****************************** umadbro ¯\_(ツ)_/¯ ****************************** up (◔/‿\◔) ****************************** upset ◤(¬‿¬)◥ ****************************** upsidedown ( ͜。 ͡ʖ ͜。) ****************************** vagina (:) ****************************** victory V(-.o)V ****************************** volcano1 /"\ ****************************** volcano2 /W\ ****************************** volcano3 /V\ ****************************** wat ಠ_ಠ ****************************** wat-wat Σ(‘◉⌓◉’) ****************************** wave dance ~(^-^)~ ****************************** waves °º¤ø,¸¸,ø¤º°`°º¤ø,¸,ø¤°º¤ø,¸¸,ø¤º°`°º¤ø,¸ ****************************** weather ☼ ☀ ☁ ☂ ☃ ☄ ☾ ☽ ❄ ☇ ☈ ⊙ ☉ ℃ ℉ ° ❅ ✺ ϟ ****************************** westbound fish < )))) >< ****************************** what? ة_ة ****************************** what?? (Ͼ˳Ͽ)..!!! ****************************** whisling (っ•́。•́)♪♬ ****************************** why ლ( `Д’ ლ) ****************************** wink ;-) ****************************** winnie the pooh ʕ •́؈•̀) ****************************** winning (•̀ᴗ•́)و ̑̑ ****************************** wizard (∩ ͡° ͜ʖ ͡°)⊃━☆゚. * ****************************** wizard2 (∩`-´)⊃━☆゚.*・。゚ ****************************** woman ▓⚗_⚗▓ ****************************** woops :-* ****************************** worm _/\__/\__0> ****************************** worried (´・_・`) ****************************** wtf dude? 
\(◑д◐)>∠(◑д◐) ****************************** yawning \(^o^)/ ****************************** yessir ∠(・`_´・ ) ****************************** yo __o000o__(o)(o)__o000o__ ****************************** yolo Yᵒᵘ Oᶰˡʸ Lᶤᵛᵉ Oᶰᶜᵉ ****************************** yun (っ˘ڡ˘ς) ****************************** zable ಠ_ರೃ ****************************** zoidberg (\/)(Ö,,,,Ö)(\/) ****************************** zombie 'º_º' ****************************** zombie2 [¬º-°]¬ ****************************** zoned (⊙_◎) ****************************** ###Markdown ART Version : 5.2 ###Code from art import * ###Output _____no_output_____ ###Markdown Art Counter ###Code ART_COUNTER ###Output _____no_output_____ ###Markdown Art List ###Code art_list() ###Output 3 ᕙ༼ ,,ԾܫԾ,, ༽ᕗ ****************************** 5 ᕙ༼ ,,இܫஇ,, ༽ᕗ ****************************** 9/11 truth ✈__✈ █ █ ▄ ****************************** acid ⊂(◉‿◉)つ ****************************** afraid ( ゚ Д゚) ****************************** airplane1 ‛¯¯٭٭¯¯(▫▫)¯¯٭٭¯¯’ ****************************** airplane2 ✈ ****************************** airplane3 ✈ ✈ ———- ♒✈ ****************************** ak-47 ︻┳デ═— ****************************** alien ::) ****************************** almost cared ╰╏ ◉ 〜 ◉ ╏╯ ****************************** american money1 [($)] ****************************** american money2 [̲̅$̲̅(̲̅1̲̅)̲̅$̲̅] ****************************** american money3 [̲̅$̲̅(̲̅5̲̅)̲̅$̲̅] ****************************** american money4 [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** american money5 [̲̅$̲̅(̲̅2οο̲̅)̲̅$̲̅] ****************************** angel1 ^i^ ****************************** angel2 O:-) ****************************** angry ლ(ಠ益ಠ)ლ ****************************** angry birds ( ఠൠఠ )ノ ****************************** angry face (⋟﹏⋞) ****************************** angry face2 (╬ ಠ益ಠ) ****************************** angry troll ヽ༼ ಠ益ಠ ༽ノ ****************************** angry2 ( ͠° ͟ʖ ͡°) ****************************** ankush ︻デ┳═ー*----* ****************************** arrow1 »»---------------------► ****************************** arrow2 XXX--------> ****************************** arrowhead ⤜(ⱺ ʖ̯ⱺ)⤏ ****************************** at what cost ლ(ಠ益ಠლ) ****************************** atish (| - _ - |) ****************************** awesome <:3 )~~~ ****************************** awkward •͡˘㇁•͡˘ ****************************** bad hair1 =:-) ****************************** bad hair2 =:-( ****************************** bagel nln >_< nln ****************************** band aid (̲̅:̲̅:̲̅:̲̅[̲̅ ̲̅]̲̅:̲̅:̲̅:̲̅ ) ****************************** barbell ▐━━━━━▌ ****************************** barcode1 █║▌│ █│║▌ ║││█║▌ │║║█║ │║║█║ ****************************** barcode2 ║█║▌║█║▌│║▌║▌█║ ****************************** barf (´ж`ς) ****************************** baseball fan q:o) ****************************** basking in glory ヽ(´ー`)ノ ****************************** bat1 ^O^ ****************************** bat2 ^v^ ****************************** bautista (╯°_°)╯︵ ━━━ ****************************** bear ʕ•ᴥ•ʔ ****************************** bear GTFO ʕ •`ᴥ•´ʔ ****************************** bear squiting ʕᵔᴥᵔʔ ****************************** bear2 (ʳ ´º㉨º) ****************************** because ∵ ****************************** bee ¸.·´¯`·¸¸.·´¯`·.¸.-<\^}0=: ****************************** being draged ╰(◣﹏◢)╯ ****************************** bender ¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦ ****************************** big eyes ⺌∅‿∅⺌ ****************************** big kiss :-X 
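###Markdown The names printed by `art_list()` are the keys accepted by the library's lookup helpers. Below is a minimal usage sketch, not part of the original notebook: it assumes the `art` package is installed and that the names it references ("shrug", "table flip") are present in the installed release, as they are in the listing above.
###Code
# Sketch (assumes `art` is installed and ships the "shrug" and "table flip" entries):
# art(name) returns the art string registered under that name; aprint(name) prints it.
from art import art, aprint

shrug = art("shrug")      # returns the string for the "shrug" entry, i.e. ¯\_(ツ)_/¯
print(shrug)
aprint("table flip")      # prints the "table flip" art directly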
****************************** big nose ˚∆˚ ****************************** big smile :-D ****************************** bird (⌒▽⌒) ****************************** birds ~(‾▿‾)~ ****************************** blackeye 0__# ****************************** bomb !!( ’ ‘)ノノ⌒●~* ****************************** boobies (. )( .) ****************************** boobs (.)(.) ****************************** boobs2 (·人·) ****************************** boombox1 ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♫♪ ****************************** boombox2 ♫♪ |̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅| ♫♪ ****************************** boxing ლ(•́•́ლ) ****************************** breakdown ಥ﹏ಥ ****************************** british money [£::] ****************************** buck teeth :-B ****************************** bugs bunny E=B ****************************** bullshit |3ᵕᶦᶦᶳᶣᶨᶵ ****************************** bunny (\_/) ****************************** butt (‿|‿) ****************************** butterfly Ƹ̵̡Ӝ̵̨̄Ʒ ****************************** camera [◉"] ****************************** canoe .,.,\______/,..,., ****************************** car `o##o> ****************************** car race ∙،°. ˘Ô≈ôﺣ » » » ****************************** care crowd (-(-_(-_-)_-)-) ****************************** careless ◔_◔ ****************************** carpet roll @__ ****************************** cassette1 |[●▪▪●]| ****************************** cassette2 [¯ↂ■■ↂ¯] ****************************** cat face ⦿⽘⦿ ****************************** cat smile ≧◔◡◔≦ ****************************** cat1 =^..^= ****************************** cat2 龴ↀ◡ↀ龴 ****************************** cat3 ^.--.^ ****************************** cat4 (=^ェ^=) ****************************** caterpillar ,/\,/\,/\,/\,/\,/\,o ****************************** catlenny ( ͡° ᴥ ͡°) ****************************** chair ╦╣ ****************************** charly +:) ****************************** chasing ''⌐(ಠ۾ಠ)¬'' ****************************** cheer ^(¤o¤)^ ****************************** cheers ( ^_^)o自自o(^_^ ) ****************************** chess ♞▀▄▀▄♝▀▄ ****************************** chess pieces ♚ ♛ ♜ ♝ ♞ ♟ ♔ ♕ ♖ ♗ ♘ ♙ ****************************** chicken ʚ(•` ****************************** chu (´ε` ) ****************************** cigarette1 (̅_̅_̅_̅(̅_̅_̅_̅_̅_̅_̅_̅_̅̅_̅()ڪے ****************************** cigarette2 (____((____________()~~~ ****************************** cigarette3 ()___)____________) ****************************** clowning *:o) ****************************** club bold ♣ ****************************** club regular ♧ ****************************** coffee now {zzz}°°°( -_-)>c[_] ****************************** coffee1 c[_] ****************************** coffee2 l_D ****************************** coffee3 l_P ****************************** coffee4 l_B ****************************** computer mouse [E} ****************************** concerned (@_@) ****************************** confused scratch (⊙.☉)7 ****************************** confused1 :-/ ****************************** confused10 (*'__'*) ****************************** confused2 :-\ ****************************** confused3 (°~°) ****************************** confused4 ^^' ****************************** confused5 é_è ****************************** confused6 (˚ㄥ_˚) ****************************** confused7 (; ͡°_ʖ ͡°) ****************************** confused8 (´•_•`) ****************************** confused9 (ˇ_ˇ’!l) ****************************** crab (\|) ._. 
(|/) ****************************** crayons ((̲̅ ̲̅(̲̅C̲̅r̲̅a̲̅y̲̅o̲̅l̲̲̅̅a̲̅( ̲̅((> ****************************** crazy ミ●﹏☉ミ ****************************** creeper ƪ(ړײ)‎ƪ​​ ****************************** crotch shot \*/ ****************************** cry (╯︵╰,) ****************************** cry face 。゚( ゚இ‸இ゚)゚。 ****************************** cry troll ༼ ༎ຶ ෴ ༎ຶ༽ ****************************** crying1 Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰_Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰ ****************************** crying2 :~( ****************************** cthulhu ^(;,;)^ ****************************** cthulhu2 ( ;,;) ****************************** cup1 (▓ ****************************** cup2 \̅_̅/̷̚ʾ ****************************** cussing :-# ****************************** cute cat ^⨀ᴥ⨀^ ****************************** cute face (。◕‿◕。) ****************************** cute face2 (ღ˘◡˘ღ) ****************************** cute face3 ✿◕ ‿ ◕✿ ****************************** cute face4 ❀◕ ‿ ◕❀ ****************************** cute face5 (✿◠‿◠) ****************************** cute face6 (◕‿◕✿) ****************************** cute face7 ☾˙❀‿❀˙☽ ****************************** cute face8 (◡‿◡✿) ****************************** cute face9 ლ(╹◡╹ლ) ****************************** dab ヽ( •_)ᕗ ****************************** dagger cxxx|;:;:;:;:;:;:;:;> ****************************** dalek ̵̄/͇̐\ ****************************** damnyou (ᕗ ͠° ਊ ͠° )ᕗ ****************************** dance (>'-')> <('_'<) ^('_')\- \m/(-_-)\m/ <( '-')> \_( .")> <(._.)-` ****************************** dance2 ♪♪ ヽ(ˇ∀ˇ )ゞ ****************************** dancee ♪┏(°.°)┛┗(°.°)┓┗(°.°)┛┏(°.°)┓ ♪ ****************************** dancing ┌(ㆆ㉨ㆆ)ʃ ****************************** dancing people ‎(/.__.)/ \(.__.\) ****************************** dead child '-=,o ****************************** dead eyes ¿ⓧ_ⓧﮌ ****************************** dead girl '==>x\9 ****************************** dead guy '==xx\0 ****************************** dear god why щ(゚Д゚щ) ****************************** death star defense team |-o-| (-o-) |-o-| ****************************** decorate ▂▃▅▇█▓▒░۩۞۩ ۩۞۩░▒▓█▇▅▃▂ ****************************** depressed (︶︹︶) ****************************** derp ヘ(。□°)ヘ ****************************** devil ]:-> ****************************** devilish grin >:-D ****************************** devilish smile >:) ****************************** devious smile ಠ‿ಠ ****************************** dgaf ┌∩┐(◣ _ ◢)┌∩┐ ****************************** diamond bold ♦ ****************************** diamond regular ♢ ****************************** dice [: :] ****************************** dick 8====D ****************************** disagree ٩◔̯◔۶ ****************************** discombobulated ⊙﹏⊙ ****************************** dislike1 (Ծ‸ Ծ) ****************************** dislike2 ( ಠ ʖ̯ ಠ) ****************************** dna sample ~ ****************************** do you even lift bro? 
ᕦ(ò_óˇ)ᕤ ****************************** domino [: :|:::] ****************************** don king ==8-) ****************************** double flip ┻━┻ ︵ヽ(`Д´)ノ︵ ┻━┻ ****************************** drowning 人人人ヾ( ;×o×)〃 人人人 ****************************** druling1 :-... ****************************** druling2 :-P~~~ ****************************** drunkenness ヽ(´ー`)┌ ****************************** dude glasses1 @[O],[O] ****************************** dude glasses2 @(o),(o) ****************************** dummy <-|-'_'-|-> ****************************** dunno ¯\(°_o)/¯ ****************************** dunno2 \_(シ)_/ ****************************** dunno3 └㋡┘ ****************************** dunno4 ╘㋡╛ ****************************** dunno5 ٩㋡۶ ****************************** eastbound fish ><((((> ****************************** eaten apple [===]-' ****************************** eds dick 8=D ****************************** eeriemob (-(-_-(-_(-_(-_-)_-)-_-)_-)_-)-) ****************************** electrocardiogram1 √v^√v^√v^√v^√♥ ****************************** electrocardiogram2 v^v^v^v^√\/♥ ****************************** electrocardiogram3 /\/\/\/\/\/\/\/\/\/\/\v^♥ ****************************** electrocardiogram4 √v^√v^♥√v^√v^√ ****************************** elephant °j°m ****************************** emo (///_ ;) ****************************** emo dance ヾ(-_- )ゞ ****************************** energy つ ◕_◕ ༽つ つ ◕_◕ ༽つ ****************************** envelope ✉ ****************************** epic gun ︻┳デ═— ****************************** equalizer ▇ ▅ █ ▅ ▇ ▂ ▃ ▁ ▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇ ****************************** eric >--) ) ) )*> ****************************** error (╯°□°)╯︵ ɹoɹɹƎ ****************************** exchange (╯°□°)╯︵ ǝƃuɐɥɔxǝ ****************************** excited ☜(⌒▽⌒)☞ ****************************** exorcism ح(•̀ж•́)ง † ****************************** eye closed (╯_╰) ****************************** eye roll ⥀.⥀ ****************************** eyes ℃ↂ_ↂↃ ****************************** face •|龴◡龴|• ****************************** facepalm (>ლ) ****************************** fail o(╥﹏╥)o ****************************** fart (ˆ⺫ˆ๑)<3 ****************************** fat ass (__!__) ****************************** faydre (U) [^_^] (U) ****************************** feel perky (`・ω・´) ****************************** fido V•ᴥ•V ****************************** fight (ง'̀-'́)ง ****************************** finger1 ╭∩╮(Ο_Ο)╭∩╮ ****************************** finger2 ┌∩┐(◣_◢)┌∩┐ ****************************** finger3 ಠ︵ಠ凸 ****************************** finger4 ┌∩┐(>_<)┌∩┐ ****************************** finn | (• ◡•)| ****************************** fish invasion ›(̠̄:̠̄c ›(̠̄:̠̄c (¦Ҝ (¦Ҝ ҉ - - - ¦̺͆¦ ▪▌ ****************************** fish skeleton1 >-}-}-}-> ****************************** fish skeleton2 >++('> ****************************** fish swim ¸.·´¯`·.´¯`·.¸¸.·´¯`·.¸><(((º> ****************************** fish1 ><(((('> ****************************** fish2 ><> ****************************** fish3 `·.¸¸ ><((((º>.·´¯`·><((((º> ****************************** fish4 ><> ><> ****************************** fish5 <>< ****************************** fish6 <`)))>< ****************************** fisticuffs ლ(`ー´ლ) ****************************** flex ᕙ(⇀‸↼‶)ᕗ ****************************** flip friend (ノಠ ∩ಠ)ノ彡( \o°o)\ ****************************** fly away ⁽⁽ଘ( ˊᵕˋ )ଓ⁾⁾ ****************************** flying ح˚௰˚づ ****************************** fork ---= ****************************** formula1 car \ō͡≡o˞̶ 
****************************** fox -^^,--,~ ****************************** french kiss :-XÞ ****************************** frown (ღ˘⌣˘ღ) ****************************** fu (ಠ_ಠ)┌∩┐ ****************************** fuck off t(-.-t) ****************************** fuck you nlm (-_-) mln ****************************** fuck you2 (° ͜ʖ͡°)╭∩╮ ****************************** fuckall ╭∩╮(︶︿︶)╭∩╮ ****************************** full mouth :-I ****************************** fungry Σ_(꒪ཀ꒪」∠)_ ****************************** ghost ‹’’›(Ͼ˳Ͽ)‹’’› ****************************** gimme ༼ つ ◕_◕ ༽つ ****************************** glasses -@-@- ****************************** glasses2 ᒡ◯ᵔ◯ᒢ ****************************** glitter (*・‿・)ノ⌒*:・゚✧ ****************************** go away bear ╭∩╮ʕ•ᴥ•ʔ╭∩╮ ****************************** gotit (☞゚∀゚)☞ ****************************** gtalk fit (•̪̀●́)=ε/̵͇̿̿/'̿̿ ̿ ̿̿ N --------{---(@ ****************************** guitar c====(=#O| ) ~~ ♬·¯·♩¸¸♪·¯·♫¸ ****************************** gun1 ︻╦╤─ ****************************** gun2 ︻デ═一 ****************************** gun3 ╦̵̵̿╤─ ҉ ~ • ****************************** gun4 ︻╦̵̵͇̿̿̿̿╤── ****************************** hacksaw [|^^^^^^^ ****************************** hairstyle ⨌⨀_⨀⨌ ****************************** hal @_'-' ****************************** hammer #== ****************************** happy ۜ\(סּںסּَ` )/ۜ ****************************** happy birthday 1 ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ ****************************** happy face ヽ(´▽`)/ ****************************** happy hug \(ᵔᵕᵔ)/ ****************************** happy square 【ツ】 ****************************** happy10 (´ツ`) ****************************** happy11 ( ^◡^)っ ****************************** happy12 ┏(^0^)┛┗(^0^)┓ ****************************** happy13 (°⌣°) ****************************** happy14 ٩(^‿^)۶ ****************************** happy15 (•‿•) ****************************** happy16 ó‿ó ****************************** happy17 ٩◔‿◔۶ ****************************** happy18 ಠ◡ಠ ****************************** happy19 ●‿● ****************************** happy2 ⎦˚◡˚⎣ ****************************** happy20 ( '‿' ) ****************************** happy21 ^‿^ ****************************** happy22 ┌( ಠ‿ಠ)┘ ****************************** happy23 (˘◡˘) ****************************** happy24 ☯‿☯ ****************************** happy25 \(• ◡ •)/ ****************************** happy26 ( ͡ʘ ͜ʖ ͡ʘ) ****************************** happy27 ( ͡• ͜ʖ ͡• ) ****************************** happy3 ㋡ ****************************** happy4 ^_^ ****************************** happy5 [^_^] ****************************** happy6 (ツ) ****************************** happy7 【シ】 ****************************** happy8 ㋛ ****************************** happy9 (シ) ****************************** head shot ->~∑≥_≤) ****************************** headphone1 d[-_-]b ****************************** headphone2 d(-_-)b ****************************** headphone3 (W) ****************************** heart bold ♥ ****************************** heart regular ♡ ****************************** heart1 »-(¯`·.·´¯)-> ****************************** heart2 ♡♡ ****************************** heart3 <3 ****************************** hell yeah (òÓ,)_\,,/ ****************************** hello (ʘ‿ʘ)╯ ****************************** hello2 (ツ)ノ ****************************** help ٩(͡๏̯͡๏)۶ ****************************** high five ( ⌒o⌒)人(⌒-⌒ )v ****************************** hitchhicking (งツ)ว ****************************** homer (_8(|) ****************************** 
homer simpson =(:o) ****************************** honeycute ❤◦.¸¸. ◦✿ ****************************** house __̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___ ****************************** hoxom h(o x o )m ****************************** hug me (っ◕‿◕)っ ****************************** hugger (づ ̄ ³ ̄)づ ****************************** huhu █▬█ █▄█ █▬█ █▄█ ****************************** hybrix ʕʘ̅͜ʘ̅ʔ ****************************** i dont care ╭∩╮(︶︿︶)╭∩╮ ****************************** i kill you ̿ ̿̿'̿̿\̵͇̿̿\=(•̪●)=/̵͇̿̿/'̿̿ ̿ ̿ ****************************** im a hugger (⊃。•́‿•̀。)⊃ ****************************** infinity (X) ****************************** injured (҂◡_◡) ****************************** inlove (✿ ♥‿♥) ****************************** innocent face ʘ‿ʘ ****************************** japanese lion face °‿‿° ****************************** jaymz (•̪●)==ε/̵͇̿​̿/’̿’̿ ̿ ̿̿ `(•.°)~ ****************************** jazz musician ヽ(⌐■_■)ノ♪♬ ****************************** john lennon ((ºjº)) ****************************** joker1 🂠 ****************************** joker2 🃟 ****************************** joker3 🃏 ****************************** joker4 🃠 ****************************** jokeranonimous ╭∩╮ (òÓ,) ╭∩╮ ****************************** jokeranonimous2 ╭∩╮(ô¿ô)╭∩╮ ****************************** joy n_n ****************************** judgemental \{ಠʖಠ\} ****************************** judging ( ఠ ͟ʖ ఠ) ****************************** kablewee ̿' ̿'\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/'̿'̿ ̿ ****************************** killer (⌐■_■)--︻╦╤─ - - - (╥﹏╥) ****************************** kilroy was here " Ü " ****************************** king -_- ****************************** kirby (つ -‘ _ ‘- )つ ****************************** kirby dance <(''<) <( ' ' )> (> '')> ****************************** kiss (o'3'o) ****************************** kiss my ass (_x_) ****************************** kiss2 (︶ε︶メ) ****************************** kiss3 ╮(︶ε︶メ)╭ ****************************** kissing ( ˘ ³˘)♥ ****************************** kissing2 ( ˘з˘)ε˘`) ****************************** kissing3 (~˘з˘)~~(˘ε˘~) ****************************** kissing4 (っ˘з(O.O )♥ ****************************** kissing5 (`˘з(•˘⌣˘•) ****************************** kissing6 (っ˘з(˘⌣˘ ) ****************************** kitty =^. .^= ****************************** kitty emote ᵒᴥᵒ# ****************************** knife1 )xxxxx[;;;;;;;;;> ****************************** knife2 )xxx[::::::::::> ****************************** koala @( * O * )@ ****************************** kokain ̿ ̿' ̿'\̵͇̿̿\з=(•̪●)=ε/̵͇̿̿/'̿''̿ ̿ ****************************** kyubey /人 ⌒ ‿‿ ⌒ 人\ ****************************** kyubey2 /人 ◕‿‿◕ 人\ ****************************** laughing (^▽^) ****************************** lenny ( ͡° ͜ʖ ͡°) ****************************** licking lips :-9 ****************************** line brack ●▬▬▬▬๑۩۩๑▬▬▬▬▬● ****************************** linqan :Q___ ****************************** listening to headphones ◖ᵔᴥᵔ◗ ♪ ♫ ****************************** loading1 █▒▒▒▒▒▒▒▒▒ ****************************** loading2 ███▒▒▒▒▒▒▒ ****************************** loading3 █████▒▒▒▒▒ ****************************** loading4 ███████▒▒▒ ****************************** loading5 █████████▒ ****************************** loading6 ██████████ ****************************** loch ness monster _mmmP ****************************** long rose ---------------------{{---<((@) ****************************** looking down (._.) 
****************************** looking face ô¿ô ****************************** love ⓛⓞⓥⓔ ****************************** love in my eye1 (♥_♥) ****************************** love in my eye2 (。❤◡❤。) ****************************** love in my eye3 (❤◡❤) ****************************** love2 ~♡ⓛⓞⓥⓔ♡~ ****************************** love3 ♥‿♥ ****************************** love4 (Ɔ ˘⌣˘)♥(˘⌣˘ C) ****************************** machinegun ,==,-- ****************************** mad òÓ ****************************** mad10 (•ˋ _ ˊ•) ****************************** mad2 (ノ`Д ́)ノ ****************************** mad3 >_< ****************************** mad4 ~_~ ****************************** mad5 Ծ_Ծ ****************************** mad6 ⋋_⋌ ****************************** mad7 (ノ≥∇≤)ノ ****************************** mad8 {{{(>_<)}}} ****************************** mad9 ƪ(`▿▿▿▿´ƪ) ****************************** mail box |M|/ ****************************** man spider /╲/\༼ *ಠ 益 ಠ* ༽/\╱\ ****************************** man tears ಥ_ಥ ****************************** mango ) _ _ __/°°¬ ****************************** marge simpson ()()():| ****************************** marshmallows -()_)--()_)--- ****************************** med ب_ب ****************************** med man (⋗_⋖) ****************************** meditation ‿( ́ ̵ _-`)‿ ****************************** meep \(°^°)/ ****************************** melp1 (<>..<>) ****************************** melp2 (<(<>(<>.(<>..<>).<>)<>)>) ****************************** meow ฅ^•ﻌ•^ฅ ****************************** metal \m/_(>_<)_\m/ ****************************** mini penis =D ****************************** monkey @('_')@ ****************************** monocle (╭ರ_•́) ****************************** monster ٩(̾●̮̮̃̾•̃̾)۶ ****************************** monster2 ٩(- ̮̮̃-̃)۶ ****************************** mouse1 ----{,_,"> ****************************** mouse2 . ~~(__^·> ****************************** mouse3 <·^__)~~ . ****************************** mouse4 —-{,_,”><",_,}---- ****************************** mouse5 <:3 )~~~~ ****************************** mouse6 <^__)~ ****************************** mouse7 ~(__^> ****************************** mtmtika :o + :p = 69 ****************************** myancat mmmyyyyy<⦿⽘⦿>aaaannn ****************************** nathan ♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪└( ̄◇ ̄)┐♪ ****************************** needle1 ┣▇▇▇═─ ****************************** needle2 |==|iiii|>----- ****************************** neo (⌐■_■)--︻╦╤─ - - - ****************************** nerd ::( ****************************** no support 乁( ◔ ౪◔)「 ┑( ̄Д  ̄)┍ ****************************** nope t(-_-t) ****************************** nose \˚ㄥ˚\ ****************************** nose2 |'L'| ****************************** oar ===========(8888) ****************************** old lady boobs |\o/\o/| ****************************** opera ヾ(´〇`)ノ♪♪♪ ****************************** owlkin (ᾢȍˬȍ)ᾢ ļ ļ ļ ļ ļ ****************************** pac man ᗧ···ᗣ···ᗣ·· ****************************** palm tree 'T` ****************************** panda ヽ( ̄(エ) ̄)ノ ****************************** party time ┏(-_-)┛┗(-_- )┓┗(-_-)┛┏(-_-)┓ ****************************** peace yo! 
(‾⌣‾)♉ ****************************** peepers ಠಠ ****************************** penis 8===D ****************************** penis2 ○○)=======o) ****************************** perky ( ๏ Y ๏ ) ****************************** pictou |\_______(#*#)_______/| ****************************** pie fight ---=======[} ****************************** pig1 ^(*(oo)*)^ ****************************** pig2 ༼☉ɷ⊙༽ ****************************** piggy (∩◕(oo)◕∩) ****************************** ping pong ( •_•)O*¯`·.¸.·´¯`°Q(•_• ) ****************************** pipe ====\_/ ****************************** pirate ✌(◕‿-)✌ ****************************** pistols1 ¯¯̿̿¯̿̿'̿̿̿̿̿̿̿'̿̿'̿̿̿̿̿'̿̿̿)͇̿̿)̿̿̿̿ '̿̿̿̿̿̿\̵͇̿̿\=(•̪̀●́)=o/̵͇̿̿/'̿̿ ̿ ̿̿ ****************************** pistols2 ̿' ̿'\̵͇̿̿\з=(◕_◕)=ε/̵͇̿̿/'̿'̿ ̿ ****************************** pistols3 ̿̿ ̿̿ ̿’̿̿’̿\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/’̿̿’̿ ̿ ̿̿ ̿̿ ****************************** pistols4 (•̪●)=ε/̵͇̿̿/’̿’̿ ̿ ̿̿ ̿ ̿”” ****************************** pistols5 ̿̿ ̿̿ ̿̿ ̿'̿'\̵͇̿̿\з=( ͠° ͟ʖ ͡°)=ε/̵͇̿̿/'̿̿ ̿ ̿ ̿ ̿ ̿ ****************************** playing cards [♥]]] [♦]]] [♣]]] [♠]]] ****************************** playing cards clubs [♣]]] ****************************** playing cards clubs waterfall 🃏🃑🃒🃓🃔🃕🃖🃗🃘🃙🃚🃛🃜🃝🃞 ****************************** playing cards diamonds [♦]]] ****************************** playing cards diamonds waterfall 🃟🃁🃂🃃🃄🃅🃆🃇🃈🃉🃊🃋🃌🃍🃎 ****************************** playing cards hearts [♥]]] ****************************** playing cards hearts waterfall 🂠🂱🂲🂳🂴🂵🂶🂷🂸🂹🂺🂻🂼🂽🂾 ****************************** playing cards spades [♠]]] ****************************** playing cards spades waterfall 🃠🂡🂢🂣🂤🂥🂦🂧🂨🂩🂪🂫🂬🂭🂮 ****************************** playing cards waterfall 🂱🂲🂳🂴🂵🂶🂷🂸🂹🂺🂻🂼🂽🂾🃁🃂🃃🃄🃅🃆🃇🃈🃉🃊🃋🃌🃍🃎🃑🃒🃓🃔🃕🃖🃗🃘🃙🃚🃛🃜🃝🃞🂡🂢🂣🂤🂥🂦🂧🂨🂩🂪🂫🂬🂭🂮🂠🃏🃟 ****************************** playing cards waterfall (trump) 🃠🃡🃢🃣🃤🃥🃦🃧🃨🃩🃪🃫🃬🃭🃮🃯🃰🃱🃲🃳🃴🃵 ****************************** playing in snow (╯^□^)╯︵ ❄☃❄ ****************************** point (☞゚ヮ゚)☞ ****************************** polar bear ˁ˚ᴥ˚ˀ ****************************** possessed <>_<> ****************************** power lines TTT ****************************** pretty eyes ఠ_ఠ ****************************** professor """⌐(ಠ۾ಠ)¬""" ****************************** puls ––•–√\/––√\/––•–– ****************************** punch O=('-'Q) ****************************** pursing lips :-" ****************************** put the table back ┬─┬ ノ( ゜-゜ノ) ****************************** rak /⦿L⦿\ ****************************** rare ┌ಠ_ಠ)┌∩┐ ᶠᶸᶜᵏ♥ᵧₒᵤ ****************************** ready to cry :-} ****************************** real face ( ͡° ͜ʖ ͡°) ****************************** really mad >:-I ****************************** really sad :-C ****************************** regular ass (_!_) ****************************** religious ☪ ✡ † ☨ ✞ ✝ ☥ ☦ ☓ ♁ ☩ ****************************** resting my eyes ᴖ̮ ̮ᴖ ****************************** roadblock X+X+X+X+X ****************************** robber -╤╗_(◙◙)_╔╤- - - - \o/ \o/ \o/ ****************************** robot boy ◖(◣☩◢)◗ ****************************** robot1 d[ o_0 ]b ****************************** robot2 c[○┬●]כ ****************************** robot3 {•̃_•̃} ****************************** rock on1 \,,/(^_^)\,,/ ****************************** rock on2 \m/(-_-)\m/ ****************************** rocket ∙∙∙∙∙·▫▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ▫ₒₒ▫ᵒᴼᵒ☼)===> ****************************** roke _\m/ ****************************** rope ╚(▲_▲)╝ ****************************** rose1 --------{---(@ ****************************** rose2 @}}>----- 
****************************** rose3 @-->--->--- ****************************** rose4 @}~}~~~ ****************************** rose5 @-}-- ****************************** rose6 @)}---^----- ****************************** rose7 @->-->--- ****************************** round bird ,(u°)> ****************************** round cat ~(^._.) ****************************** running ε=ε=ε=┌(;*´Д`)ノ ****************************** russian boobs [.][.] ****************************** ryans dick 8======D ****************************** sad and confused ¯\_(⊙︿⊙)_/¯ ****************************** sad and crying (ᵟຶ︵ ᵟຶ) ****************************** sad face (ಥ⌣ಥ) ****************************** sad1 ε(´סּ︵סּ`)з ****************************** sad2 (✖╭╮✖) ****************************** sad3 (◑﹏◐) ****************************** sad4 (◕_◕) ****************************** sad5 (´ᗣ`) ****************************** sad6 Y_Y ****************************** sat '(◣_◢)' ****************************** satan ↑_(ΦwΦ;)Ψ ****************************** satisfied (◠﹏◠) ****************************** scissors ✄ ****************************** screaming :-@ ****************************** seal (ᵔᴥᵔ) ****************************** sean the sheep <('--')> ****************************** sex symbol ◢♂◣◥♀◤◢♂◣◥♀◤ ****************************** shark ~~~~~~^~~~~~ ****************************** shark attack ~~~~~~\o/~~~~~/\~~~~~ ****************************** shark face ( ˇ෴ˇ ) ****************************** sheep °l°(,,,,); ****************************** shocked1 (∩╹□╹∩) ****************************** shocked2 :-O ****************************** shrug ¯\_(ツ)_/¯ ****************************** shy (๑•́ ₃ •̀๑) ****************************** singing d(^o^)b¸¸♬·¯·♩¸¸♪·¯·♫¸¸ ****************************** singing2 ♪└( ̄◇ ̄)┐♪ ****************************** sky free ѧѦ ѧ ︵͡︵ ̢ ̱ ̧̱ι̵̱̊ι̶̨̱ ̶̱ ︵ Ѧѧ ︵͡ ︵ ѧ Ѧ ̵̗̊o̵̖ ︵ ѦѦ ѧ ****************************** sleeping (-.-)Zzz... ****************************** sleeping baby [{-_-}] ZZZzz zz z... ****************************** sleepy 눈_눈 ****************************** sleepy coffee ( -_-)旦~ ****************************** slenderman ϟƖΣNd€RMαN ****************************** smile :-) ****************************** smirk :-, ****************************** smooth (づ  ̄ ³ ̄)づ ⓈⓂⓄⓄⓉⒽ ****************************** smug bastard (‾⌣‾) ****************************** snail1 '-'_@_ ****************************** snail2 '\Q___ ****************************** sniper rifle ︻デ┳═ー ****************************** sniperstars ✯╾━╤デ╦︻✯ ****************************** snowing ✲´*。.❄¨¯`*✲。❄。*。 ****************************** snowman1 ☃ ****************************** snowman2 { }( : ^ )( """" )( ) ****************************** sophie <XX""XX> ****************************** sorreh bro (◢_◣) ****************************** spade bold ♠ ****************************** spade regular ♤ ****************************** sparkling heart -`ღ´- ****************************** spear >>-;;;------;;--> ****************************** spell cast ╰( ⁰ ਊ ⁰ )━☆゚.*・。゚ ****************************** sperm ~~o ****************************** spider1 //O\ ****************************** spider2 /\oo/\ ****************************** spider3 ///\oo/\\\ ****************************** spider4 /╲/\╭ºoꍘoº╮/\╱\ ****************************** spot ( . Y . 
) ****************************** squee ヾ(◎o◎,,;)ノ ****************************** squid くコ:彡 ****************************** squigle with spirals 6\9 ****************************** srs face (ಠ_ಠ) ****************************** star in my eyes <*_*> ****************************** staring ٩(๏_๏)۶ ****************************** stars ✌⊂(✰‿✰)つ✌ ****************************** stars2 ⋆ ✢ ✣ ✤ ✥ ✦ ✧ ✩ ✪ ✫ ✬ ✭ ✮ ✯ ✰ ★ ****************************** stealth fighter -^- ****************************** stranger danger (づ。◕‿‿◕。)づ ****************************** strut ᕕ( ᐛ )ᕗ ****************************** stunna shades (っ▀¯▀)つ ****************************** sunglasses1 (•_•)>⌐■-■ (⌐■_■) ****************************** sunglasses2 B-) ****************************** sunny day ☁ ▅▒░☼‿☼░▒▅ ☁ ****************************** superman -^mOm^- ****************************** superman logo /s\ ****************************** surprised1 =:-o ****************************** surprised10 (⊙.◎) ****************************** surprised11 ๏_๏ ****************************** surprised12 (˚-˚) ****************************** surprised13 ˚o˚ ****************************** surprised14 (O.O) ****************************** surprised15 ( ゚o゚) ****************************** surprised16 ◉_◉ ****************************** surprised17 【•】_【•】 ****************************** surprised18 (•ิ_•) ****************************** surprised19 ⊙⊙ ****************************** surprised2 ( ゚Д゚) ****************************** surprised20 ͡๏_͡๏ ****************************** surprised3 (O_o) ****************************** surprised4 (º_•) ****************************** surprised5 (º.º) ****************************** surprised6 ⊙▃⊙ ****************************** surprised7 O.o ****************************** surprised8 ●_● ****************************** surprised9 (⊙̃.o) ****************************** swim _(ッ)>_// ****************************** swim2 ー(ッ)」 ****************************** swim3 _(ッ)へ ****************************** sword1 (===||:::::::::::::::> ****************************** sword10 O==I======> ****************************** sword2 ▬▬ι═══════ﺤ -═══════ι▬▬ ****************************** sword3 ס₪₪₪₪§|(Ξ≥≤≥≤≥≤ΞΞΞΞΞΞΞΞΞΞ> ****************************** sword4 |O/////[{:;:;:;:;:;:;:;:;> ****************************** sword5 <%%%%|==========> ****************************** sword6 o()xxxx[{::::::::::::::::::::::::::::::::::> ****************************** sword7 o==[]::::::::::::::::> ****************************** sword8 ▬▬ι═══════> ****************************** sword9 <═══════ι▬▬ ****************************** table flip (╯°□°)╯︵ ┻━┻ ****************************** table flip10 (/ .□.)\ ︵╰(゜Д゜)╯︵ /(.□. \) ****************************** table flip2 (ノಥ益ಥ)ノ ┻━┻ ****************************** table flip3 ┬─┬ノ( º _ ºノ) ****************************** table flip4 (ノಠ益ಠ)ノ彡┻━┻ ****************************** table flip5 ┬──┬ ¯\_(ツ) ****************************** table flip6 ┻━┻ ︵ ¯\(ツ)/¯ ︵ ┻━┻ ****************************** table flip7 (╯°□°)╯︵ ┻━┻ ︵ ╯(°□° ╯) ****************************** table flip8 (╯°Д°)╯︵ /(.□ . 
\) ****************************** table flip9 (ノ^_^)ノ┻━┻ ┬─┬ ノ( ^_^ノ) ****************************** taking a dump (⩾﹏⩽) ****************************** teddy ˁ(⦿ᴥ⦿)ˀ ****************************** teepee /|\ ****************************** telephone ε(๏_๏)з】 ****************************** tent1 //\ ****************************** tent2 /\\ ****************************** tgif “ヽ(´▽`)ノ” ****************************** thanks \(^-^)/ ****************************** things that can_t be unseen ♨_♨ ****************************** ###Markdown ART Version : 4.9 ###Code from art import * ###Output _____no_output_____ ###Markdown Art Counter ###Code ART_COUNTER ###Output _____no_output_____ ###Markdown Art List ###Code art_list() ###Output 3 ᕙ༼ ,,ԾܫԾ,, ༽ᕗ ****************************** 5 ᕙ༼ ,,இܫஇ,, ༽ᕗ ****************************** 9/11 truth ✈__✈ █ █ ▄ ****************************** acid ⊂(◉‿◉)つ ****************************** afraid ( ゚ Д゚) ****************************** airplane1 ‛¯¯٭٭¯¯(▫▫)¯¯٭٭¯¯’ ****************************** airplane2 ✈ ****************************** ak-47 ︻┳デ═— ****************************** alien ::) ****************************** almost cared ╰╏ ◉ 〜 ◉ ╏╯ ****************************** american money1 [($)] ****************************** american money2 [̲̅$̲̅(̲̅1̲̅)̲̅$̲̅] ****************************** american money3 [̲̅$̲̅(̲̅5̲̅)̲̅$̲̅] ****************************** american money4 [̲̅$̲̅(̲̅ιοο̲̅)̲̅$̲̅] ****************************** american money5 [̲̅$̲̅(̲̅2οο̲̅)̲̅$̲̅] ****************************** angel1 ^i^ ****************************** angel2 O:-) ****************************** angry ლ(ಠ益ಠ)ლ ****************************** angry birds ( ఠൠఠ )ノ ****************************** angry face (⋟﹏⋞) ****************************** angry face2 (╬ ಠ益ಠ) ****************************** angry troll ヽ༼ ಠ益ಠ ༽ノ ****************************** angry2 ( ͠° ͟ʖ ͡°) ****************************** ankush ︻デ┳═ー*----* ****************************** arrow1 »»---------------------► ****************************** arrow2 XXX--------> ****************************** arrowhead ⤜(ⱺ ʖ̯ⱺ)⤏ ****************************** at what cost ლ(ಠ益ಠლ) ****************************** atish (| - _ - |) ****************************** awesome <:3 )~~~ ****************************** awkward •͡˘㇁•͡˘ ****************************** bad hair1 =:-) ****************************** bad hair2 =:-( ****************************** bagel nln >_< nln ****************************** band aid (̲̅:̲̅:̲̅:̲̅[̲̅ ̲̅]̲̅:̲̅:̲̅:̲̅ ) ****************************** barbell ▐━━━━━▌ ****************************** barcode1 █║▌│ █│║▌ ║││█║▌ │║║█║ │║║█║ ****************************** barcode2 ║█║▌║█║▌│║▌║▌█║ ****************************** barf (´ж`ς) ****************************** baseball fan q:o) ****************************** basking in glory ヽ(´ー`)ノ ****************************** bat1 ^O^ ****************************** bat2 ^v^ ****************************** bautista (╯°_°)╯︵ ━━━ ****************************** bear ʕ•ᴥ•ʔ ****************************** bear GTFO ʕ •`ᴥ•´ʔ ****************************** bear squiting ʕᵔᴥᵔʔ ****************************** because ∵ ****************************** bee ¸.·´¯`·¸¸.·´¯`·.¸.-<\^}0=: ****************************** being draged ╰(◣﹏◢)╯ ****************************** bender ¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦ ****************************** big eyes ⺌∅‿∅⺌ ****************************** big kiss :-X ****************************** big nose ˚∆˚ ****************************** big smile :-D ****************************** 
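###Markdown The set of bundled arts changes between releases, so the same cells can print different listings and different `ART_COUNTER` values depending on the installed version. The sketch below is an added illustration, not part of the original notebook: it only assumes `art` and `ART_COUNTER` (both used in the cells above) and a name, "lenny", that appears in the listing above; it guards against names missing from older releases.
###Code
# Sketch: report how many arts this install bundles and guard a lookup
# against a name that an older release might not include.
from art import art, ART_COUNTER

print("arts available in this install:", ART_COUNTER)
try:
    print(art("lenny"))   # "lenny" appears in the listing printed above
except Exception:
    print("'lenny' is not available in this art version")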
bird (⌒▽⌒) ****************************** birds ~(‾▿‾)~ ****************************** blackeye 0__# ****************************** bomb !!( ’ ‘)ノノ⌒●~* ****************************** boobies (. )( .) ****************************** boobs (.)(.) ****************************** boombox1 ♫♪.ılılıll|̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅|llılılı.♫♪ ****************************** boombox2 ♫♪ |̲̅̅●̲̅̅|̲̅̅=̲̅̅|̲̅̅●̲̅̅| ♫♪ ****************************** boxing ლ(•́•́ლ) ****************************** breakdown ಥ﹏ಥ ****************************** british money [£::] ****************************** buck teeth :-B ****************************** bugs bunny E=B ****************************** bullshit |3ᵕᶦᶦᶳᶣᶨᶵ ****************************** bunny (\_/) ****************************** butt (‿|‿) ****************************** butterfly Ƹ̵̡Ӝ̵̨̄Ʒ ****************************** camera [◉"] ****************************** canoe .,.,\______/,..,., ****************************** car `o##o> ****************************** car race ∙،°. ˘Ô≈ôﺣ » » » ****************************** care crowd (-(-_(-_-)_-)-) ****************************** careless ◔_◔ ****************************** carpet roll @__ ****************************** cassette1 |[●▪▪●]| ****************************** cassette2 [¯ↂ■■ↂ¯] ****************************** cat face ⦿⽘⦿ ****************************** cat smile ≧◔◡◔≦ ****************************** cat1 =^..^= ****************************** cat2 龴ↀ◡ↀ龴 ****************************** cat3 ^.--.^ ****************************** caterpillar ,/\,/\,/\,/\,/\,/\,o ****************************** catlenny ( ͡° ᴥ ͡°) ****************************** chair ╦╣ ****************************** charly +:) ****************************** chasing ''⌐(ಠ۾ಠ)¬'' ****************************** cheer ^(¤o¤)^ ****************************** cheers ( ^_^)o自自o(^_^ ) ****************************** chess ♞▀▄▀▄♝▀▄ ****************************** chess pieces ♚ ♛ ♜ ♝ ♞ ♟ ♔ ♕ ♖ ♗ ♘ ♙ ****************************** chicken ʚ(•` ****************************** chu (´ε` ) ****************************** cigarette1 (̅_̅_̅_̅(̅_̅_̅_̅_̅_̅_̅_̅_̅̅_̅()ڪے ****************************** cigarette2 (____((____________()~~~ ****************************** cigarette3 ()___)____________) ****************************** clowning *:o) ****************************** club bold ♣ ****************************** club regular ♧ ****************************** coffee now {zzz}°°°( -_-)>c[_] ****************************** coffee1 c[_] ****************************** coffee2 l_D ****************************** coffee3 l_P ****************************** coffee4 l_B ****************************** computer mouse [E} ****************************** concerned (@_@) ****************************** confused scratch (⊙.☉)7 ****************************** confused1 :-/ ****************************** confused2 :-\ ****************************** crab (\|) ._. 
(|/) ****************************** crayons ((̲̅ ̲̅(̲̅C̲̅r̲̅a̲̅y̲̅o̲̅l̲̲̅̅a̲̅( ̲̅((> ****************************** crazy ミ●﹏☉ミ ****************************** creeper ƪ(ړײ)‎ƪ​​ ****************************** crotch shot \*/ ****************************** cry (╯︵╰,) ****************************** cry face 。゚( ゚இ‸இ゚)゚。 ****************************** cry troll ༼ ༎ຶ ෴ ༎ຶ༽ ****************************** crying1 Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰_Ỏ̷͖͈̞̩͎̻̫̫̜͉̠̫͕̭̭̫̫̹̗̹͈̼̠̖͍͚̥͈̮̼͕̠̤̯̻̥̬̗̼̳̤̳̬̪̹͚̞̼̠͕̼̠̦͚̫͔̯̹͉͉̘͎͕̼̣̝͙̱̟̹̩̟̳̦̭͉̮̖̭̣̣̞̙̗̜̺̭̻̥͚͙̝̦̲̱͉͖͉̰̦͎̫̣̼͎͍̠̮͓̹̹͉̤̰̗̙͕͇͔̱͕̭͈̳̗̭͔̘̖̺̮̜̠͖̘͓̳͕̟̠̱̫̤͓͔̘̰̲͙͍͇̙͎̣̼̗̖͙̯͉̠̟͈͍͕̪͓̝̩̦̖̹̼̠̘̮͚̟͉̺̜͍͓̯̳̱̻͕̣̳͉̻̭̭̱͍̪̩̭̺͕̺̼̥̪͖̦̟͎̻̰ ****************************** crying2 :~( ****************************** cthulhu ^(;,;)^ ****************************** cup1 (▓ ****************************** cup2 \̅_̅/̷̚ʾ ****************************** cussing :-# ****************************** cute cat ^⨀ᴥ⨀^ ****************************** cute face (。◕‿◕。) ****************************** dab ヽ( •_)ᕗ ****************************** dagger cxxx|;:;:;:;:;:;:;:;> ****************************** dalek ̵̄/͇̐\ ****************************** damnyou (ᕗ ͠° ਊ ͠° )ᕗ ****************************** dance (>'-')> <('_'<) ^('_')\- \m/(-_-)\m/ <( '-')> \_( .")> <(._.)-` ****************************** dance2 ♪♪ ヽ(ˇ∀ˇ )ゞ ****************************** dancee ♪┏(°.°)┛┗(°.°)┓┗(°.°)┛┏(°.°)┓ ♪ ****************************** dancing ┌(ㆆ㉨ㆆ)ʃ ****************************** dancing people ‎(/.__.)/ \(.__.\) ****************************** dead child '-=,o ****************************** dead eyes ¿ⓧ_ⓧﮌ ****************************** dead girl '==>x\9 ****************************** dead guy '==xx\0 ****************************** dear god why щ(゚Д゚щ) ****************************** death star defense team |-o-| (-o-) |-o-| ****************************** decorate ▂▃▅▇█▓▒░۩۞۩ ۩۞۩░▒▓█▇▅▃▂ ****************************** depressed (︶︹︶) ****************************** derp ヘ(。□°)ヘ ****************************** devil ]:-> ****************************** devilish grin >:-D ****************************** devilish smile >:) ****************************** devious smile ಠ‿ಠ ****************************** dgaf ┌∩┐(◣ _ ◢)┌∩┐ ****************************** diamond bold ♦ ****************************** diamond regular ♢ ****************************** dice [: :] ****************************** dick 8====D ****************************** disagree ٩◔̯◔۶ ****************************** discombobulated ⊙﹏⊙ ****************************** dislike1 (Ծ‸ Ծ) ****************************** dislike2 ( ಠ ʖ̯ ಠ) ****************************** dna sample ~ ****************************** do you even lift bro? ᕦ(ò_óˇ)ᕤ ****************************** domino [: :|:::] ****************************** don king ==8-) ****************************** double flip ┻━┻ ︵ヽ(`Д´)ノ︵ ┻━┻ ****************************** drowning 人人人ヾ( ;×o×)〃 人人人 ****************************** druling1 :-... 
****************************** druling2 :-P~~~ ****************************** drunkenness ヽ(´ー`)┌ ****************************** dude glasses1 @[O],[O] ****************************** dude glasses2 @(o),(o) ****************************** dummy <-|-'_'-|-> ****************************** dunno ¯\(°_o)/¯ ****************************** eastbound fish ><((((> ****************************** eaten apple [===]-' ****************************** eds dick 8=D ****************************** eeriemob (-(-_-(-_(-_(-_-)_-)-_-)_-)_-)-) ****************************** electrocardiogram1 √v^√v^√v^√v^√♥ ****************************** electrocardiogram2 v^v^v^v^√\/♥ ****************************** electrocardiogram3 /\/\/\/\/\/\/\/\/\/\/\v^♥ ****************************** electrocardiogram4 √v^√v^♥√v^√v^√ ****************************** elephant °j°m ****************************** emo (///_ ;) ****************************** emo dance ヾ(-_- )ゞ ****************************** energy つ ◕_◕ ༽つ つ ◕_◕ ༽つ ****************************** envelope ✉ ****************************** epic gun ︻┳デ═— ****************************** equalizer ▇ ▅ █ ▅ ▇ ▂ ▃ ▁ ▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇ ****************************** eric >--) ) ) )*> ****************************** error (╯°□°)╯︵ ɹoɹɹƎ ****************************** exchange (╯°□°)╯︵ ǝƃuɐɥɔxǝ ****************************** excited ☜(⌒▽⌒)☞ ****************************** exorcism ح(•̀ж•́)ง † ****************************** eye closed (╯_╰) ****************************** eye roll ⥀.⥀ ****************************** eyes ℃ↂ_ↂↃ ****************************** face •|龴◡龴|• ****************************** facepalm (>ლ) ****************************** fail o(╥﹏╥)o ****************************** fart (ˆ⺫ˆ๑)<3 ****************************** fat ass (__!__) ****************************** faydre (U) [^_^] (U) ****************************** feel perky (`・ω・´) ****************************** fido V•ᴥ•V ****************************** fight (ง'̀-'́)ง ****************************** finger1 ╭∩╮(Ο_Ο)╭∩╮ ****************************** finger2 ┌∩┐(◣_◢)┌∩┐ ****************************** finn | (• ◡•)| ****************************** fish invasion ›(̠̄:̠̄c ›(̠̄:̠̄c (¦Ҝ (¦Ҝ ҉ - - - ¦̺͆¦ ▪▌ ****************************** fish skeleton1 >-}-}-}-> ****************************** fish skeleton2 >++('> ****************************** fish swim ¸.·´¯`·.´¯`·.¸¸.·´¯`·.¸><(((º> ****************************** fish1 ><(((('> ****************************** fish2 ><> ****************************** fish3 `·.¸¸ ><((((º>.·´¯`·><((((º> ****************************** fish4 ><> ><> ****************************** fish5 <>< ****************************** fish6 <`)))>< ****************************** fisticuffs ლ(`ー´ლ) ****************************** flex ᕙ(⇀‸↼‶)ᕗ ****************************** flip friend (ノಠ ∩ಠ)ノ彡( \o°o)\ ****************************** fly away ⁽⁽ଘ( ˊᵕˋ )ଓ⁾⁾ ****************************** flying ح˚௰˚づ ****************************** fork ---= ****************************** formula1 car \ō͡≡o˞̶ ****************************** fox -^^,--,~ ****************************** french kiss :-XÞ ****************************** frown (ღ˘⌣˘ღ) ****************************** fu (ಠ_ಠ)┌∩┐ ****************************** fuck off t(-.-t) ****************************** fuck you nlm (-_-) mln ****************************** fuck you2 (° ͜ʖ͡°)╭∩╮ ****************************** fuckall ╭∩╮(︶︿︶)╭∩╮ ****************************** full mouth :-I ****************************** fungry Σ_(꒪ཀ꒪」∠)_ ****************************** ghost ‹’’›(Ͼ˳Ͽ)‹’’› 
****************************** gimme ༼ つ ◕_◕ ༽つ ****************************** glasses -@-@- ****************************** glasses2 ᒡ◯ᵔ◯ᒢ ****************************** glitter (*・‿・)ノ⌒*:・゚✧ ****************************** go away bear ╭∩╮ʕ•ᴥ•ʔ╭∩╮ ****************************** gotit (☞゚∀゚)☞ ****************************** gtalk fit (•̪̀●́)=ε/̵͇̿̿/'̿̿ ̿ ̿̿ N --------{---(@ ****************************** guitar c====(=#O| ) ~~ ♬·¯·♩¸¸♪·¯·♫¸ ****************************** gun1 ︻╦╤─ ****************************** gun2 ︻デ═一 ****************************** gun3 ╦̵̵̿╤─ ҉ ~ • ****************************** hacksaw [|^^^^^^^ ****************************** hairstyle ⨌⨀_⨀⨌ ****************************** hal @_'-' ****************************** hammer #== ****************************** happy ۜ\(סּںסּَ` )/ۜ ****************************** happy birthday 1 ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ ****************************** happy face ヽ(´▽`)/ ****************************** happy hug \(ᵔᵕᵔ)/ ****************************** happy square 【ツ】 ****************************** happy2 ⎦˚◡˚⎣ ****************************** happy3 ㋡ ****************************** happy4 ^_^ ****************************** happy5 [^_^] ****************************** head shot ->~∑≥_≤) ****************************** headphone1 d[-_-]b ****************************** headphone2 d(-_-)b ****************************** headphone3 (W) ****************************** heart bold ♥ ****************************** heart regular ♡ ****************************** heart1 »-(¯`·.·´¯)-> ****************************** heart2 ♡♡ ****************************** heart3 <3 ****************************** hell yeah (òÓ,)_\,,/ ****************************** hello (ʘ‿ʘ)╯ ****************************** help ٩(͡๏̯͡๏)۶ ****************************** high five ( ⌒o⌒)人(⌒-⌒ )v ****************************** hitchhicking (งツ)ว ****************************** homer (_8(|) ****************************** homer simpson =(:o) ****************************** honeycute ❤◦.¸¸. 
◦✿ ****************************** house __̴ı̴̴̡̡̡ ̡͌l̡̡̡ ̡͌l̡*̡̡ ̴̡ı̴̴̡ ̡̡͡|̲̲̲͡͡͡ ̲▫̲͡ ̲̲̲͡͡π̲̲͡͡ ̲̲͡▫̲̲͡͡ ̲|̡̡̡ ̡ ̴̡ı̴̡̡ ̡͌l̡̡̡̡.___ ****************************** hoxom h(o x o )m ****************************** hug me (っ◕‿◕)っ ****************************** hugger (づ ̄ ³ ̄)づ ****************************** huhu █▬█ █▄█ █▬█ █▄█ ****************************** hybrix ʕʘ̅͜ʘ̅ʔ ****************************** i dont care ╭∩╮(︶︿︶)╭∩╮ ****************************** i kill you ̿ ̿̿'̿̿\̵͇̿̿\=(•̪●)=/̵͇̿̿/'̿̿ ̿ ̿ ****************************** im a hugger (⊃。•́‿•̀。)⊃ ****************************** infinity (X) ****************************** injured (҂◡_◡) ****************************** inlove (✿ ♥‿♥) ****************************** innocent face ʘ‿ʘ ****************************** japanese lion face °‿‿° ****************************** jaymz (•̪●)==ε/̵͇̿​̿/’̿’̿ ̿ ̿̿ `(•.°)~ ****************************** jazz musician ヽ(⌐■_■)ノ♪♬ ****************************** john lennon ((ºjº)) ****************************** jokeranonimous ╭∩╮ (òÓ,) ╭∩╮ ****************************** jokeranonimous2 ╭∩╮(ô¿ô)╭∩╮ ****************************** joy n_n ****************************** judgemental \{ಠʖಠ\} ****************************** judging ( ఠ ͟ʖ ఠ) ****************************** kablewee ̿' ̿'\̵͇̿̿\з=( ͡ °_̯͡° )=ε/̵͇̿̿/'̿'̿ ̿ ****************************** killer (⌐■_■)--︻╦╤─ - - - (╥﹏╥) ****************************** kilroy was here " Ü " ****************************** king -_- ****************************** kirby (つ -‘ _ ‘- )つ ****************************** kirby dance <(''<) <( ' ' )> (> '')> ****************************** kiss (o'3'o) ****************************** kiss my ass (_x_) ****************************** kissing ( ˘ ³˘)♥ ****************************** kitty =^. .^= ****************************** kitty emote ᵒᴥᵒ# ****************************** knife1 )xxxxx[;;;;;;;;;> ****************************** knife2 )xxx[::::::::::> ****************************** koala @( * O * )@ ****************************** kokain ̿ ̿' ̿'\̵͇̿̿\з=(•̪●)=ε/̵͇̿̿/'̿''̿ ̿ ****************************** kyubey /人 ⌒ ‿‿ ⌒ 人\ ****************************** kyubey2 /人 ◕‿‿◕ 人\ ****************************** laughing (^▽^) ****************************** lenny ( ͡° ͜ʖ ͡°) ****************************** licking lips :-9 ****************************** line brack ●▬▬▬▬๑۩۩๑▬▬▬▬▬● ****************************** linqan :Q___ ****************************** listening to headphones ◖ᵔᴥᵔ◗ ♪ ♫ ****************************** loading1 █▒▒▒▒▒▒▒▒▒ ****************************** loading2 ███▒▒▒▒▒▒▒ ****************************** loading3 █████▒▒▒▒▒ ******************************
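###Markdown Once `art_list()` has shown the available names, a single piece can be fetched directly. A minimal follow-up cell, assuming the same `art` package (v4.9) is still imported via `from art import *`; the names used here ("bear", "coffee1") are just examples taken from the list printed above, and `tprint` renders arbitrary text in an ASCII font.
###Code
# Look up individual one-line arts by the names shown in art_list()
print(art("bear"))
print(art("coffee1"))

# Render plain text as ASCII art (default font)
tprint("art 4.9")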
StudentsSolutions_v1.0.0/13-Assignement_GuillermoAstiazaran.ipynb
###Markdown Assignment:Beat the performance of my Lasso regression by **using different feature engineering steps ONLY!!**.The performance of my current model, as shown in this notebook is:- test mse: 1063016789.3316755- test rmse: 32603.938248801718- test r2: 0.8453144708738004To beat my model you will need a test r2 bigger than 0.85 and a rmse smaller than 32603.===================================================================================================== Conditions:- You MUST NOT change the hyperparameters of the Lasso.- You MUST use the same seeds in Lasso and train_test_split as I show in this notebook (random_state)- You MUST use all the features of the dataset (except Id) - you MUST NOT select features===================================================================================================== If you beat my model:Make a pull request with your notebook to this github repo:https://github.com/solegalli/udemy-feml-challengeAnd add your notebook to the folder:-StudentsSolutions_v1.0.0 How to make the PR1) fork the repo:Go to https://github.com/solegalli/udemy-feml-challenge, and click on the **fork** button at the top-right2) clone your forked repo into your local computer:- Go to www.github.com/yourusername/udemy-feml-challenge- Click the green button that says clone or download- copy the url that opens up- power up a git console- type: git clone (paste the url you copied from github)- done3) Make a copy of the jupyter notebook and add your name:- Open up the Jupyter notebook called 13-Assignement.ipynb- Click the "File" button at the top-right and then click "Make a copy"- **Work your solution in the Copy** and not in the original assignment (otherwise there will be conflicts when making the PR)- Change the name of the copy of the notebook to: 13-Assignement_yourname.ipynb- Move the notebook to the folder **StudentsSolutions_v1.0.0**- doneWhen you finish, just commit the new notebook to your fork and then make a PR to my repo.- git add StudentsSolutions_v1.0.0/13-Assignement_yourname.ipynb- git commit -m "your commit message"- git push origin master or git push origin yourfeaturebranch- go to your repo and make a pull request. 
But i have a notebook ready and I haven't cloned the repo yet, how can I make the PR?If you worked in the copy you downloaded from Udemy before forking and cloning this repo, then follow this steps:1) fork the repo:Go to https://github.com/solegalli/udemy-feml-challenge, and click on the fork button at the top-right2) clone your forked repo into your local computer:Go to www.github.com/yourusername/udemy-feml-challenge- Click the green button that says clone or download- Copy the url that opens up- Power up a git console- Type: git clone (paste the url you copied from github)- Done3) Rename your solution as follows and copy it into your cloned repo:- Rename your solution notebook to: 13-Assignement_yourname.ipynb- Copy this file into the cloned repo, inside the folder **StudentsSolutions_v1.0.0**- DoneWhen you finish, just commit the new notebook to your fork and then make a PR to my repo- git add StudentsSolutions_v1.0.0/13-Assignement_yourname.ipynb- git commit -m "your commit message"- git push origin master or git push origin yourfeaturebranch- go to your repo and make a pull request.**Good luck!!** House Prices dataset ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt # for the model from sklearn.model_selection import train_test_split from sklearn.linear_model import Lasso from sklearn.pipeline import Pipeline from sklearn.metrics import mean_squared_error, r2_score # for feature engineering from sklearn.preprocessing import StandardScaler from feature_engine import imputation as mdi from feature_engine import discretisation as dsc from feature_engine import encoding as ce ###Output _____no_output_____ ###Markdown Load Datasets ###Code # load dataset data = pd.read_csv('../houseprice.csv') # make lists of variable types categorical = [var for var in data.columns if data[var].dtype == 'O'] year_vars = [var for var in data.columns if 'Yr' in var or 'Year' in var] discrete = [ var for var in data.columns if data[var].dtype != 'O' and len(data[var].unique()) < 20 and var not in year_vars ] numerical = [ var for var in data.columns if data[var].dtype != 'O' if var not in discrete and var not in ['Id', 'SalePrice'] and var not in year_vars ] print('There are {} continuous variables'.format(len(numerical))) print('There are {} discrete variables'.format(len(discrete))) print('There are {} temporal variables'.format(len(year_vars))) print('There are {} categorical variables'.format(len(categorical))) ###Output There are 18 continuous variables There are 14 discrete variables There are 4 temporal variables There are 43 categorical variables ###Markdown Separate train and test set ###Code # IMPORTANT: keep the random_state to zero for reproducibility # Let's separate into train and test set X_train, X_test, y_train, y_test = train_test_split(data.drop( ['Id', 'SalePrice'], axis=1), data['SalePrice'], test_size=0.1, random_state=0) # calculate elapsed time def elapsed_years(df, var): # capture difference between year variable and # year the house was sold df[var] = df['YrSold'] - df[var] return df for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']: X_train = elapsed_years(X_train, var) X_test = elapsed_years(X_test, var) # drop YrSold X_train.drop('YrSold', axis=1, inplace=True) X_test.drop('YrSold', axis=1, inplace=True) # capture the column names for use later in the notebook final_columns = X_train.columns ###Output _____no_output_____ ###Markdown Feature Engineering Pipeline ###Code # I will treat discrete variables as if they were categorical # to treat 
discrete as categorical using Feature-engine # we need to re-cast them as object X_train[discrete] = X_train[discrete].astype('O') X_test[discrete] = X_test[discrete].astype('O') house_pipe = Pipeline([ # missing data imputation - section 4 ('missing_ind', mdi.AddMissingIndicator( variables=['LotFrontage', 'MasVnrArea', 'GarageYrBlt'])), ('imputer_num', mdi.MeanMedianImputer( imputation_method='mean', variables=['LotFrontage', 'MasVnrArea', 'GarageYrBlt'])), ('imputer_cat', mdi.CategoricalImputer(variables=categorical)), # categorical encoding - section 6 ('rare_label_enc', ce.RareLabelEncoder(tol=0.01, n_categories=1, variables=categorical + discrete)), # newly available categorical encoder, uses trees predictions ('categorical_enc', ce.DecisionTreeEncoder(random_state=2909, variables=categorical + discrete)), # discretisation - section 8 ('discretisation', dsc.DecisionTreeDiscretiser(random_state=2909, variables=numerical)), # feature Scaling - section 10 ('scaler', StandardScaler()), # regression ('lasso', Lasso(random_state=0)) ]) # let's fit the pipeline house_pipe.fit(X_train, y_train) # let's get the predictions X_train_preds = house_pipe.predict(X_train) X_test_preds = house_pipe.predict(X_test) # check model performance: print('train mse: {}'.format(mean_squared_error(y_train, X_train_preds, squared=True))) print('train rmse: {}'.format(mean_squared_error(y_train, X_train_preds, squared=False))) print('train r2: {}'.format(r2_score(y_train, X_train_preds))) print() print('test mse: {}'.format(mean_squared_error(y_test, X_test_preds,squared=True))) print('test rmse: {}'.format(mean_squared_error(y_test, X_test_preds, squared=False))) print('test r2: {}'.format(r2_score(y_test, X_test_preds))) # plot predictions vs real value plt.scatter(y_test,X_test_preds) plt.xlabel('True Price') plt.ylabel('Predicted Price') # let's explore the importance of the features # the importance is given by the absolute value of the coefficient # assigned by the Lasso importance = pd.Series(np.abs(house_pipe.named_steps['lasso'].coef_)) importance.index = list(final_columns)+['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na'] importance.sort_values(inplace=True, ascending=False) importance.plot.bar(figsize=(18,6)) ###Output _____no_output_____
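###Markdown The assignment above sets explicit numeric targets: test r2 above 0.85 and test RMSE below 32603. The small helper below makes that pass/fail check explicit for whichever pipeline variant is being tried; it is only a convenience sketch, `beats_challenge_targets` is a made-up name, and it reuses the sklearn metrics already imported in this notebook.
###Code
from sklearn.metrics import mean_squared_error, r2_score

def beats_challenge_targets(y_true, y_pred, rmse_target=32603.0, r2_target=0.85):
    """Return True when predictions meet the challenge thresholds."""
    rmse = mean_squared_error(y_true, y_pred, squared=False)
    r2 = r2_score(y_true, y_pred)
    print(f'test rmse: {rmse:.2f} (target: < {rmse_target})')
    print(f'test r2:   {r2:.4f} (target: > {r2_target})')
    return (rmse < rmse_target) and (r2 > r2_target)

# Check the predictions produced by the pipeline above
beats_challenge_targets(y_test, X_test_preds)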
Project Notes/Kaggle Learn/01 Python/exercise06 working with external libraries.ipynb
###Markdown **[Python Micro-Course Home Page](https://www.kaggle.com/learn/python)**--- These exercises accompany the tutorial on [imports](https://www.kaggle.com/colinmorris/working-with-external-libraries).There are only four problems in this last set of exercises, but they're all pretty tricky, so be on guard! If you get stuck, don't hesitate to head to the [Learn Forum](https://kaggle.com/learn-forum) to discuss.Run the setup code below before working on the questions (and run it again if you leave this notebook and come back later). ###Code from learntools.core import binder; binder.bind(globals()) from learntools.python.ex7 import * print('Setup complete.') ###Output Setup complete. ###Markdown Exercises 1.After completing [the exercises on lists and tuples](https://www.kaggle.com/kernels/fork/1275177), Jimmy noticed that, according to his `estimate_average_slot_payout` function, the slot machines at the Learn Python Casino are actually rigged *against* the house, and are profitable to play in the long run.Starting with $200 in his pocket, Jimmy has played the slots 500 times, recording his new balance in a list after each spin. He used Python's `matplotlib` library to make a graph of his balance over time: ###Code # Import the jimmy_slots submodule from learntools.python import jimmy_slots # Call the get_graph() function to get Jimmy's graph graph = jimmy_slots.get_graph() graph ###Output _____no_output_____ ###Markdown As you can see, he's hit a bit of bad luck recently. He wants to tweet this along with some choice emojis, but, as it looks right now, his followers will probably find it confusing. He's asked if you can help him make the following changes:1. Add the title "Results of 500 slot machine pulls"2. Make the y-axis start at 0. 3. Add the label "Balance" to the y-axisAfter calling `type(graph)` you see that Jimmy's graph is of type `matplotlib.axes._subplots.AxesSubplot`. Hm, that's a new one. By calling `dir(graph)`, you find three methods that seem like they'll be useful: `.set_title()`, `.set_ylim()`, and `.set_ylabel()`. Use these methods to complete the function `prettify_graph` according to Jimmy's requests. We've already checked off the first request for you (setting a title).(Remember: if you don't know what these methods do, use the `help()` function!) ###Code def prettify_graph(graph): """Modify the given graph according to Jimmy's requests: add a title, make the y-axis start at 0, label the y-axis. (And, if you're feeling ambitious, format the tick marks as dollar amounts using the "$" symbol.) """ graph.set_title("Results of 500 slot machine pulls") # Complete steps 2 and 3 here graph.set_ylabel("Balance") graph.set_ylim(bottom=0) # Label the y-axis ticks = graph.get_yticks() new_labels = ['${}'.format(int(amt)) for amt in ticks] graph.set_yticklabels(new_labels) graph = jimmy_slots.get_graph() prettify_graph(graph) graph ###Output _____no_output_____ ###Markdown **Bonus:** Can you format the numbers on the y-axis so they look like dollar amounts? e.g. $200 instead of just 200.(We're not going to tell you what method(s) to use here. You'll need to go digging yourself with `dir(graph)` and/or `help(graph)`.) ###Code #q1.solution() ###Output _____no_output_____ ###Markdown 2. 🌶️🌶️This is a very hard problem. Feel free to skip it if you are short on time:Luigi is trying to perform an analysis to determine the best items for winning races on the Mario Kart circuit. He has some data in the form of lists of dictionaries that look like... 
[ {'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3}, {'name': 'Bowser', 'items': ['green shell',], 'finish': 1}, Sometimes the racer's name wasn't recorded {'name': None, 'items': ['mushroom',], 'finish': 2}, {'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1}, ]`'items'` is a list of all the power-up items the racer picked up in that race, and `'finish'` was their placement in the race (1 for first place, 3 for third, etc.).He wrote the function below to take a list like this and return a dictionary mapping each item to how many times it was picked up by first-place finishers. ###Code def best_items(racers): """Given a list of racer dictionaries, return a dictionary mapping items to the number of times those items were picked up by racers who finished in first place. """ winner_item_counts = {} for i in range(len(racers)): # The i'th racer dictionary racer = racers[i] # We're only interested in racers who finished in first if racer['finish'] == 1: for i in racer['items']: # Add one to the count for this item (adding it to the dict if necessary) if i not in winner_item_counts: winner_item_counts[i] = 0 winner_item_counts[i] += 1 # Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later. if racer['name'] is None: print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format( i+1, len(racers), racer['name']) ) return winner_item_counts ###Output _____no_output_____ ###Markdown He tried it on a small example list above and it seemed to work correctly: ###Code sample = [ {'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3}, {'name': 'Bowser', 'items': ['green shell',], 'finish': 1}, {'name': None, 'items': ['mushroom',], 'finish': 2}, {'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1}, ] best_items(sample) ###Output WARNING: Encountered racer with unknown name on iteration 3/4 (racer = None) ###Markdown However, when he tried running it on his full dataset, the program crashed with a `TypeError`.Can you guess why? Try running the code cell below to see the error message Luigi is getting. Once you've identified the bug, fix it in the cell below (so that it runs without any errors).Hint: Luigi's bug is similar to one we encountered in the [tutorial](https://www.kaggle.com/colinmorris/working-with-external-libraries) when we talked about star imports. ###Code # Import luigi's full dataset of race data from learntools.python.luigi_analysis import full_dataset # Fix me! def best_items(racers): winner_item_counts = {} for i in range(len(racers)): # The i'th racer dictionary racer = racers[i] # We're only interested in racers who finished in first if racer['finish'] == 1: for i2 in racer['items']: # Add one to the count for this item (adding it to the dict if necessary) if i not in winner_item_counts: winner_item_counts[i2] = 0 winner_item_counts[i2] += 1 # Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later. if racer['name'] is None: print(i) print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format( i+1, len(racers), racer['name'])) return winner_item_counts # Try analyzing the imported full dataset best_items(full_dataset) q2.hint() q2.solution() ###Output _____no_output_____ ###Markdown 3. 🌶️Suppose we wanted to create a new type to represent hands in blackjack. 
One thing we might want to do with this type is overload the comparison operators like `>` and `<=` so that we could use them to check whether one hand beats another. e.g. it'd be cool if we could do this:```python>>> hand1 = BlackjackHand(['K', 'A'])>>> hand2 = BlackjackHand(['7', '10', 'A'])>>> hand1 > hand2True```Well, we're not going to do all that in this question (defining custom classes is a bit beyond the scope of these lessons), but the code we're asking you to write in the function below is very similar to what we'd have to write if we were defining our own `BlackjackHand` class. (We'd put it in the `__gt__` magic method to define our custom behaviour for `>`.)Fill in the body of the `blackjack_hand_greater_than` function according to the docstring. ###Code def blackjack_hand_greater_than(hand_1, hand_2): """ Return True if hand_1 beats hand_2, and False otherwise. In order for hand_1 to beat hand_2 the following must be true: - The total of hand_1 must not exceed 21 - The total of hand_1 must exceed the total of hand_2 OR hand_2's total must exceed 21 Hands are represented as a list of cards. Each card is represented by a string. When adding up a hand's total, cards with numbers count for that many points. Face cards ('J', 'Q', and 'K') are worth 10 points. 'A' can count for 1 or 11. When determining a hand's total, you should try to count aces in the way that maximizes the hand's total without going over 21. e.g. the total of ['A', 'A', '9'] is 21, the total of ['A', 'A', '9', '3'] is 14. Examples: >>> blackjack_hand_greater_than(['K'], ['3', '4']) True >>> blackjack_hand_greater_than(['K'], ['10']) False >>> blackjack_hand_greater_than(['K', 'K', '2'], ['3']) False """ value_hand_1=calculating(hand_1) value_hand_2=calculating(hand_2) return (value_hand_1 > value_hand_2 and value_hand_1<=21) or (value_hand_1<=21 and value_hand_2>21) def calculating(hand): value=0 aces=0 for card in hand: if card.isdigit(): value += int(card) elif card == 'A': value+=11 aces+=1 else: value+=10 for ace in range(aces): if value>21: value-=10 aces-=1 return(value) q3.check() q3.hint() q3.solution() ###Output _____no_output_____
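###Markdown As the markdown in question 3 points out, `blackjack_hand_greater_than` is essentially the body of a `BlackjackHand.__gt__` method. The sketch below shows one way that class could look; it is an illustration only (not part of the course checker) and it re-implements the same ace-counting rule used in the solution above.
###Code
class BlackjackHand:
    """Minimal wrapper around a list of card strings, e.g. ['K', 'A']."""

    def __init__(self, cards):
        self.cards = list(cards)

    def total(self):
        # Face cards count 10; each ace counts 11 unless that would bust the hand
        total, aces = 0, 0
        for card in self.cards:
            if card == 'A':
                total += 11
                aces += 1
            elif card in ('J', 'Q', 'K'):
                total += 10
            else:
                total += int(card)
        while total > 21 and aces > 0:
            total -= 10
            aces -= 1
        return total

    def __gt__(self, other):
        # Same winning condition as blackjack_hand_greater_than above
        mine, theirs = self.total(), other.total()
        return mine <= 21 and (mine > theirs or theirs > 21)

hand1 = BlackjackHand(['K', 'A'])
hand2 = BlackjackHand(['7', '10', 'A'])
print(hand1 > hand2)  # True, matching the example in the markdown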
src/exercises/ex1.ipynb
###Markdown Exercise 1 ###Code # constants given by the exercise e = 2.71828182845 pi = 3.1415 u0 = 4*pi/10**(7) l1 = l4 = l6 = 4*10**(-2) l3 = l5 = l7 = 20*10**(-2) l9 = 6*10**(-2) l2 = l8 = 4*10**(-2) k = 15000 N = 200 B = 0.4 ###Output _____no_output_____ ###Markdown a) ###Code R = lambda u,l,s : (1/u)*(l/s) L = lambda N,R: (N**2)/R l_R1 = 2*(l3+((2*pi/4)*(l1/2))+(l4/2))+l7 s_R1 = l1*l9 R1 = R(u0*k,l_R1,s_R1) l_R2 = ((l2/2)+(l7)+(l8/2)) s_R2 = l9*l4 R2 = R(u0*k,l_R2,s_R2) l_R3 = 2*(l5 + l4/2)+l7+2*((2*pi/4)*(l6/2)) s_R3 = l9*l6 R3 = R(u0*k,l_R3,s_R3) Req = (R1)+(R2*R3/(R2+R3)) Leq = L(N,Req) phi_eq = B*s_R1 i_eq = N*phi_eq/Leq print(i_eq) ###Output 0.09355799588103081 ###Markdown b) ###Code e1 = phi_eq*(Req - R1) phi_2 = e1/R2 phi_3 = e1/R3 B2 = phi_2/s_R1 B3 = phi_3/s_R1 print(B2) print(B3) ###Output 0.2981788869679583 0.10182111303204186 ###Markdown c) ###Code print(Leq) ###Output 2.052203001912834 ###Markdown d) ###Code print("k= 15000 i=",i_eq) k = 10000 l_R1 = 2*(l3+((2*pi/4)*(l1/2))+(l4/2))+l7 s_R1 = l1*l9 R1 = R(u0*k,l_R1,s_R1) l_R2 = ((l2/2)+(l7)+(l8/2)) s_R2 = l9*l4 R2 = R(u0*k,l_R2,s_R2) l_R3 = 2*(l5 + l4/2)+l7+2*((2*pi/4)*(l6/2)) s_R3 = l9*l6 R3 = R(u0*k,l_R3,s_R3) Req = (R1)+(R2*R3/(R2+R3)) Leq = L(N,Req) phi_eq = B*s_R1 i_eq = N*phi_eq/Leq print("k= 10000 i=",i_eq) k = 5000 l_R1 = 2*(l3+((2*pi/4)*(l1/2))+(l4/2))+l7 s_R1 = l1*l9 R1 = R(u0*k,l_R1,s_R1) l_R2 = ((l2/2)+(l7)+(l8/2)) s_R2 = l9*l4 R2 = R(u0*k,l_R2,s_R2) l_R3 = 2*(l5 + l4/2)+l7+2*((2*pi/4)*(l6/2)) s_R3 = l9*l6 R3 = R(u0*k,l_R3,s_R3) Req = (R1)+(R2*R3/(R2+R3)) Leq = L(N,Req) phi_eq = B*s_R1 i_eq = N*phi_eq/Leq print("k= 5000 i= ",i_eq) ###Output k= 15000 i= 0.28067398764309237 k= 10000 i= 0.14033699382154619 k= 5000 i= 0.28067398764309237
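###Markdown A note on part (d) above: the cell repeats the full reluctance calculation once per value of k, and its first statement, `print("k= 15000 i=",i_eq)`, prints whatever `i_eq` already held from an earlier run instead of a value recomputed for k = 15000 (part (a) gives about 0.0936 A for k = 15000, not the 0.2807 A shown). A more compact and less error-prone version wraps the calculation in a function and loops over k; the sketch below reuses only the constants and the `R` and `L` lambdas defined above.
###Code
def current_for_k(k):
    # Reluctances of the three legs (same geometry as in part a)
    R1 = R(u0 * k, 2 * (l3 + (2 * pi / 4) * (l1 / 2) + l4 / 2) + l7, l1 * l9)
    R2 = R(u0 * k, l2 / 2 + l7 + l8 / 2, l9 * l4)
    R3 = R(u0 * k, 2 * (l5 + l4 / 2) + l7 + 2 * (2 * pi / 4) * (l6 / 2), l9 * l6)
    Req = R1 + R2 * R3 / (R2 + R3)
    Leq = L(N, Req)
    phi_eq = B * l1 * l9  # flux through the central-leg cross-section (s_R1)
    return N * phi_eq / Leq

for k_value in (15000, 10000, 5000):
    print(f'k = {k_value:5d}   i = {current_for_k(k_value):.4f} A')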
gs_energy-prediction/notebooks/Guide_to_the_data.ipynb
###Markdown Generating data All the code for data generation is in the module `generate_lib`.The quantum chemistry data used in this project are produced using the quantum chemistry packagePsi4 (http://www.psicode.org/).Thanks to the interface package `openfermionpsi4`, the resulting molecular data can be stored in a molecule `.hdf5` file, to be directly loaded inside an `openfermion.MolecularData` object.The molecule files are stored in this repository's `data/molecules/` directory.The name of each file is determined by univocally converting the molecule's geometry to a string (see `MoleculeDataGenerator._generate_filename`).For each molecule, the relevant data for the ML model are also saved as a dictionary in a `.json` with the same name, in the structured directory `data/json/`.The data in the json files is accessible and usable without dependency on any of the aforementioned packages.To generate and save a molecule and relative data, given the geometry, it is sufficient to instantiate the object `MoleculeDataGenerator(geometry)`. ###Code from convoQC.utils.generate_lib import MoleculeDataGenerator # Set the geometry and generate the molecule geometry = ( ('H', (0., 0., 0.)), ('H', (1., 0., 0.)), ('H', (0., 1., 0.)), ('H', (0., 0., 1.)) ) # instantiating a MoleculeDataGenerator is enough to create molecule+data files gen = MoleculeDataGenerator(geometry) # the object contains the molecule and the relative data dictionary print(gen.molecule) print(gen.data_dict.keys()) print(gen.filename) ###Output H,0,0,0;H,1,0,0;H,0,1,0;H,0,0,1 ###Markdown Check that degenerate ground state subspace basis vectors are orthonormal ###Code import numpy as np from itertools import combinations_with_replacement ground_states = gen.data_dict['ground_states'] for i, j in combinations_with_replacement(range(3), 2): print(f'|<{i}|{j}>|^2 = ', round(np.abs(ground_states[:,i].conj() @ ground_states[:,j])**2, 12)) ###Output |<0|0>|^2 = 1.0 |<0|1>|^2 = 0.0 |<0|2>|^2 = 0.0 |<1|1>|^2 = 1.0 |<1|2>|^2 = 0.0 |<2|2>|^2 = 1.0 ###Markdown Activate and use the following cell to erase the `H,0,0,0;H,1,0,0;H,0,1,0;H,0,0,1` files and test molecule and data generation. ###Code from convoQC.utils import MOLECULES_DIR, JSON_DIR os.remove(MOLECULES_DIR + gen.filename + '.hdf5') os.remove(JSON_DIR + gen.filename + '.json') ###Output _____no_output_____ ###Markdown Chosen molecule familyThe first chosen molecule family for this project is $\mathrm{H}_4$ in various geometries.Some physical limits for the geometry are set:- For each pair of H atoms, the interatomic distance is not smaller than $0.4Å$. This avoids exaggerate orbital ovelaps- For each pair of adjacent atoms (in the ordered in which they're listed in the geometry) the interatomic distance is no more than $1.5Å$. 
This avoids completely dissociated molecules.Additionally, parameters that are irrelevant for the calculation of translation-invariant and rotation-invariant properties are (for now) fixed:- the fist atom is always at position $(0, 0, 0)$- the second atom is always on the positive X half-axis $(x_1, 0, 0)$, $x_1>0$- the third atom is always on the XY plane $(x_2, y_2, 0)$- the fourth can be anywhere in space, within the previously set limits $(x_3, y_3, z_3)$Finally, for convenience in file naming and data exchange, we keep only 4 decimals in all the $x_i, y_i, z_i$ values:- all positions are forced on a grid with resolution $0.001Å$ ###Code from convoQC.utils import H4_generate_random_molecule help(H4_generate_random_molecule) from convoQC.utils.generate_lib import H4_generate_valid_geometry H4_generate_valid_geometry() ###Output _____no_output_____ ###Markdown Generate random H4 data ###Code n_molecules_to_generate = 500 print(f'Do you really want to generate {n_molecules_to_generate} ' 'new molecules? [y/n]') inp = input() if inp == 'y': from tqdm import tqdm from convoQC.utils import H4_generate_random_molecule, FailedGeneration for _ in tqdm(range(n_molecules_to_generate)): for attempt in range(10): try: H4_generate_random_molecule() except FailedGeneration as exc: print('Failed to generate random molecule because of:\n' + str(exc)) else: break ###Output Do you really want to generate 500 new molecules? [y/n] n ###Markdown Loading data for QML modelOnly the function `load_data` in `load_lib` is needed to load the relevant data for the QML model.`load_lib` also defined `JSON_DIR` and `MOLECULES_DIR` for convenience. ###Code from convoQC.utils import load_data, JSON_DIR ###Output _____no_output_____ ###Markdown To load all data: ###Code dataset = [load_data(filename) for filename in os.listdir(JSON_DIR) if filename.endswith('.json')] print('length of the dataset:', len(dataset)) ###Output length of the dataset: 501 ###Markdown **Example:** count how many of the molecules in the dataset have a singlet ground state and how many have a triplet ###Code multiplicities = [load_data(filename)['multiplicity'] for filename in os.listdir(JSON_DIR) if filename.endswith('.json')] from collections import Counter count = dict(Counter(multiplicities)) length = len(multiplicities) for k, v in count.items(): print(f'{v/length*100:0.1f}% with multiplicity {k} ({v}/{length})') hf_errors = [d['hf_energy'] - d['exact_energy'] for d in dataset] print(f'Average HF error: {np.mean(hf_errors):.3f} ' f'with stddev {np.std(hf_errors):.3f}') ###Output Average HF error: 0.051 with stddev 0.030 ###Markdown What are the saved data Let's take as an example one data dictionary: ###Code from convoQC.utils import load_data import os filename = 'H,0,0,0;H,1,0,0;H,0,1,0;H,0,0,1' data_dict = load_data(filename) print('\ncontent of each data dictionary\n') print(f"{'KEY:':20} {'VALUE TYPE:':20}\n{'-'*60}") for k, v in data_dict.items(): print(f'{k:20} {str(type(v)):20}', f'with shape {v.shape}' if isinstance(v, np.ndarray) else "") ###Output content of each data dictionary KEY: VALUE TYPE: ------------------------------------------------------------ geometry <class 'list'> multiplicity <class 'int'> canonical_orbitals <class 'numpy.ndarray'> with shape (4, 4) canonical_to_oao <class 'numpy.ndarray'> with shape (4, 4) orbital_energies <class 'numpy.ndarray'> with shape (4,) exact_energy <class 'float'> ground_states <class 'numpy.ndarray'> with shape (256, 3) hf_energy <class 'float'> ###Markdown Input `geometry` is a list of 
tuples ('atom_symbol', (x, y, z)): ###Code data_dict['geometry'] ###Output _____no_output_____ ###Markdown `multiplicity` indicates wether the ground state of this molecule is a singlet (1) or triplet (3).All the ground states are saved as complex **column vectors** in a matrix of shape ($2^n$, `multiplicity` ) ###Code print('multiplicity: ', data_dict['multiplicity']) print('ground states: \n', data_dict['ground_states'].round(3)) ###Output multiplicity: 3 ground states: [[ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [-0.089+0.j -0. +0.j 0.001+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [-0.001+0.j 0.001+0.j -0.063+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [-0.001+0.j 0.001+0.j -0.063+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j -0.089+0.j -0.002+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [-0.103+0.j -0. +0.j 0.001+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0.09 +0.j 0. +0.j -0.001+0.j] [-0. +0.j 0. +0.j -0.009+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0.09 +0.j 0. +0.j -0.001+0.j] [-0. +0.j 0. +0.j -0.009+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0.001+0.j -0.003+0.j 0.128+0.j] [-0. +0.j 0.078+0.j 0.002+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. 
+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [-0.078+0.j -0. +0.j 0.001+0.j] [-0.001+0.j 0.003+0.j -0.128+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j -0. +0.j 0.009+0.j] [ 0. +0.j -0.09 +0.j -0.002+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j -0. +0.j 0.009+0.j] [ 0. +0.j -0.09 +0.j -0.002+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [-0. +0.j 0.103+0.j 0.002+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0.979+0.j 0.003+0.j -0.011+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0.008+0.j -0.015+0.j 0.692+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0.008+0.j -0.015+0.j 0.692+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [-0.003+0.j 0.979+0.j 0.021+0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. 
+0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j] [ 0. +0.j 0. +0.j 0. +0.j]] ###Markdown output `exact_energy` encodes the exact ground-state energy of the molecule, i.e. the target output of the QML model for this first stage of the project. ###Code data_dict['exact_energy'] ###Output _____no_output_____ ###Markdown benchmarking `hf_energy` stores the Hartree-Fock (HF) energy, i.e. the minimum energy of any single slater-determinant state, under mean field interactions.This is the quantity that is minimized by the HF variational self-consistent mean field method.Hartree-Fock is the first, "roughest" estimate of the ground state energy provided by quantum chemistry calculations.The zeroth requirement of a post-HF method for esrimating the ground state energy, is for its result to be more precise than the HF energy. ###Code data_dict['hf_energy'] ###Output _____no_output_____ ###Markdown Atomic and molecular orbitals (eventual additional input) Atomic orbitals are a physically-motivated, non-complete and non-orthogonal parametrization of scalar funcions of space (fields $\phi(\vec{x})$).To each atom we can assign a set of spherical harmonics centered in the atom's position and with radii dependent on the atom's charge. Each (infinite) set of spherical harmonics forms a orthonormal base for fields.If we model an atom as a point charge, these spherical harmonics will be the eigen-wavefunctions for a single electron in the atom's field.Truncating the number of spherical harmonics allows us to construct a *finite* parametrization of space that can approximate the low-energy electron field, hopefully also in the interacting and poly-atomic case.The basis functions of these parametrization are called atomic orbitals. The *minimal basis* approximation, in the case of Hydrogen atoms, prescribes that we take a single spherically symmetric *atomic orbital* for each Hydrogen - to which we have to add spin information.This results in two *spin-orbitals* per each Hydrogen atom: a total of 4 atomic orbitals (i.e. 8 spin-orbitals) for an H4 molecule.To construct an orthonormal (of course still incomplete) parametrization of fields, we take linear combinations of these 4 orbitals. The *canonical orbitals* a a special orthonormal combination of atomic orbitals, obtained through the Hartree-Fock method.The ground-state Slater determinant constructed on the canonical orbitals minimizes the total energy, accounting for electron-electron interactions with a mean-field approach.The `canonical_orbitals` matrix encodes the linear combination of atomic orbitals taken to construct the canonical orbitals: ###Code data_dict['canonical_orbitals'].round(3) ###Output _____no_output_____ ###Markdown The rows relate to each atomic orbital, ordered as the respective atoms appear in `geometry`.The columns represent molecular orbitals, ordered by increasing single-particle energy.These are the energies saved in `orbital_energies`. ###Code data_dict['orbital_energies'].round(3) ###Output _____no_output_____ ###Markdown Typically on a quantum computer, under Jordan-Wigner encoding, each pair of qubits will represent a canonical orbital (2 qubits because for each orbital there are two spins, i.e. 
two spin-orbitals).The `ground_states` saved in these data are encoded in this way.The `canonical_to_oao` unitary matrix encodes which linear combination of atomic orbitals needs to be taken to construct a orthonormal version of the atomic orbitals (Orthogonal Atomic Orbitals, OAO).Columns encode the OAOs shape in the canonical orbital basis.This might be useful in later stages of the project: using a Givens rotations circuit we can change the state encoding such that each qubit corresponds to one spin-OAO.This would allow to directly connect the quantum state to the geometry, as each orbital would be "localized" at the position of the respective atom. ###Code data_dict['canonical_to_oao'].round(3) ###Output _____no_output_____ ###Markdown details on OAOSome details about the symmetric orthogonalization procedure (Lödwin's method) used to obtain the orthogonal atomic orbitals.The `canonical_orbitals` the matrix $M$ represents, as column vectors, the *canonical orbitals* in the (nonorthogonal) basis of the *atomic orbitals*.The inverse, $P$, transforms a column vector from the atomic orbital basis to the canonical orbitals basis.As canonical orbitals are orthogonal, in their basis inner product is simply vector dot product: we use the transformation $P$ to calculate the inner product of the atomic orbital basis functions. With this technique we compute the atomic orbital overlap matrix $S$.Lödwin's method prescribes to use the inverse of the Hermitian square root of the overlap, $C=\sqrt{S^{-1}}$ to orthonormalize the atomic orbital functions, obtaining the ***orthogonal atomic orbitals (OAO)***.Clearly, the resulting basis is orthonormal as$$\langle \mathrm{OAO}_i \vert \mathrm{OAO}_j \rangle = C_{il} \langle \mathrm{AO}_l \vert \mathrm{AO}_k \rangle C_{kj}= ( C \cdot S \cdot C )_{ij} = (C \cdot C^{-2} \cdot C)_{ij} = \delta_{ij}$$Finally, combining $C$ and $M$, we get the unitary matrix representing the canonical orbitals in the orthogonal atomic orbital basis, and its inverse.This last one, is the one saved as `canonical_to_oao` in the data dictionary ###Code from scipy.linalg import sqrtm, inv M = data_dict['canonical_orbitals'] print('M (CO column vectors in the AO basis)\n', M.round(3)) print() P = inv(M) print('P = M^-1 (AO column vectors in the CO basis)\n', P.round(3)) # CO to AO print() S = P.T @ P print('S (overlap matrix of AOs)\n', S.round(5)) print() C = inv(sqrtm(S)) print('C (OAO column vectors in AO basis)\n', C.round(3)) print() co_to_oao = (sqrtm(P.T @ P) @ M) print('U (CO column vectors in OAO basis)\n', co_to_oao.round(3)) # CO from OAO print() oao_to_co = (P @ sqrtm(M @ M.T)) # == P @ C print('U^dag == `canonical_to_oao` (OAO column vectors in CO basis)\n', oao_to_co.round(3)) # OAO from CO ###Output M (CO column vectors in the AO basis) [[ 0.474 -0. 0. 1.287] [ 0.282 0.105 0.963 -0.563] [ 0.282 -0.887 -0.391 -0.563] [ 0.282 0.782 -0.572 -0.563]] P = M^-1 (AO column vectors in the CO basis) [[ 0.894 0.681 0.681 0.681] [ 0. 0.074 -0.63 0.555] [ 0. 0.684 -0.278 -0.406] [ 0.448 -0.251 -0.251 -0.251]] S (overlap matrix of AOs) [[1. 0.49648 0.49648 0.49648] [0.49648 1. 0.2899 0.2899 ] [0.49648 0.2899 1. 0.2899 ] [0.49648 0.2899 0.2899 1. ]] C (OAO column vectors in AO basis) [[ 1.296 -0.258 -0.258 -0.258] [-0.258 1.123 -0.064 -0.064] [-0.258 -0.064 1.123 -0.064] [-0.258 -0.064 -0.064 1.123]] U (CO column vectors in OAO basis) [[ 0.632 -0. 0. 
0.775] [ 0.447 0.088 0.812 -0.365] [ 0.447 -0.747 -0.329 -0.365] [ 0.447 0.659 -0.482 -0.365]] U^dag == `canonical_to_oao` (OAO column vectors in CO basis) [[ 0.632 0.447 0.447 0.447] [-0. 0.088 -0.747 0.659] [ 0. 0.812 -0.329 -0.482] [ 0.775 -0.365 -0.365 -0.365]] ###Markdown UCC state preparation data In a separated script (`../scripts/ucc_optimization`) we optimize a parametrized quantum circuit (PQC) to ground states of each molecule.The circuit we use is based on the unitary coupled cluster ansatz (UCC), a variational ansatz with foundations in perturbation theory.The optimal parameters for the PQC can be found in the folder `utils.UCC_DIR`, and are loaded from each file using `utils.load_ucc_data`.A utility function in `convoQC.utils.tfq_utils` takes care of the circuit generation. ###Code # choose file filename = 'H,0,0,0;H,1,0,0;H,0,1,0;H,0,0,1' # load parameters from convoQC.utils.tfq_utils import tensorable_ucc_circuit tfq_circuit = tensorable_ucc_circuit(filename) import tensorflow_quantum as tfq tensor = tfq.convert_to_tensor([tfq_circuit]) # check this circuit works as the actual UCC circuit import numpy as np import cirq from convoQC.utils import load_data, load_ucc_data ground_states = load_data(filename)['ground_states'] s = cirq.Simulator() state = s.simulate(tfq_circuit).final_state print('fidelity to GS of state prepared by tfq.util.exponential: ', np.sum(np.abs(state @ ground_states.conj())**2)) print('fidelity to GS from optimized UCC circuit: ', 1 - load_ucc_data(filename)['infidelity']) ###Output fidelity to GS of state prepared by tfq.util.exponential: 0.9990187419473717 fidelity to GS from optimized UCC circuit: 0.9991000951269119 ###Markdown Study of the difference between original UCC and TFQ circuitThis paragraph is used to analyse the discrepancy in these fidelities, and is not relevant for the user of the data (i.e. you can skip to the conclusion). 
###Code from convoQC.ansatz_functions.ucc_functions import ( generate_ucc_amplitudes, generate_ucc_operators) from convoQC.scripts.optimize_ucc import (load_molecule, singlet_hf_generator, triplet_hf_generator) from convoQC.ansatz_functions.qubitoperator_to_paulistring_translator \ import qubitoperator_to_pauli_string import numpy as np import openfermion singles, doubles = generate_ucc_amplitudes(4, 8) ucc_ferop = generate_ucc_operators(singles, doubles) multiplicity = load_data(filename)['multiplicity'] params = load_ucc_data(filename)['params'] ucc_circuit = cirq.Circuit( singlet_hf_generator(4, 4) if multiplicity == 1 else triplet_hf_generator(4, 4) ) for fop, param in zip(ucc_ferop, params): qubit_op = openfermion.jordan_wigner(fop) for ps, val in qubit_op.terms.items(): pauli_string = qubitoperator_to_pauli_string( openfermion.QubitOperator(ps, np.sign(val))) ucc_circuit.append(cirq.PauliStringPhasor(pauli_string, exponent_neg=param)) ucc_layer = ucc_circuit[:2] tfq_layer = tfq_circuit[:12] print(ucc_layer, '\n\n') print(tfq_layer, '\n\n') state_tfq_layer = s.simulate(tfq_layer).final_state state_ucc_layer = s.simulate(ucc_layer).final_state print('|<UCC_layer|TFQ_layer>|^2 =', np.abs(state_tfq_layer @ state_ucc_layer.conj())**2) ucc_unitary = ucc_circuit.unitary() tfq_unitary = tfq_circuit.unitary() state_tfq = s.simulate(ucc_circuit).final_state state_ucc = s.simulate(tfq_circuit).final_state ground_state = load_data(filename)['ground_states'] print('|<UCC|TFQ>|^2 =', np.abs(state_tfq @ state_ucc.conj())**2) print('UCC GS fidelity =', np.sum(np.abs(state_ucc @ ground_states.conj())**2)) print('TFQ GS fidelity =', np.sum(np.abs(state_tfq @ ground_states.conj())**2)) print('unitary fidelity |Tr[U_UCC @ U_TFQ^dag]|/2^8 =', np.abs(np.trace(ucc_unitary @ tfq_unitary.conj().T))/2**8) ###Output |<UCC|TFQ>|^2 = 0.9999208465981155 UCC GS fidelity = 0.9990187419473717 TFQ GS fidelity = 0.9991000951269119 unitary fidelity |Tr[U_UCC @ U_TFQ^dag]|/2^8 = 1.0000000000000748 ###Markdown **CONCLUSION:** the discrepancy is due to numerical error. Utilities List files ###Code import os from convoQC.utils import MOLECULES_DIR, JSON_DIR, UCC_DIR molecule_files = sorted(os.listdir(MOLECULES_DIR)) data_files = sorted(os.listdir(JSON_DIR)) ucc_files = sorted(os.listdir(UCC_DIR)) print(f'MOLECULES_DIR content: {len(molecule_files)} files') print(*molecule_files[:5], '...' if len(molecule_files)>5 else '', sep='\n') print(f'\nJSON_DIR content: {len(data_files)} files') print(*data_files[:5], '...' if len(data_files)>5 else '', sep='\n') print(f'\nUCC_DIR content: {len(ucc_files)} files') print(*ucc_files[:5], '...' if len(ucc_files)>5 else '', sep='\n') ###Output MOLECULES_DIR content: 501 files H,0,0,0;H,0.401,0,0;H,-0.1196,0.9943,0;H,0.0526,0.2628,-0.3113.hdf5 H,0,0,0;H,0.4023,0,0;H,0.3393,1.0607,0;H,-0.0812,-0.0573,-0.426.hdf5 H,0,0,0;H,0.4028,0,0;H,-0.4404,0.4811,0;H,0.0828,-0.7849,-0.2394.hdf5 H,0,0,0;H,0.4059,0,0;H,0.3759,1.4515,0;H,0.4392,1.1007,0.7042.hdf5 H,0,0,0;H,0.4069,0,0;H,-0.3451,-0.2725,0;H,-1.0738,-0.6908,-0.3546.hdf5 ... JSON_DIR content: 501 files H,0,0,0;H,0.401,0,0;H,-0.1196,0.9943,0;H,0.0526,0.2628,-0.3113.json H,0,0,0;H,0.4023,0,0;H,0.3393,1.0607,0;H,-0.0812,-0.0573,-0.426.json H,0,0,0;H,0.4028,0,0;H,-0.4404,0.4811,0;H,0.0828,-0.7849,-0.2394.json H,0,0,0;H,0.4059,0,0;H,0.3759,1.4515,0;H,0.4392,1.1007,0.7042.json H,0,0,0;H,0.4069,0,0;H,-0.3451,-0.2725,0;H,-1.0738,-0.6908,-0.3546.json ... 
UCC_DIR content: 501 files H,0,0,0;H,0.401,0,0;H,-0.1196,0.9943,0;H,0.0526,0.2628,-0.3113.json H,0,0,0;H,0.4023,0,0;H,0.3393,1.0607,0;H,-0.0812,-0.0573,-0.426.json H,0,0,0;H,0.4028,0,0;H,-0.4404,0.4811,0;H,0.0828,-0.7849,-0.2394.json H,0,0,0;H,0.4059,0,0;H,0.3759,1.4515,0;H,0.4392,1.1007,0.7042.json H,0,0,0;H,0.4069,0,0;H,-0.3451,-0.2725,0;H,-1.0738,-0.6908,-0.3546.json ... ###Markdown Prompt to delete molecule and data files (CAUTION) ###Code from convoQC.utils import MOLECULES_DIR, JSON_DIR, UCC_DIR def clean_data(): print('remove all data files from MOLECULE_DIR? [y/n]') inp = input() if inp == 'y': for f in os.listdir(MOLECULES_DIR): os.remove(MOLECULES_DIR + f) print('remove all data files from JSON_DIR? [y/n]') inp = input() if inp == 'y': for f in os.listdir(JSON_DIR): os.remove(JSON_DIR + f) print('remove all data files from UCC_DIR? [y/n]') inp = input() if inp == 'y': for f in os.listdir(UCC_DIR): os.remove(UCC_DIR + f) clean_data() ###Output _____no_output_____
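###Markdown As a closing illustration of how these files feed the ML model: the sketch below turns each loaded data dictionary into a rotation- and translation-invariant feature vector (the six H-H pairwise distances) paired with the `exact_energy` target. It assumes the `dataset` list built in the "Loading data for QML model" section above is still in scope; the distance-based representation is only an example, not the encoding used by the convoQC model itself.
###Code
import numpy as np
from itertools import combinations

def geometry_to_distances(geometry):
    """Six pairwise H-H distances; invariant to rigid rotations and translations."""
    positions = np.array([coords for _atom, coords in geometry], dtype=float)
    return np.array([np.linalg.norm(positions[i] - positions[j])
                     for i, j in combinations(range(len(positions)), 2)])

# Simple (features, target) arrays for a regression baseline
X = np.array([geometry_to_distances(d['geometry']) for d in dataset])
y = np.array([d['exact_energy'] for d in dataset])
print(X.shape, y.shape)  # expected: (n_molecules, 6) and (n_molecules,)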
project/pre-processing.ipynb
###Markdown Decide which N – 2 categories your group would like to focus on. You are required to focus on the proprietary attributes – style and occasion. Beyond that, your group should pick N - 2 other groups to analyze.Examples of categories:- Embellishments- Category- Prints- Material ###Code Counter(clean_tag_df.name).most_common(10) attribute_list = ['category','fit','style','occasion'] for i in attribute_list: print(i) print(Counter(clean_tag_df[clean_tag_df.name==i].value.apply(tuple)).most_common(),'\n') ###Output category [(('top',), 963), (('bottom',), 899), (('shoe',), 672), (('one_piece',), 427), (('accessory',), 310), (('sweater',), 293), (('blazers_coats_jackets',), 280), (('sweatshirt_hoodie',), 118), (('accessory', 'top'), 3), (('bottom', 'top'), 1), (('blazers_coats_jackets', 'top'), 1), (('blazers_coats_jackets', 'sweater'), 1), (('top', 'sweatshirt_hoodie'), 1)] fit [(('relaxed',), 1093), (('semi_fitted',), 654), (('straight_regular',), 553), (('fitted_tailored',), 455), (('oversized',), 105), (('semi_fitted', 'relaxed'), 34), (('relaxed', 'straight_regular'), 20), (('relaxed', 'oversized'), 7), (('semi_fitted', 'straight_regular'), 7), (('fitted_tailored', 'semi_fitted'), 6), (('oversized', 'relaxed'), 5), (('semi_fitted', 'fitted_tailored'), 4), (('semi_fitted', 'oversized'), 3), (('relaxed', 'semi_fitted', 'straight_regular'), 2), (('fitted_tailored', 'relaxed'), 1)] style [(('classic', 'casual'), 353), (('edgy', 'modern', 'casual'), 175), (('classic', 'modern', 'casual'), 165), (('casual',), 161), (('business_casual', 'casual'), 152), (('modern', 'casual'), 147), (('classic', 'business_casual'), 140), (('edgy', 'glam', 'modern'), 103), (('athleisure', 'casual'), 85), (('classic', 'business_casual', 'modern', 'androgynous'), 78), (('glam', 'modern'), 76), (('classic', 'business_casual', 'casual'), 76), (('edgy', 'modern'), 76), (('edgy', 'casual'), 73), (('romantic', 'casual'), 71), (('business_casual', 'modern'), 70), (('classic', 'business_casual', 'modern'), 61), (('romantic', 'business_casual', 'casual'), 57), (('athleisure', 'modern', 'casual'), 55), (('classic', 'boho', 'casual'), 53), (('boho', 'modern', 'casual'), 53), (('androgynous', 'casual'), 47), (('modern', 'boho', 'casual'), 46), (('retro', 'casual'), 43), (('edgy', 'androgynous', 'casual'), 42), (('romantic', 'boho', 'casual'), 39), (('boho', 'casual'), 38), (('business_casual', 'modern', 'androgynous'), 32), (('business_casual', 'androgynous', 'casual'), 32), (('classic', 'business_casual', 'androgynous'), 31), (('androgynous', 'classic', 'modern', 'business_casual', 'casual'), 30), (('classic', 'athleisure', 'casual'), 29), (('edgy', 'modern', 'androgynous'), 28), (('classic', 'modern'), 27), (('classic', 'androgynous', 'casual'), 25), (('classic', 'romantic', 'casual'), 24), (('modern', 'androgynous', 'casual'), 24), (('athleisure',), 22), (('romantic', 'boho'), 21), (('business_casual',), 19), (('androgynous', 'modern', 'casual'), 19), (('edgy', 'modern', 'androgynous', 'casual'), 19), (('edgy', 'business_casual', 'modern'), 17), (('romantic', 'glam', 'modern'), 17), (('classic', 'modern', 'boho', 'casual'), 16), (('romantic', 'modern'), 15), (('classic', 'glam', 'modern'), 15), (('retro', 'romantic', 'casual'), 15), (('classic', 'modern', 'androgynous', 'casual'), 15), (('classic', 'boho', 'modern', 'casual'), 14), (('edgy', 'androgynous', 'modern', 'casual'), 14), (('romantic', 'business_casual'), 14), (('romantic', 'edgy', 'casual'), 14), (('romantic', 'glam'), 13), (('classic', 'androgynous', 
'modern', 'casual'), 13), (('classic', 'romantic', 'business_casual'), 13), (('edgy', 'business_casual', 'casual'), 13), (('retro', 'casual', 'classic'), 12), (('classic', 'business_casual', 'modern', 'casual'), 11), (('romantic', 'business_casual', 'boho', 'casual'), 11), (('modern', 'boho'), 11), (('classic', 'modern', 'androgynous'), 11), (('business_casual', 'modern', 'casual'), 11), (('boho', 'modern'), 10), (('classic', 'business_casual', 'androgynous', 'casual'), 10), (('classic', 'romantic', 'business_casual', 'casual'), 10), (('edgy', 'glam'), 10), (('modern',), 10), (('athleisure', 'modern', 'androgynous', 'casual'), 10), (('romantic', 'modern', 'casual'), 10), (('romantic', 'edgy'), 10), (('classic',), 9), (('romantic', 'business_casual', 'modern'), 9), (('athleisure', 'androgynous', 'casual'), 9), (('edgy',), 9), (('edgy', 'boho', 'modern', 'casual'), 9), (('romantic', 'glam', 'casual'), 8), (('athleisure', 'androgynous', 'classic', 'modern', 'casual'), 8), (('modern', 'androgynous'), 8), (('athleisure', 'modern'), 8), (('retro', 'business_casual'), 8), (('classic', 'athleisure', 'modern', 'casual'), 8), (('classic', 'romantic', 'glam', 'modern'), 8), (('edgy', 'business_casual', 'modern', 'androgynous'), 8), (('athleisure', 'edgy', 'modern', 'casual'), 8), (('classic', 'edgy', 'casual'), 8), (('edgy', 'modern', 'boho', 'casual'), 8), (('classic', 'retro', 'casual'), 7), (('retro', 'romantic', 'boho', 'casual'), 7), (('glam', 'classic', 'modern', 'romantic', 'business_casual'), 7), (('edgy', 'glam', 'modern', 'androgynous'), 7), (('retro', 'romantic'), 7), (('classic', 'romantic', 'glam'), 7), (('romantic', 'edgy', 'glam'), 7), (('retro', 'edgy', 'casual'), 7), (('romantic', 'business_casual', 'glam'), 7), (('edgy', 'classic', 'modern', 'boho', 'casual'), 7), (('retro', 'business_casual', 'classic'), 6), (('romantic', 'business_casual', 'boho'), 6), (('classic', 'glam'), 6), (('retro', 'androgynous', 'casual'), 6), (('athleisure', 'androgynous', 'modern', 'casual'), 6), (('business_casual', 'glam'), 6), (('retro', 'boho', 'casual'), 6), (('classic', 'romantic', 'modern', 'casual'), 5), (('androgynous', 'classic', 'modern', 'business_casual', 'retro'), 5), (('classic', 'romantic', 'boho', 'casual'), 5), (('androgynous', 'boho', 'casual'), 5), (('classic', 'romantic', 'business_casual', 'modern'), 5), (('romantic', 'boho', 'modern'), 5), (('edgy', 'business_casual'), 5), (('business_casual', 'boho', 'casual'), 5), (('business_casual', 'androgynous'), 5), (('classic', 'edgy', 'glam', 'modern'), 5), (('classic', 'edgy', 'business_casual', 'modern'), 4), (('retro', 'androgynous', 'modern', 'casual'), 4), (('edgy', 'androgynous', 'classic', 'modern', 'business_casual', 'casual'), 4), (('edgy', 'androgynous', 'classic', 'modern', 'business_casual'), 4), (('business_casual', 'glam', 'modern'), 4), (('edgy', 'glam', 'androgynous', 'modern', 'business_casual'), 4), (('glam', 'modern', 'androgynous'), 4), (('retro', 'romantic', 'boho'), 4), (('classic', 'romantic'), 4), (('business_casual', 'modern', 'androgynous', 'casual'), 4), (('romantic', 'modern', 'boho'), 4), (('romantic', 'edgy', 'glam', 'modern'), 4), (('edgy', 'boho', 'casual'), 4), (('classic', 'business_casual', 'retro'), 4), (('retro', 'athleisure', 'casual'), 4), (('classic', 'edgy', 'modern', 'casual'), 4), (('classic', 'boho', 'androgynous', 'casual'), 3), (('edgy', 'glam', 'androgynous', 'classic', 'modern', 'business_casual'), 3), (('classic', 'androgynous', 'boho', 'casual'), 3), (('edgy', 'glam', 'androgynous', 
'classic', 'modern', 'business_casual', 'casual'), 3), (('business_casual', 'modern', 'boho', 'casual'), 3), (('business_casual', 'androgynous', 'modern', 'casual'), 3), (('glam', 'modern', 'casual'), 3), (('classic', 'edgy', 'modern'), 3), (('edgy', 'business_casual', 'androgynous', 'casual'), 3), (('classic', 'business_casual', 'glam'), 3), (('classic', 'modern', 'romantic', 'business_casual', 'casual'), 3), (('boho', 'androgynous', 'modern', 'casual'), 3), (('edgy', 'modern', 'boho'), 3), (('classic', 'glam', 'casual'), 3), (('retro', 'glam'), 3), (('romantic', 'edgy', 'modern'), 3), (('retro', 'business_casual', 'casual', 'classic'), 2), (('edgy', 'glam', 'androgynous', 'classic', 'modern', 'romantic', 'business_casual'), 2), (('androgynous', 'modern', 'business_casual', 'retro', 'boho', 'casual'), 2), (('classic', 'business_casual', 'boho', 'casual'), 2), (('edgy', 'glam', 'classic', 'modern', 'business_casual'), 2), (('edgy', 'glam', 'modern', 'romantic', 'business_casual'), 2), (('athleisure', 'androgynous', 'classic', 'modern', 'business_casual', 'casual'), 2), (('glam',), 2), (('retro', 'romantic', 'glam'), 2), (('romantic', 'athleisure', 'casual'), 2), (('romantic', 'glam', 'modern', 'androgynous'), 2), (('athleisure', 'edgy', 'androgynous', 'modern', 'casual'), 2), (('edgy', 'business_casual', 'glam', 'modern'), 2), (('classic', 'romantic', 'retro'), 2), (('edgy', 'business_casual', 'modern', 'casual'), 2), (('romantic', 'business_casual', 'glam', 'modern'), 2), (('retro', 'romantic', 'business_casual', 'classic'), 2), (('classic', 'modern', 'romantic', 'business_casual', 'retro'), 2), (('retro', 'business_casual', 'androgynous', 'casual'), 2), (('classic', 'romantic', 'glam', 'casual'), 2), (('edgy', 'glam', 'classic', 'modern', 'romantic'), 2), (('romantic', 'business_casual', 'modern', 'androgynous'), 2), (('modern', 'romantic', 'business_casual', 'boho', 'casual'), 2), (('retro', 'edgy', 'androgynous', 'casual'), 2), (('classic', 'romantic', 'edgy', 'glam'), 2), (('androgynous', 'classic', 'modern', 'business_casual', 'boho', 'casual'), 2), (('classic', 'edgy', 'androgynous', 'casual'), 2), (('edgy', 'androgynous', 'classic', 'modern', 'casual'), 2), (('classic', 'romantic', 'boho'), 2), (('classic', 'modern', 'boho'), 2), (('classic', 'athleisure', 'androgynous', 'casual'), 2), (('retro', 'classic'), 2), (('romantic', 'glam', 'boho'), 2), (('romantic', 'edgy', 'glam', 'casual'), 2), (('retro', 'romantic', 'business_casual'), 2), (('athleisure', 'business_casual', 'casual'), 2), (('romantic', 'business_casual', 'glam', 'casual'), 2), (('glam', 'casual'), 2), (('edgy', 'business_casual', 'androgynous'), 2), (('edgy', 'boho', 'modern'), 2), (('athleisure', 'modern', 'androgynous'), 2), (('edgy', 'glam', 'classic', 'modern', 'boho', 'casual'), 2), (('romantic', 'athleisure', 'modern', 'casual'), 2), (('boho', 'glam', 'modern'), 2), (('classic', 'edgy', 'boho', 'casual'), 1), (('glam', 'classic', 'romantic', 'business_casual', 'casual'), 1), (('classic', 'modern', 'business_casual', 'retro', 'casual'), 1), (('edgy', 'glam', 'androgynous', 'classic', 'modern', 'romantic', 'business_casual', 'casual'), 1), (('androgynous', 'modern', 'business_casual', 'retro', 'boho'), 1), (('romantic', 'glam', 'modern', 'casual'), 1), (('business_casual', 'boho', 'androgynous', 'casual'), 1), (('edgy', 'glam', 'androgynous', 'modern', 'business_casual', 'retro', 'casual'), 1), (('androgynous', 'classic', 'modern', 'business_casual', 'retro', 'casual'), 1), (('classic', 'romantic', 
'business_casual', 'retro', 'boho', 'casual'), 1), (('androgynous', 'modern', 'business_casual', 'boho', 'casual'), 1), (('androgynous', 'modern', 'retro', 'boho', 'casual'), 1), (('glam', 'androgynous', 'classic', 'modern', 'romantic'), 1), (('glam', 'classic', 'modern', 'business_casual', 'retro'), 1), (('glam', 'androgynous', 'classic', 'modern', 'business_casual'), 1), (('classic', 'business_casual', 'glam', 'casual'), 1), (('classic', 'business_casual', 'androgynous', 'retro'), 1), (('edgy', 'androgynous', 'classic', 'business_casual', 'casual'), 1), (('glam', 'classic', 'romantic', 'boho', 'casual'), 1), (('retro', 'modern', 'androgynous', 'classic'), 1), (('retro',), 1), (('androgynous', 'classic', 'modern', 'romantic', 'business_casual'), 1), (('androgynous', 'classic', 'business_casual', 'retro', 'casual'), 1), (('athleisure', 'edgy', 'glam', 'androgynous', 'classic', 'modern', 'romantic'), 1), (('classic', 'business_casual', 'retro', 'androgynous'), 1), (('athleisure', 'androgynous', 'classic', 'modern', 'retro', 'casual'), 1), (('retro', 'androgynous'), 1), (('glam', 'androgynous', 'classic', 'modern', 'romantic', 'boho'), 1), (('retro', 'romantic', 'business_casual', 'androgynous'), 1), (('boho', 'androgynous', 'casual'), 1), (('glam', 'androgynous', 'classic', 'modern', 'business_casual', 'retro'), 1), (('retro', 'romantic', 'glam', 'androgynous'), 1), (('glam', 'androgynous', 'classic', 'modern', 'retro', 'casual'), 1), (('romantic', 'modern', 'androgynous'), 1), (('athleisure', 'edgy', 'glam', 'androgynous', 'modern', 'business_casual', 'casual'), 1), (('edgy', 'androgynous', 'modern', 'business_casual', 'retro'), 1), (('retro', 'modern', 'androgynous'), 1), (('glam', 'androgynous', 'modern', 'business_casual', 'retro', 'boho'), 1), (('edgy', 'glam', 'androgynous', 'modern', 'romantic', 'business_casual', 'retro'), 1), (('glam', 'modern', 'romantic', 'boho', 'casual'), 1), (('athleisure', 'boho', 'casual'), 1), (('edgy', 'androgynous', 'classic', 'modern', 'romantic', 'retro', 'boho', 'casual'), 1), (('athleisure', 'edgy', 'androgynous', 'classic', 'modern'), 1), (('glam', 'androgynous', 'classic', 'modern', 'romantic', 'business_casual', 'boho'), 1), (('athleisure', 'edgy', 'androgynous', 'casual'), 1), (('romantic', 'glam', 'edgy', 'business_casual'), 1), (('athleisure', 'edgy', 'androgynous', 'classic', 'modern', 'retro', 'casual'), 1), (('glam', 'androgynous', 'classic', 'modern', 'romantic', 'retro'), 1), (('retro', 'business_casual', 'modern', 'androgynous'), 1), (('romantic', 'boho', 'androgynous', 'casual'), 1), (('glam', 'androgynous', 'classic', 'modern', 'romantic', 'business_casual', 'retro'), 1), (('boho', 'business_casual', 'modern'), 1), (('romantic', 'business_casual', 'androgynous'), 1), (('athleisure', 'androgynous', 'modern', 'business_casual', 'casual'), 1), (('athleisure', 'edgy'), 1), (('romantic', 'edgy', 'business_casual', 'modern'), 1), (('glam', 'androgynous', 'modern', 'romantic', 'business_casual', 'retro'), 1), (('androgynous', 'classic', 'modern', 'retro', 'casual'), 1), (('glam', 'edgy', 'business_casual', 'modern'), 1), (('edgy', 'androgynous', 'modern', 'business_casual', 'boho', 'casual'), 1), (('edgy', 'business_casual', 'glam', 'androgynous'), 1), (('classic', 'romantic', 'business_casual', 'boho', 'casual'), 1), (('classic', 'business_casual', 'glam', 'modern'), 1), (('romantic', 'boho', 'glam', 'modern'), 1), (('edgy', 'boho', 'modern', 'androgynous'), 1), (('androgynous', 'romantic', 'retro', 'boho', 'casual'), 1), (('edgy', 
'androgynous', 'modern', 'boho', 'casual'), 1), (('classic', 'retro', 'androgynous', 'casual'), 1), (('androgynous', 'classic', 'modern', 'romantic', 'business_casual', 'boho'), 1), (('classic', 'modern', 'romantic', 'boho', 'casual'), 1), (('classic', 'modern', 'boho', 'androgynous'), 1), (('edgy', 'glam', 'androgynous', 'modern', 'romantic', 'business_casual'), 1), (('glam', 'classic', 'modern', 'romantic', 'business_casual', 'casual'), 1), (('romantic', 'boho', 'modern', 'edgy'), 1), (('androgynous', 'modern', 'romantic', 'business_casual', 'casual'), 1), (('glam', 'modern', 'androgynous', 'casual'), 1), (('glam', 'classic', 'modern', 'business_casual', 'casual'), 1), (('retro', 'business_casual', 'androgynous', 'classic'), 1), (('retro', 'business_casual', 'modern', 'casual'), 1), (('retro', 'edgy', 'modern', 'casual'), 1), (('boho', 'modern', 'androgynous', 'casual'), 1), (('edgy', 'glam', 'classic', 'modern', 'casual'), 1), (('glam', 'classic', 'modern', 'romantic', 'casual'), 1), (('edgy', 'glam', 'modern', 'casual'), 1), (('retro', 'boho', 'glam', 'modern'), 1), (('modern', 'boho', 'androgynous'), 1), (('edgy', 'androgynous', 'modern', 'business_casual', 'casual'), 1), (('edgy', 'glam', 'modern', 'business_casual', 'casual'), 1), (('classic', 'retro'), 1), (('androgynous', 'classic', 'modern', 'boho', 'casual'), 1), (('edgy', 'glam', 'classic', 'business_casual', 'casual'), 1), (('classic', 'romantic', 'modern'), 1), (('boho', 'glam', 'modern', 'casual'), 1), (('classic', 'edgy', 'business_casual', 'casual'), 1), (('romantic', 'glam', 'boho', 'casual'), 1), (('edgy', 'glam', 'modern', 'business_casual'), 1), (('modern', 'androgynous', 'boho', 'casual'), 1), (('classic', 'edgy', 'business_casual'), 1), (('classic', 'edgy', 'androgynous'), 1), (('romantic', 'androgynous', 'casual'), 1), (('business_casual', 'glam', 'modern', 'casual'), 1), (('retro', 'romantic', 'athleisure', 'casual'), 1), (('retro', 'romantic', 'androgynous', 'casual'), 1), (('retro', 'business_casual', 'casual'), 1), (('romantic', 'business_casual', 'casual', 'edgy'), 1), (('classic', 'romantic', 'business_casual', 'glam'), 1), (('retro', 'glam', 'casual'), 1), (('edgy', 'glam', 'casual'), 1), (('romantic', 'edgy', 'business_casual'), 1), (('business_casual', 'boho'), 1), (('classic', 'androgynous'), 1), (('retro', 'romantic', 'casual', 'classic'), 1), (('glam', 'edgy', 'business_casual', 'casual'), 1), (('glam', 'modern', 'boho', 'casual'), 1), (('classic', 'romantic', 'edgy', 'business_casual'), 1), (('edgy', 'glam', 'modern', 'business_casual', 'retro', 'casual'), 1), (('classic', 'romantic', 'retro', 'casual'), 1), (('classic', 'romantic', 'business_casual', 'retro'), 1), (('romantic', 'business_casual', 'modern', 'casual'), 1), (('classic', 'romantic', 'business_casual', 'boho'), 1), (('edgy', 'boho', 'glam', 'modern'), 1), (('classic', 'boho'), 1), (('edgy', 'glam', 'classic', 'modern', 'business_casual', 'retro', 'casual'), 1), (('athleisure', 'edgy', 'modern', 'androgynous'), 1), (('retro', 'athleisure', 'androgynous', 'casual'), 1), (('modern', 'glam', 'boho'), 1), (('edgy', 'glam', 'business_casual'), 1), (('retro', 'business_casual', 'androgynous'), 1), (('androgynous',), 1), (('retro', 'boho'), 1), (('classic', 'boho', 'retro', 'casual'), 1), (('classic', 'glam', 'boho', 'retro'), 1), (('classic', 'edgy', 'retro'), 1), (('romantic', 'boho', 'modern', 'casual'), 1), (('business_casual', 'modern', 'boho'), 1), (('athleisure', 'edgy', 'casual'), 1), (('retro', 'edgy', 'classic'), 1), (('classic', 'edgy', 
'business_casual', 'androgynous'), 1), (('classic', 'boho', 'modern'), 1)] occasion [(('weekend', 'day_tonight'), 1073), (('work', 'day_tonight'), 387), (('weekend', 'work', 'day_tonight'), 249), (('weekend', 'vacation'), 243), (('weekend', 'vacation', 'day_tonight'), 205), (('weekend', 'night_out', 'day_tonight'), 192), (('weekend',), 186), (('weekend', 'night_out'), 185), (('weekend', 'work'), 118), (('weekend', 'workout'), 76), (('work', 'night_out', 'day_tonight'), 72), (('night_out', 'day_tonight'), 67), (('weekend', 'night_out', 'work', 'day_tonight'), 64), (('vacation', 'day_tonight'), 64), (('weekend', 'work', 'night_out', 'day_tonight'), 60), (('weekend', 'vacation', 'work'), 59), (('night_out', 'work', 'day_tonight'), 59), (('night_out',), 55), (('cold_weather', 'work', 'weekend', 'night_out', 'day_tonight'), 52), (('work',), 49), (('workout',), 49), (('day_tonight',), 34), (('cold_weather', 'weekend', 'work', 'day_tonight'), 26), (('cold_weather', 'weekend', 'work'), 26), (('weekend', 'work', 'cold_weather'), 23), (('cold_weather', 'weekend', 'day_tonight'), 17), (('weekend', 'vacation', 'work', 'day_tonight'), 16), (('cold_weather', 'weekend'), 13), (('vacation', 'work', 'weekend', 'night_out', 'day_tonight'), 12), (('weekend', 'workout', 'day_tonight'), 11), (('cold_weather', 'day_tonight'), 11), (('cold_weather', 'work', 'night_out', 'day_tonight'), 11), (('weekend', 'cold_weather'), 11), (('weekend', 'vacation', 'night_out', 'day_tonight'), 10), (('cold_weather', 'work', 'day_tonight'), 10), (('weekend', 'vacation', 'night_out'), 9), (('vacation',), 9), (('cold_weather', 'night_out', 'work', 'day_tonight'), 7), (('cold_weather', 'weekend', 'night_out', 'day_tonight'), 7), (('weekend', 'work', 'night_out'), 6), (('weekend', 'night_out', 'work'), 6), (('cold_weather', 'night_out'), 6), (('work', 'night_out'), 5), (('weekend', 'day_tonight', 'night_out', 'cold_weather'), 5), (('weekend', 'day_tonight', 'work', 'cold_weather'), 5), (('cold_weather', 'work'), 5), (('weekend', 'workout', 'vacation'), 4), (('night_out', 'work'), 4), (('weekend', 'day_tonight', 'cold_weather'), 4), (('workout', 'day_tonight'), 4), (('vacation', 'work', 'day_tonight'), 3), (('cold_weather', 'weekend', 'workout'), 3), (('cold_weather', 'night_out', 'day_tonight'), 3), (('weekend', 'night_out', 'cold_weather'), 3), (('cold_weather', 'weekend', 'night_out'), 3), (('vacation', 'night_out'), 3), (('vacation', 'night_out', 'day_tonight'), 2), (('weekend', 'workout', 'work', 'day_tonight'), 2), (('weekend', 'workout', 'cold_weather'), 2), (('vacation', 'weekend', 'workout', 'night_out', 'day_tonight'), 1), (('weekend', 'workout', 'vacation', 'work'), 1), (('vacation', 'work', 'weekend', 'workout', 'day_tonight'), 1), (('weekend', 'night_out', 'vacation', 'work'), 1), (('weekend', 'workout', 'vacation', 'day_tonight'), 1), (('vacation', 'work', 'weekend', 'workout', 'night_out', 'day_tonight'), 1), (('cold_weather', 'weekend', 'vacation'), 1), (('weekend', 'work', 'night_out', 'cold_weather'), 1), (('vacation', 'work'), 1)] ###Markdown Join the tagged_product_attributes table with full_data and investigate the details, descriptions, and tags used for each categoryyour goal is to get a sense for the business logic and rules used in tagging a certain product category. 
###Code df = pd.merge(full_df, clean_tag_df, on = 'product_id') df['style'] = df[df.name=='style'].value.apply(list) df['occasion'] = df[df.name=='occasion'].value.apply(list) df['fit'] = df[df.name=='fit'].value.apply(list) df['category'] = df[df.name=='category'].value.apply(list) df = df.drop(['mpn','created_at','updated_at','deleted_at','bc_product_id','labels'], axis=1) def notna(x): for i in x: if isinstance(i, list): return i return '' df = df.groupby('product_id').agg({ 'brand':'first', 'product_full_name':'first', 'description':'first', 'brand_category':'first','brand_canonical_url':'first', 'details':'first', 'style':notna,'occasion':notna, 'fit':notna,'category':notna}) df input_cols = df.columns[:-4] input_df = df[input_cols].apply(lambda x: x.str.lower()).replace(np.nan,'').astype(str).copy() input_df def process_brand(x): return re.sub(r'[^a-z]', ' ', x) def process_text(x): ###Output _____no_output_____ ###Markdown Build a model that will takes as input:- product description (if any)- product name- product details (if any)- brand And outputs the predicted attributes of this product. For example, if the category you are using is fit,INPUT:- description: Blush linen Button fastenings along front 100% linen; lining: 100% cotton Dry clean Designer color: Shell Imported- brand: Zimmermann- brand_category: Clothing / Jumpsuits / Full LengthThe actual clothing’s product URL is here.OUTPUT:- predicted fit: RELAXED style ###Code style_df = df[df['style']!=np.nan].copy() style_list = style_df['style'].explode().unique() for i in style_list: style_df[i] = style_df['style'].apply(lambda x: 1 if i in x else 0) style_df input_cols = style_df.columns[:-13] input_df = style_df[input_cols].apply(lambda x: x.str.lower()) input_df = input_df.astype(str) input_df from sklearn.feature_extraction.text import CountVectorizer from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.naive_bayes import GaussianNB from sklearn.metrics import confusion_matrix # creating bag of words model def dummy(doc): return doc cv = CountVectorizer(tokenizer=dummy,preprocessor=dummy) X = cv.fit_transform(input_df.brand.apply(lambda x: [x])).toarray() y = style_df.modern.values X_train, X_test, y_train, y_test = train_test_split( X, y, test_size = 0.25, random_state = 0) # fitting naive bayes to the training set from sklearn.naive_bayes import GaussianNB from sklearn.metrics import confusion_matrix classifier = GaussianNB(); classifier.fit(X_train, y_train) # predicting test set results y_pred = classifier.predict(X_test) # making the confusion matrix cm = confusion_matrix(y_test, y_pred) accuracy_score(y_test, y_pred) style_cols = style_df.columns[8:] style_cols clean_full_df = full_df.drop(['product_id','mpn','created_at','updated_at','deleted_at','bc_product_id','labels'], axis=1) clean_full_df = clean_full_df.apply(lambda x: x.str.lower()).astype(str) cv = CountVectorizer() X = cv.fit_transform(input_df.apply(lambda x: ' '.join(x), axis=1)).toarray() y = style_df.modern.values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) classifier = GaussianNB() classifier.fit(X_train, y_train) cv.transform('a.l.c').toarray() # predicting on full data clean_full_df[i] = classifier.predict_proba() ###Output _____no_output_____ ###Markdown occasion ###Code occasion_df = df[df.category=='occasion'].copy() occasion_list = occasion_df.value.apply(list).explode().unique() for i in occasion_list: occasion_df[i] = 
occasion_df.value.apply(lambda x: 1 if i in x else 0) occasion_df occasion_df.drop(['category','value'],axis=1).to_csv('occasion_data.csv',index=False) ###Output _____no_output_____ ###Markdown fit ###Code fit_df = df[df.category=='fit'].copy() fit_list = fit_df.value.apply(list).explode().unique() for i in fit_list: fit_df[i] = fit_df.value.apply(lambda x: 1 if i in x else 0) fit_df fit_df.drop(['category','value'],axis=1).to_csv('fit_data.csv',index=False) ###Output _____no_output_____ ###Markdown category ###Code category_df = df[df.category=='category'].copy() category_list = category_df.value.apply(list).explode().unique() for i in category_list: category_df[i] = category_df.value.apply(lambda x: 1 if i in x else 0) category_df category_df.drop(['category','value'],axis=1).to_csv('category_data.csv',index=False) ###Output _____no_output_____
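###Markdown To pull the modeling steps above into one place, here is a compact multi-label sketch (added for illustration; it assumes `style_df` and `style_list` as built earlier in this notebook, and the split and estimator choices are ours). Note that `df['style'] != np.nan` is always True, so the earlier filter keeps every row; the text fields are therefore filled and concatenated before vectorizing, and one naive-Bayes classifier is fit per style tag: ###Code
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Concatenate the free-text fields into a single lower-cased document per product.
text = (style_df[['brand', 'product_full_name', 'description', 'details']]
        .fillna('').astype(str).agg(' '.join, axis=1).str.lower())
labels = style_df[list(style_list)]  # one 0/1 column per style tag

X_train, X_test, y_train, y_test = train_test_split(
    text, labels, test_size=0.25, random_state=0)

# Bag-of-words features feeding one MultinomialNB per style tag.
model = make_pipeline(CountVectorizer(),
                      MultiOutputClassifier(MultinomialNB()))
model.fit(X_train, y_train)

# Subset accuracy: a row counts as correct only if every tag is predicted correctly.
print(model.score(X_test, y_test))
###Output
_____no_output_____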
00basics.ipynb
###Markdown Copyright 2020 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown TensorFlow basics View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This guide provides a quick overview of _TensorFlow basics_. Each section of this doc is an overview of a larger topic—you can find links to full guides at the end of each section.TensorFlow is an end-to-end platform for machine learning. It supports the following:* Multidimensional-array based numeric computation (similar to NumPy.)* GPU and distributed processing* Automatic differentiation* Model construction, training, and export* And more TensorsTensorFlow operates on multidimensional arrays or _tensors_ represented as `tf.Tensor` objects. Here is a two-dimensional tensor: ###Code import tensorflow as tf x = tf.constant([[1., 2., 3.], [4., 5., 6.]]) print(x) print(x.shape) print(x.dtype) ###Output _____no_output_____ ###Markdown The most important attributes of a `tf.Tensor` are its `shape` and `dtype`:* `Tensor.shape`: tells you the size of the tensor along each of its axes.* `Tensor.dtype`: tells you the type of all the elements in the tensor. TensorFlow implements standard mathematical operations on tensors, as well as many operations specialized for machine learning.For example: ###Code x + x 5 * x x @ tf.transpose(x) tf.concat([x, x, x], axis=0) tf.nn.softmax(x, axis=-1) tf.reduce_sum(x) ###Output _____no_output_____ ###Markdown Running large calculations on CPU can be slow. When properly configured, TensorFlow can use accelerator hardware like GPUs to execute operations very quickly. ###Code if tf.config.list_physical_devices('GPU'): print("TensorFlow **IS** using the GPU") else: print("TensorFlow **IS NOT** using the GPU") ###Output _____no_output_____ ###Markdown Refer to the [Tensor guide](tensor.ipynb) for details. VariablesNormal `tf.Tensor` objects are immutable. To store model weights (or other mutable state) in TensorFlow use a `tf.Variable`. ###Code var = tf.Variable([0.0, 0.0, 0.0]) var.assign([1, 2, 3]) var.assign_add([1, 1, 1]) ###Output _____no_output_____ ###Markdown Refer to the [Variables guide](variable.ipynb) for details. Automatic differentiation_Gradient descent_ and related algorithms are a cornerstone of modern machine learning.To enable this, TensorFlow implements automatic differentiation (autodiff), which uses calculus to compute gradients. Typically you'll use this to calculate the gradient of a model's _error_ or _loss_ with respect to its weights. ###Code x = tf.Variable(1.0) def f(x): y = x**2 + 2*x - 5 return y f(x) ###Output _____no_output_____ ###Markdown At `x = 1.0`, `y = f(x) = (1**2 + 2*1 - 5) = -2`.The derivative of `y` is `y' = f'(x) = (2*x + 2) = 4`. 
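As an added quick check (not part of the original guide), a central finite difference around `x = 1.0` approximates the same slope: ###Code
# Numeric cross-check of the analytic derivative f'(1.0) = 4.
eps = 1e-4
print((f(1.0 + eps) - f(1.0 - eps)) / (2 * eps))  # ~4.0
###Output
_____no_output_____ ###Markdown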
TensorFlow can calculate this automatically: ###Code with tf.GradientTape() as tape: y = f(x) g_x = tape.gradient(y, x) # g(x) = dy/dx g_x ###Output _____no_output_____ ###Markdown This simplified example only takes the derivative with respect to a single scalar (`x`), but TensorFlow can compute the gradient with respect to any number of non-scalar tensors simultaneously. Refer to the [Autodiff guide](autodiff.ipynb) for details. Graphs and tf.functionWhile you can use TensorFlow interactively like any Python library, TensorFlow also provides tools for:* **Performance optimization**: to speed up training and inference.* **Export**: so you can save your model when it's done training.These require that you use `tf.function` to separate your pure-TensorFlow code from Python. ###Code @tf.function def my_func(x): print('Tracing.\n') return tf.reduce_sum(x) ###Output _____no_output_____ ###Markdown The first time you run the `tf.function`, although it executes in Python, it captures a complete, optimized graph representing the TensorFlow computations done within the function. ###Code x = tf.constant([1, 2, 3]) my_func(x) ###Output _____no_output_____ ###Markdown On subsequent calls TensorFlow only executes the optimized graph, skipping any non-TensorFlow steps. Below, note that `my_func` doesn't print _tracing_ since `print` is a Python function, not a TensorFlow function. ###Code x = tf.constant([10, 9, 8]) my_func(x) ###Output _____no_output_____ ###Markdown A graph may not be reusable for inputs with a different _signature_ (`shape` and `dtype`), so a new graph is generated instead: ###Code x = tf.constant([10.0, 9.1, 8.2], dtype=tf.float32) my_func(x) ###Output _____no_output_____ ###Markdown These captured graphs provide two benefits:* In many cases they provide a significant speedup in execution (though not this trivial example).* You can export these graphs, using `tf.saved_model`, to run on other systems like a [server](https://www.tensorflow.org/tfx/serving/docker) or a [mobile device](https://www.tensorflow.org/lite/guide), no Python installation required. Refer to [Intro to graphs](intro_to_graphs.ipynb) for more details. Modules, layers, and models `tf.Module` is a class for managing your `tf.Variable` objects, and the `tf.function` objects that operate on them. The `tf.Module` class is necessary to support two significant features:1. You can save and restore the values of your variables using `tf.train.Checkpoint`. This is useful during training as it is quick to save and restore a model's state.2. You can import and export the `tf.Variable` values _and_ the `tf.function` graphs using `tf.saved_model`. This allows you to run your model independently of the Python program that created it.Here is a complete example exporting a simple `tf.Module` object: ###Code class MyModule(tf.Module): def __init__(self, value): self.weight = tf.Variable(value) @tf.function def multiply(self, x): return x * self.weight mod = MyModule(3) mod.multiply(tf.constant([1, 2, 3])) ###Output _____no_output_____ ###Markdown Save the `Module`: ###Code save_path = './saved' tf.saved_model.save(mod, save_path) ###Output _____no_output_____ ###Markdown The resulting SavedModel is independent of the code that created it. You can load a SavedModel from Python, other language bindings, or [TensorFlow Serving](https://www.tensorflow.org/tfx/serving/docker). You can also convert it to run with [TensorFlow Lite](https://www.tensorflow.org/lite/guide) or [TensorFlow JS](https://www.tensorflow.org/js/guide). 
###Code reloaded = tf.saved_model.load(save_path) reloaded.multiply(tf.constant([1, 2, 3])) ###Output _____no_output_____ ###Markdown The `tf.keras.layers.Layer` and `tf.keras.Model` classes build on `tf.Module` providing additional functionality and convenience methods for building, training, and saving models. Some of these are demonstrated in the next section. Refer to [Intro to modules](intro_to_modules.ipynb) for details. Training loopsNow put this all together to build a basic model and train it from scratch.First, create some example data. This generates a cloud of points that loosely follows a quadratic curve: ###Code import matplotlib from matplotlib import pyplot as plt matplotlib.rcParams['figure.figsize'] = [9, 6] x = tf.linspace(-2, 2, 201) x = tf.cast(x, tf.float32) def f(x): y = x**2 + 2*x - 5 return y y = f(x) + tf.random.normal(shape=[201]) plt.plot(x.numpy(), y.numpy(), '.', label='Data') plt.plot(x, f(x), label='Ground truth') plt.legend(); ###Output _____no_output_____ ###Markdown Create a model: ###Code class Model(tf.keras.Model): def __init__(self, units): super().__init__() self.dense1 = tf.keras.layers.Dense(units=units, activation=tf.nn.relu, kernel_initializer=tf.random.normal, bias_initializer=tf.random.normal) self.dense2 = tf.keras.layers.Dense(1) def call(self, x, training=True): # For Keras layers/models, implement `call` instead of `__call__`. x = x[:, tf.newaxis] x = self.dense1(x) x = self.dense2(x) return tf.squeeze(x, axis=1) model = Model(64) plt.plot(x.numpy(), y.numpy(), '.', label='data') plt.plot(x, f(x), label='Ground truth') plt.plot(x, model(x), label='Untrained predictions') plt.title('Before training') plt.legend(); ###Output _____no_output_____ ###Markdown Write a basic training loop: ###Code variables = model.variables optimizer = tf.optimizers.SGD(learning_rate=0.01) for step in range(1000): with tf.GradientTape() as tape: prediction = model(x) error = (y-prediction)**2 mean_error = tf.reduce_mean(error) gradient = tape.gradient(mean_error, variables) optimizer.apply_gradients(zip(gradient, variables)) if step % 100 == 0: print(f'Mean squared error: {mean_error.numpy():0.3f}') plt.plot(x.numpy(),y.numpy(), '.', label="data") plt.plot(x, f(x), label='Ground truth') plt.plot(x, model(x), label='Trained predictions') plt.title('After training') plt.legend(); ###Output _____no_output_____ ###Markdown That's working, but remember that implementations of common training utilities are available in the `tf.keras` module. So consider using those before writing your own. To start with, the `Model.compile` and `Model.fit` methods implement a training loop for you: ###Code new_model = Model(64) new_model.compile( loss=tf.keras.losses.MSE, optimizer=tf.optimizers.SGD(learning_rate=0.01)) history = new_model.fit(x, y, epochs=100, batch_size=32, verbose=0) model.save('./my_model') plt.plot(history.history['loss']) plt.xlabel('Epoch') plt.ylim([0, max(plt.ylim())]) plt.ylabel('Loss [Mean Squared Error]') plt.title('Keras training progress'); ###Output _____no_output_____
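###Markdown One thing the modules section above mentions but never demonstrates is `tf.train.Checkpoint`. Here is a minimal added sketch (the `Counter` class and checkpoint path are illustrative, not from the original guide) of saving and restoring variable values: ###Code
import tensorflow as tf

class Counter(tf.Module):
    def __init__(self):
        self.count = tf.Variable(0)

counter = Counter()
ckpt = tf.train.Checkpoint(model=counter)

counter.count.assign(42)
ckpt_path = ckpt.save('./ckpts/demo')  # write the variable values to disk

counter.count.assign(0)                # overwrite the value...
ckpt.restore(ckpt_path)                # ...then bring it back from the checkpoint
print(counter.count.numpy())           # 42
###Output
_____no_output_____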
Advanced Computer Vision and Deep Learning/LSTMs in Pytorch/Character-Level RNN/Chararacter-Level RNN, Exercise.ipynb
###Markdown Character-Level LSTM in PyTorchIn this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. **This model will be able to generate new text based on the text from the book!**This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN. First let's load in our required resources for data loading and model creation. ###Code import numpy as np import torch from torch import nn import torch.nn.functional as F ###Output _____no_output_____ ###Markdown Load in DataThen, we'll load the Anna Karenina text file and convert it into integers for our network to use. TokenizationIn the second cell, below, I'm creating a couple **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. ###Code # open text file and read in data as `text` with open('data/anna.txt', 'r') as f: text = f.read() ###Output _____no_output_____ ###Markdown Now we have the text, encode it as integers. ###Code # encode the text and map each character to an integer and vice versa # we create two dictonaries: # 1. int2char, which maps integers to characters # 2. char2int, which maps characters to unique integers chars = tuple(set(text)) int2char = dict(enumerate(chars)) char2int = {ch: ii for ii, ch in int2char.items()} encoded = np.array([char2int[ch] for ch in text]) ###Output _____no_output_____ ###Markdown Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever. ###Code text[:100] ###Output _____no_output_____ ###Markdown And we can see those same characters encoded as integers. ###Code encoded[:100] ###Output _____no_output_____ ###Markdown Pre-processing the dataAs you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded** meaning that each character is converted into an intgere (via our created dictionary) and *then* converted into a column vector where only it's corresponsing integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that! ###Code def one_hot_encode(arr, n_labels): # Initialize the the encoded array one_hot = np.zeros((np.multiply(*arr.shape), n_labels), dtype=np.float32) # Fill the appropriate elements with ones one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1. # Finally reshape it to get back to the original array one_hot = one_hot.reshape((*arr.shape, n_labels)) return one_hot ###Output _____no_output_____ ###Markdown Making training mini-batchesTo train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:In this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `n_seqs` (also refered to as "batch size" in other places). Each of those sequences will be `n_steps` long. Creating Batches**1. 
The first thing we need to do is discard some of the text so we only have completely full batches. **Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the total number of batches, $K$, we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$.**2. After that, we need to split `arr` into $N$ sequences. ** You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$.**3. Now that we have this array, we can iterate through it to get our batches. **The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.> **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.** ###Code def get_batches(arr, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the number of characters per batch batch_size = n_seqs * n_steps ## TODO: Get the number of batches we can make n_batches = len(arr)//batch_size ## TODO: Keep only enough characters to make full batches arr = arr[:n_batches * batch_size] ## TODO: Reshape into batch_size rows arr = arr.reshape((n_seqs, -1)) ## TODO: Make batches for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n:n+n_steps] # The targets, shifted by one y = np.zeros_like(x) try: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+n_steps] except IndexError: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0] yield x, y ###Output _____no_output_____ ###Markdown Test Your ImplementationNow I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 10 and 50 sequence steps. 
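Before running the real test, here is an added toy check (not part of the original exercise) of the discard-and-reshape arithmetic from steps 1 and 2 above: ###Code
# 1003 fake "characters", batched as n_seqs=10 sequences with n_steps=50 steps each.
toy = np.arange(1003)
n_seqs, n_steps = 10, 50
batch_size = n_seqs * n_steps           # 500 characters per batch
n_batches = len(toy) // batch_size      # 2 full batches fit
toy = toy[:n_batches * batch_size]      # drop the trailing 3 characters
toy = toy.reshape((n_seqs, -1))         # each row is one sequence: shape (10, 100)
print(toy.shape)
###Output
_____no_output_____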
###Code batches = get_batches(encoded, 10, 50) x, y = next(batches) print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) ###Output x [[29 60 38 47 76 25 28 15 68 18] [15 38 56 15 34 81 76 15 53 81] [ 2 57 34 50 18 18 52 0 25 26] [34 15 8 77 28 57 34 53 15 60] [15 57 76 15 57 26 33 15 26 57] [15 73 76 15 61 38 26 18 81 34] [60 25 34 15 59 81 56 25 15 14] [10 15 80 77 76 15 34 81 61 15] [76 15 57 26 34 62 76 50 15 3] [15 26 38 57 8 15 76 81 15 60]] y [[60 38 47 76 25 28 15 68 18 18] [38 56 15 34 81 76 15 53 81 57] [57 34 50 18 18 52 0 25 26 33] [15 8 77 28 57 34 53 15 60 57] [57 76 15 57 26 33 15 26 57 28] [73 76 15 61 38 26 18 81 34 44] [25 34 15 59 81 56 25 15 14 81] [15 80 77 76 15 34 81 61 15 26] [15 57 26 34 62 76 50 15 3 60] [26 38 57 8 15 76 81 15 60 25]] ###Markdown If you implemented `get_batches` correctly, the above output should look something like ```x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]]y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] ``` although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`. --- Defining the network with PyTorchBelow is where you'll define the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters. Model StructureIn `__init__` the suggested structure is as follows:* Create and store the necessary dictionaries (this has been done for you)* Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching)* Define a dropout layer with `dropout_prob`* Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters)* Finally, initialize the weights (again, this has been given)Note that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`. --- LSTM Inputs/OutputsYou can create a basic LSTM cell as follows```pythonself.lstm = nn.LSTM(input_size, n_hidden, n_layers, dropout=drop_prob, batch_first=True)```where `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the `forward` function, we can stack up the LSTM cells into layers using `.view`. With this, you pass in a list of cells and it will send the output of one cell into the next cell.We also need to create an initial cell state of all zeros. 
This is done like so```pythonself.init_weights()``` ###Code class CharRNN(nn.Module): def __init__(self, tokens, n_steps=100, n_hidden=256, n_layers=2, drop_prob=0.5, lr=0.001): super().__init__() self.drop_prob = drop_prob self.n_layers = n_layers self.n_hidden = n_hidden self.lr = lr # creating character dictionaries self.chars = tokens self.int2char = dict(enumerate(self.chars)) self.char2int = {ch: ii for ii, ch in self.int2char.items()} ## TODO: define the LSTM, self.lstm self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers, dropout=drop_prob, batch_first=True) ## TODO: define a dropout layer, self.dropout self.dropout = nn.Dropout(drop_prob) ## TODO: define the final, fully-connected output layer, self.fc self.fc = nn.Linear(n_hidden, len(self.chars)) # initialize the weights self.init_weights() def forward(self, x, hc): ''' Forward pass through the network. These inputs are x, and the hidden/cell state `hc`. ''' ## TODO: Get x, and the new hidden state (h, c) from the lstm x, (h, c) = self.lstm(x, hc) ## TODO: pass x through a droupout layer x = self.dropout(x) # Stack up LSTM outputs using view x = x.view(x.size()[0]*x.size()[1], self.n_hidden) ## TODO: put x through the fully-connected layer x = self.fc(x) # return x and the hidden state (h, c) return x, (h, c) def predict(self, char, h=None, cuda=False, top_k=None): ''' Given a character, predict the next character. Returns the predicted character and the hidden state. ''' if cuda: self.cuda() else: self.cpu() if h is None: h = self.init_hidden(1) x = np.array([[self.char2int[char]]]) x = one_hot_encode(x, len(self.chars)) inputs = torch.from_numpy(x) if cuda: inputs = inputs.cuda() h = tuple([each.data for each in h]) out, h = self.forward(inputs, h) p = F.softmax(out, dim=1).data if cuda: p = p.cpu() if top_k is None: top_ch = np.arange(len(self.chars)) else: p, top_ch = p.topk(top_k) top_ch = top_ch.numpy().squeeze() p = p.numpy().squeeze() char = np.random.choice(top_ch, p=p/p.sum()) return self.int2char[char], h def init_weights(self): ''' Initialize weights for fully connected layer ''' initrange = 0.1 # Set bias tensor to all zeros self.fc.bias.data.fill_(0) # FC weights as random uniform self.fc.weight.data.uniform_(-1, 1) def init_hidden(self, n_seqs): ''' Initializes hidden state ''' # Create two new tensors with sizes n_layers x n_seqs x n_hidden, # initialized to zero, for hidden state and cell state of LSTM weight = next(self.parameters()).data return (weight.new(self.n_layers, n_seqs, self.n_hidden).zero_(), weight.new(self.n_layers, n_seqs, self.n_hidden).zero_()) ###Output _____no_output_____ ###Markdown A note on the `predict` functionThe output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**.To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character. 
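To make that concrete, here is a small added illustration (not part of the original exercise) of softmax plus top-k sampling on a toy score vector, mirroring what `predict` does: ###Code
# Toy next-character scores -> softmax probabilities -> top-k sampling.
scores = np.array([2.0, 1.0, 0.1, -1.0])
p = np.exp(scores) / np.exp(scores).sum()   # softmax over four "characters"
top_k = 2
top_idx = p.argsort()[-top_k:]              # indices of the k most likely characters
top_p = p[top_idx] / p[top_idx].sum()       # renormalize over the top k
sampled = np.random.choice(top_idx, p=top_p)
print(p, sampled)
###Output
_____no_output_____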
###Code ## ---- keep notebook from crashing during training --- ## import os import requests import time def train(net, data, epochs=10, n_seqs=10, n_steps=50, lr=0.001, clip=5, val_frac=0.1, cuda=False, print_every=10): ''' Training a network Arguments --------- net: CharRNN network data: text data to train the network epochs: Number of epochs to train n_seqs: Number of mini-sequences per mini-batch, aka batch size n_steps: Number of character steps per mini-batch lr: learning rate clip: gradient clipping val_frac: Fraction of data to hold out for validation cuda: Train with CUDA on a GPU print_every: Number of steps for printing training and validation loss ''' net.train() opt = torch.optim.Adam(net.parameters(), lr=lr) criterion = nn.CrossEntropyLoss() # create training and validation data val_idx = int(len(data)*(1-val_frac)) data, val_data = data[:val_idx], data[val_idx:] if cuda: net.cuda() counter = 0 n_chars = len(net.chars) old_time = time.time() for e in range(epochs): h = net.init_hidden(n_seqs) for x, y in get_batches(data, n_seqs, n_steps): if time.time() - old_time > 60: old_time = time.time() requests.request("POST", "https://nebula.udacity.com/api/v1/remote/keep-alive", headers={'Authorization': "STAR " + response.text}) counter += 1 # One-hot encode our data and make them Torch tensors x = one_hot_encode(x, n_chars) inputs, targets = torch.from_numpy(x), torch.from_numpy(y) if cuda: inputs, targets = inputs.cuda(), targets.cuda() # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history h = tuple([each.data for each in h]) net.zero_grad() output, h = net.forward(inputs, h) loss = criterion(output, targets.view(n_seqs*n_steps)) loss.backward() # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs. nn.utils.clip_grad_norm_(net.parameters(), clip) opt.step() if counter % print_every == 0: # Get validation loss val_h = net.init_hidden(n_seqs) val_losses = [] for x, y in get_batches(val_data, n_seqs, n_steps): # One-hot encode our data and make them Torch tensors x = one_hot_encode(x, n_chars) x, y = torch.from_numpy(x), torch.from_numpy(y) # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history val_h = tuple([each.data for each in val_h]) inputs, targets = x, y if cuda: inputs, targets = inputs.cuda(), targets.cuda() output, val_h = net.forward(inputs, val_h) val_loss = criterion(output, targets.view(n_seqs*n_steps)) val_losses.append(val_loss.item()) print("Epoch: {}/{}...".format(e+1, epochs), "Step: {}...".format(counter), "Loss: {:.4f}...".format(loss.item()), "Val Loss: {:.4f}".format(np.mean(val_losses))) ###Output _____no_output_____ ###Markdown Time to trainNow we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batches sizes (number of sequences and number of steps), and start the training. With the train function, we can set the number of epochs, the learning rate, and other parameters. Also, we can run the training on a GPU by setting `cuda=True`. ###Code if 'net' in locals(): del net # define and print the net net = CharRNN(chars, n_hidden=512, n_layers=2) print(net) n_seqs, n_steps = 128, 100 # you may change cuda to True if you plan on using a GPU! # also, if you do, please INCREASE the epochs to 25 # Open the training log file. 
log_file = 'training_log.txt' f = open(log_file, 'w') response = requests.request("GET", "http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token", headers={"Metadata-Flavor":"Google"}) # TRAIN train(net, encoded, epochs=1, n_seqs=n_seqs, n_steps=n_steps, lr=0.001, cuda=True, print_every=10) # Close the training log file. f.close() ###Output Epoch: 1/1... Step: 10... Loss: 3.3359... Val Loss: 3.3120 Epoch: 1/1... Step: 20... Loss: 3.1744... Val Loss: 3.1940 Epoch: 1/1... Step: 30... Loss: 3.0805... Val Loss: 3.0678 Epoch: 1/1... Step: 40... Loss: 2.8895... Val Loss: 2.8990 Epoch: 1/1... Step: 50... Loss: 2.7607... Val Loss: 2.7260 Epoch: 1/1... Step: 60... Loss: 2.6045... Val Loss: 2.6203 Epoch: 1/1... Step: 70... Loss: 2.5346... Val Loss: 2.5522 Epoch: 1/1... Step: 80... Loss: 2.4702... Val Loss: 2.4991 Epoch: 1/1... Step: 90... Loss: 2.4491... Val Loss: 2.4548 Epoch: 1/1... Step: 100... Loss: 2.3878... Val Loss: 2.4197 Epoch: 1/1... Step: 110... Loss: 2.3420... Val Loss: 2.3876 Epoch: 1/1... Step: 120... Loss: 2.2809... Val Loss: 2.3534 Epoch: 1/1... Step: 130... Loss: 2.3019... Val Loss: 2.3386 ###Markdown Getting the best modelTo set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network. HyperparametersHere are the hyperparameters for the network.In defining the model:* `n_hidden` - The number of units in the hidden layers.* `n_layers` - Number of hidden LSTM layers to use.We assume that dropout probability and learning rate will be kept at the default, in this example.And in training:* `n_seqs` - Number of sequences running through the network in one pass.* `n_steps` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.* `lr` - Learning rate for trainingHere's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnntips-and-tricks).> Tips and Tricks> Monitoring Validation Loss vs. Training Loss>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)> Approximate number of parameters> The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have. 
The two important quantities to keep track of here are:> - The number of parameters in your model. This is printed when you start training.> - The size of your dataset. 1MB file is approximately 1 million characters.>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger.> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.> Best models strategy>The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.>By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. After training, we'll save the model so we can load it again later if we need too. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters. ###Code # change the name, for saving multiple files model_name = 'rnn_1_epoch.net' checkpoint = {'n_hidden': net.n_hidden, 'n_layers': net.n_layers, 'state_dict': net.state_dict(), 'tokens': net.chars} with open(model_name, 'wb') as f: torch.save(checkpoint, f) ###Output _____no_output_____ ###Markdown SamplingNow that the model is trained, we'll want to sample from it. To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text! Top K samplingOur predictions come from a categorcial probability distribution over all the possible characters. We can make the sample text and make it more reasonable to handle (with less variables) by only considering some $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text.Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from. 
###Code def sample(net, size, prime='The', top_k=None, cuda=False): if cuda: net.cuda() else: net.cpu() net.eval() # First off, run through the prime characters chars = [ch for ch in prime] h = net.init_hidden(1) for ch in prime: char, h = net.predict(ch, h, cuda=cuda, top_k=top_k) chars.append(char) # Now pass in the previous character and get a new one for ii in range(size): char, h = net.predict(chars[-1], h, cuda=cuda, top_k=top_k) chars.append(char) return ''.join(chars) print(sample(net, 2000, prime='Anna', top_k=5, cuda=True)) ###Output _____no_output_____ ###Markdown Loading a checkpoint ###Code # Here we have loaded in a model that trained over 1 epoch `rnn_1_epoch.net` with open('rnn_1_epoch.net', 'rb') as f: checkpoint = torch.load(f) loaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers']) loaded.load_state_dict(checkpoint['state_dict']) # Change cuda to True if you are using GPU! print(sample(loaded, 2000, cuda=True, top_k=5, prime="And Levin said")) ###Output _____no_output_____
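###Markdown The `predict` method called inside `sample` is defined on the `CharRNN` class earlier in the notebook and is not shown in this excerpt, so the cell below is only an illustrative, self-contained sketch of the top $K$ filtering idea described above: keep the $K$ largest probabilities, renormalize them, and sample from that reduced distribution. The helper name `sample_top_k` and the example probabilities are hypothetical and are not part of the original notebook. ###Code
import numpy as np

def sample_top_k(probs, top_k=None):
    """Sample an index from a 1-D array of probabilities, optionally keeping only the top_k largest."""
    probs = np.asarray(probs, dtype=float)
    if top_k is not None:
        keep = np.argsort(probs)[-top_k:]      # indices of the top_k most probable characters
        filtered = np.zeros_like(probs)
        filtered[keep] = probs[keep]
        probs = filtered / filtered.sum()      # renormalize so the kept probabilities sum to 1
    return np.random.choice(len(probs), p=probs)

# With top_k=2 only the two most probable entries (indices 2 and 3) can ever be drawn.
sample_top_k([0.05, 0.10, 0.60, 0.25], top_k=2)
###Output _____no_output_____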
kardioml/models/deepecg/notebooks/2_create_cross_validation_splits.ipynb
###Markdown PhysioNet/Computing in Cardiology Challenge 2020 Classification of 12-lead ECGs 2. Create Cross-Validation Dataset Setup Noteboook ###Code # Import 3rd party libraries import os import sys import json import random import numpy as np import pandas as pd from sklearn.model_selection import StratifiedKFold from iterstrat.ml_stratifiers import MultilabelStratifiedKFold # Import local Libraries sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(os.getcwd())))))) from kardioml import DATA_PATH # Configure Notebook import warnings warnings.filterwarnings('ignore') %matplotlib inline %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown Split Physionet 2020 Training Data Create Training Lookup File ###Code # Set datasets datasets = ['A', 'B', 'C', 'D', 'E', 'F'] # Create list data = list() # Loop through datasets for dataset in datasets: # Get filenames filenames = [filename.split('.')[0] for filename in os.listdir(os.path.join(DATA_PATH, dataset, 'formatted')) if 'json' in filename] # Loop through filenames for filename in filenames: # Import meta data meta_data = json.load(open(os.path.join(DATA_PATH, dataset, 'formatted', '{}.json'.format(filename)))) # Save label if meta_data['labels_training']: data.append({'filename': filename, 'labels': meta_data['labels_training'], 'dataset': dataset, 'labels_merged': meta_data['labels_training_merged']}) else: data.append({'filename': filename, 'labels': [0 for _ in range(27)], 'dataset': dataset, 'labels_merged': [0 for _ in range(27)]}) # Create DataFrame data = pd.DataFrame(data) # View DataFrame data.head() ###Output _____no_output_____ ###Markdown Cross-Validation 1 iterative-stratification ###Code # Split dataset into train/evaluate rmskf = MultilabelStratifiedKFold(n_splits=6, random_state=0) for cv_fold, (train_index, val_index) in enumerate(rmskf.split(np.stack(data['labels_merged'].values), np.stack(data['labels_merged'].values))): # Lookup file training_lookup = {'train': data.loc[train_index, 'filename'].tolist(), 'val': data.loc[val_index, 'filename'].tolist()} # Save file os.makedirs(os.path.join(DATA_PATH, 'training', 'deepecg', 'cross_validation', 'iterative_stratification'), exist_ok=True) with open(os.path.join(DATA_PATH, 'training', 'deepecg', 'cross_validation', 'iterative_stratification', 'cv_{}.json'.format(cv_fold)), 'w') as file: json.dump(training_lookup, file, sort_keys=False, indent=4) ###Output C:\Users\sebig\Documents\Code\physionet-challenge-2020\data\training\deepecg\cross_validation\iterative_stratification C:\Users\sebig\Documents\Code\physionet-challenge-2020\data\training\deepecg\cross_validation\iterative_stratification C:\Users\sebig\Documents\Code\physionet-challenge-2020\data\training\deepecg\cross_validation\iterative_stratification C:\Users\sebig\Documents\Code\physionet-challenge-2020\data\training\deepecg\cross_validation\iterative_stratification C:\Users\sebig\Documents\Code\physionet-challenge-2020\data\training\deepecg\cross_validation\iterative_stratification C:\Users\sebig\Documents\Code\physionet-challenge-2020\data\training\deepecg\cross_validation\iterative_stratification ###Markdown Cross-Validation 2 Split by dataset ###Code # Set cv splits cv_splits = [{'train': ['D', 'E', 'F'], 'val': ['A', 'B']}, {'train': ['A', 'B', 'F'], 'val': ['D', 'E']}, {'train': ['A', 'B', 'D', 'E'], 'val': ['F']}] # Loop through sample frequencies for fs in sample_frequencies: # Filter by sample frequency df = data[data['fs'] == 
fs].reset_index() # Split dataset into train/evaluate for cv_fold, cv_split in enumerate(cv_splits): # Filter train and val df_train = df[df['dataset'].isin(cv_split['train'])] df_val = df[df['dataset'].isin(cv_split['val'])] # Lookup file training_lookup = {'train': df_train['path'].tolist(), 'val': df_val['path'].tolist()} # Save file os.makedirs(os.path.join(DATA_PATH, 'training', 'deepecg', 'cross_validation', 'dataset_split', str(fs), str(cv_fold)), exist_ok=True) with open(os.path.join(DATA_PATH, 'training', 'deepecg', 'cross_validation', 'dataset_split', str(fs), str(cv_fold), 'training_lookup.json'), 'w') as file: json.dump(training_lookup, file, sort_keys=False, indent=4) ###Output _____no_output_____
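###Markdown The split-by-dataset loop above iterates over a `sample_frequencies` list and filters on an `fs` column, both of which come from earlier cells of the original notebook that are not shown here (the lookup DataFrame built above has no `fs` or `path` column). For reference only, a minimal sketch of the same dataset-based split that drops the sample-frequency dimension and uses the columns defined above (`dataset` and `filename`) could look like the cell below; this simplification is an assumption, not the notebook's actual code. ###Code
# Illustrative only: train/val split by source dataset, ignoring sample frequency.
for cv_fold, cv_split in enumerate(cv_splits):

    # Filter train and val rows by the dataset they came from
    df_train = data[data['dataset'].isin(cv_split['train'])]
    df_val = data[data['dataset'].isin(cv_split['val'])]

    # Lookup file
    training_lookup = {'train': df_train['filename'].tolist(),
                       'val': df_val['filename'].tolist()}

    # Save file
    output_dir = os.path.join(DATA_PATH, 'training', 'deepecg', 'cross_validation', 'dataset_split', str(cv_fold))
    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, 'training_lookup.json'), 'w') as file:
        json.dump(training_lookup, file, sort_keys=False, indent=4)
###Output _____no_output_____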
notebooks/era5_workflow/validate_zarrs_and_write_to_azure.ipynb
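###Markdown The validation and upload cells below use `xr`, `datetime`, `fs`, and `AzureBlobFileSystem` without showing where they come from; the corresponding setup presumably lives in earlier cells of the original notebook. A minimal setup consistent with how those names are used might look like the following; the choice of `gcsfs` for the `gs://` filesystem is an assumption. ###Code
# Assumed setup for the cells that follow (not shown in the original excerpt).
from datetime import datetime

import xarray as xr
import gcsfs
from adlfs import AzureBlobFileSystem

# Filesystem used to open the gs:// zarr stores below
fs = gcsfs.GCSFileSystem()
###Output _____no_output_____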
###Markdown Validation code for zarr stores ###Code def test_for_nans(ds, var): """ test for presence of NaNs """ assert ds[var].isnull().sum() == 0, "there are nans!" def test_date_range(ds, var): """ test that first date and last date in zarrs are correct """ start_date = datetime.strptime('01 01 1994', '%d %m %Y') end_date = datetime.strptime('31 12 2015', '%d %m %Y') ds_dates = ds.indexes['time'].to_datetimeindex() assert ds_dates[0] == start_date, "1994 is not the start date" assert ds_dates[-1] == end_date, "zarr store does not contain the full time series" def test_lat_lon_length(ds, var): """ tests that full lat/lon arrays were written to zarr store """ assert len(ds.latitude) == 640, "the full latitude array did not get written" assert len(ds.longitude) == 1280, "the full longitude array did not get written" def validate_zarr_store(ds, var): """ validate zarr store by checking for NaNs and that full time series is present """ test_for_nans(ds, var) test_date_range(ds, var) test_lat_lon_length(ds, var) ###Output _____no_output_____ ###Markdown Validate zarr stores by checking a) NaNs, b) valid date range (1994 - 2015 so we can slice the additional +/- 15 days), c) valid lat/lon lengths. Other validation was covered in previous validation steps. ###Code variables = ["tasmax", "tasmin", "dtr", "pr"] for var in variables: print("validating {}".format(var)) if var == 'pr': version = 'v3' else: version = 'v2' zarr_storepath = 'gs://impactlab-data/climate/source_data/ERA-5/downscaling/{}.1995-2014.F320.{}.zarr' store = fs.get_mapper(zarr_storepath.format(var, version), check=False) with xr.open_zarr(store, consolidated=False) as ds: validate_zarr_store(ds, var) print("finished validating zarr store for {}".format(var)) ###Output validating pr finished validating zarr store for pr ###Markdown write zarr stores to Azure (account key excluded for privacy purposes) ###Code fs_az = AzureBlobFileSystem( account_name='dc6', account_key='', client_id=os.environ.get("AZURE_CLIENT_ID", None), client_secret=os.environ.get("AZURE_CLIENT_SECRET", None), tenant_id=os.environ.get("AZURE_TENANT_ID", None)) for var in variables: if var == 'pr': version = 'v3' else: version = 'v2' zarr_storepath = 'gs://impactlab-data/climate/source_data/ERA-5/downscaling/{}.1995-2014.F320.{}.zarr' store = fs.get_mapper(zarr_storepath.format(var, version), check=False) with xr.open_zarr(store, consolidated=False) as ds: zarr_path = "clean-dev/ERA-5/F320/{}.1995-2015.F320.v2.zarr" az_zarr_direct_path = "az://clean-dev/ERA-5/{}.1995-2015.F320.v2.zarr" az_zarr_store = fs_az.get_mapper(zarr_path.format(var), check=False) ds.to_zarr(az_zarr_store, consolidated=True, mode="w") print("wrote zarr store to Azure for {}".format(var)) if 'dtr' in ds.variables: print("yes") if 'tmax' in ds.variables: print("why is tmax here") ###Output _____no_output_____
machine-learning/Covid-19_Pandemic_Study.ipynb
###Markdown Fetch the daily increase number from wikidata. This is done with best effort, so if some countries has missing datapoint we will just skip it.The school closure dates in European countries are quite similar to each other. They were all scheduled to start around the weekend of March 15. And for the sake of this exercise, I will just pick 2020 March 17 as the starting date and fetch daily increases for next 10 consecutive days. If some data were missing I will skip that country. ###Code import time for country, country_info in covid19Data.items(): # instead of using the school closure date, I picked a fixed date, as most EU countries closed school # around the same time frame, i.e. before/after the weekend of 2020-03.15 print(f"working on {country}") daily_increase = fetchCovid19Case(covid19Data[country]['wd_id'], '2020-03-17', 11) covid19Data[country]['daily_increase'] = daily_increase # sleep a little bit before sending the next request. Otherwise wikidata query service may reject my request time.sleep(5) ###Output working on Austria working on Belgium failed to fetch enough datapoints for Q84446340. got 5 working on Bulgaria failed to fetch enough datapoints for Q87486535. got 1 working on Croatia failed to fetch enough datapoints for Q87250732. got 2 working on Cyprus failed to fetch enough datapoints for Q87580938. got 2 working on Czech working on Denmark failed to fetch enough datapoints for Q86597685. got 6 working on Estonia failed to fetch enough datapoints for Q87204911. got 3 working on Finland failed to fetch enough datapoints for Q84055415. got 4 working on France working on Germany working on Greece failed to fetch enough datapoints for Q87068864. got 4 working on Hungary working on Ireland failed to parse {'nrcases': {'type': 'bnode', 'value': 't1958263443'}, 'timepoint': {'datatype': 'http://www.w3.org/2001/XMLSchema#dateTime', 'type': 'literal', 'value': '2020-03-17T00:00:00Z'}} failed to parse {'nrcases': {'type': 'bnode', 'value': 't1958263433'}, 'timepoint': {'datatype': 'http://www.w3.org/2001/XMLSchema#dateTime', 'type': 'literal', 'value': '2020-03-18T00:00:00Z'}} failed to parse {'nrcases': {'type': 'bnode', 'value': 't1958263441'}, 'timepoint': {'datatype': 'http://www.w3.org/2001/XMLSchema#dateTime', 'type': 'literal', 'value': '2020-03-19T00:00:00Z'}} failed to parse {'nrcases': {'type': 'bnode', 'value': 't1958263432'}, 'timepoint': {'datatype': 'http://www.w3.org/2001/XMLSchema#dateTime', 'type': 'literal', 'value': '2020-03-20T00:00:00Z'}} failed to parse {'nrcases': {'type': 'bnode', 'value': 't1958263457'}, 'timepoint': {'datatype': 'http://www.w3.org/2001/XMLSchema#dateTime', 'type': 'literal', 'value': '2020-03-21T00:00:00Z'}} no data reported on 2020-03-27 working on Italy working on Latvia failed to fetch enough datapoints for Q87066621. got 5 working on Lithuania failed to fetch enough datapoints for Q87250838. got 2 working on Luxembourg working on Malta failed to fetch enough datapoints for Q87587760. got 1 working on Netherlands working on Norway working on Poland working on Portugal working on Romania failed to fetch enough datapoints for Q87250752. got 2 working on Slovakia failed to fetch enough datapoints for Q87200954. got 3 working on Slovenia failed to fetch enough datapoints for Q87250948. got 4 working on Spain working on Sweden failed to fetch enough datapoints for Q84081576. got 10 working on Switzerland failed to fetch enough datapoints for Q86717788. 
got 9 working on UK working on USA working on China working on South Korea working on Japan working on Singapore failed to fetch enough datapoints for Q83873387. got 3 ###Markdown Find the countries with meaningful daily increase number from wikidata. We will only analyze thoseWe use the number of new Covid-19 cases per 100K population in the feature vector. i.e. the feature vector for each countryis the number of new cases per 100K population from 2020-03-18 to 2020-03-27. ###Code # country names countryLabels = [] # daily increase of Covid-19 cases per 100K population dailyIncreasePer100K = [] for country, countryInfo in covid19Data.items(): if countryInfo['daily_increase'] is not None: countryLabels.append(country) dailyIncreasePer100K.append(np.divide(countryInfo['daily_increase'], countryInfo['population']/100000 )) Z = np.array(dailyIncreasePer100K) Z.shape ###Output _____no_output_____ ###Markdown **So in the end, we got 17 countries left for analysis.** ###Code for country, increase in zip(countryLabels, dailyIncreasePer100K): print(f"{country}: {increase}") ###Output Austria: [ 3.56445049 4.1660934 4.25690743 4.83584684 4.88125385 7.71919214 10.80686899 7.76459915 9.51276913 11.3631049 ] Czech: [1.03288325 1.92491878 1.16434111 1.48359594 1.10800203 1.14556142 1.73712182 2.73244568 2.43197055 3.50241319] France: [2.10722219 2.79312001 2.42690761 2.77210782 3.34694123 4.75325689 3.67112925 4.39905145 5.88641412 5.71681575] Germany: [1.2531675 3.3686393 8.80825214 3.77633967 3.98199384 5.33738709 2.81662022 5.95795755 6.95135136 7.56951652] Hungary: [0.08281273 0.15527386 0.12421909 0.18632863 0.28984454 0.37265727 0.20703181 0.40371204 0.36230568 0.40371204] Italy: [ 6.97481639 8.82338313 9.92423363 10.87089875 9.21796508 7.93971849 8.70235589 8.6376975 10.20110417 9.87947013] Luxembourg: [10.06425146 21.08700306 23.80275345 29.71350431 20.44800296 12.30075178 35.78400519 37.38150542 19.17000278 24.28200352] Netherlands: [2.02330721 2.39171286 3.12267645 3.72499045 3.35073709 3.18700125 4.74249176 4.98224782 5.95881517 6.8535146 ] Norway: [2.14249252 2.4033177 3.53977025 3.42798803 3.83785617 4.45265837 3.6329221 6.52062941 4.47128874 7.91790714] Poland: [0.21335498 0.17692852 0.03122268 0.22116065 0.47354398 0.29921735 0.39548728 0.3902835 0.4423213 0.43711752] Portugal: [1.83018868 1.3490566 2.21698113 2.45283019 3.01886792 4.33962264 2.8490566 5.97169811 5.17924528 6.83018868] Spain: [ 5.42832221 7.33828744 6.05927377 10.57859798 7.79813349 9.6610447 14.08198324 16.97580513 18.346788 16.83464308] UK: [1.02389689 0.97997232 1.0693361 1.56765278 1.01329441 1.46465724 2.16139181 2.19925782 3.22466935 4.29854937] USA: [0.90789994 1.37876539 1.71861276 1.48394892 2.89100929 3.13366954 3.40616254 4.10738607 5.29731319 5.74849518] China: [0.00411488 0.00893923 0.00822977 0.00581759 0.00730747 0.01035816 0.00716557 0.00801693 0.00830071 0.01078383] South Korea: [0.18070112 0.29533946 0.16904298 0.47604058 0.12435346 0.14766973 0.19430228 0.20207437 0.17681507 0.28368132] Japan: [0. 
0.0347042 0.06073235 0.03628167 0.03943659 0.03391547 0.03076054 0.05126757 0.07729572 0.07571826] ###Markdown Cluster analysis First, let's try to find the number of clusters ###Code m = Z.shape[0] # Number of data points err_clustering = np.zeros((8,1)) # Array for storing clustering errors L=100 for k_minus_1 in range(8): k_means = KMeans(n_clusters = k_minus_1+1, max_iter = L).fit(Z) err_clustering[k_minus_1] = 1/m * k_means.inertia_ print(f'Clustering errors: \n{err_clustering}') # Plot the clustering error as a function of the number k of clusters plt.figure(figsize=(8,6)) plt.plot(range(1,9), err_clustering) plt.xlabel('Number of clusters') plt.ylabel('Clustering error') plt.title("The number of clusters vs clustering error") plt.show() ###Output Clustering errors: [[372.98790041] [116.44131822] [ 42.71794933] [ 19.22049211] [ 10.61083353] [ 6.82897086] [ 3.80305313] [ 1.42176723]] ###Markdown From the above graph, It seems 3~4 cluster are the most appropriate. Let's do the K-Means clustering on Z, with 4 clusters ###Code k = 4 # Define number of clusters to use k_means = KMeans(n_clusters = k, max_iter = 100).fit(Z) # Apply k-means with k=4 cluster and using maximum 100 iterations cluster_means = k_means.cluster_centers_ # Get cluster means (centers) cluster_indices = k_means.labels_ # Get the cluster labels for each data point #print the cluster labels and the relevant countries clusters={} for country, idx in zip(countryLabels, cluster_indices): if idx not in clusters: clusters[idx] = [country] else: clusters[idx].append(country) import pprint pp = pprint.PrettyPrinter(indent=4) pp.pprint(clusters) ###Output { 0: ['Czech', 'Hungary', 'Poland', 'UK', 'China', 'South Korea', 'Japan'], 1: ['Luxembourg'], 2: ['Austria', 'Italy', 'Spain'], 3: ['France', 'Germany', 'Netherlands', 'Norway', 'Portugal', 'USA']} ###Markdown Let's do PCA to transform the original data Z with 10 features into X with 2 features and then plot these countries ###Code from sklearn.decomposition import PCA from sklearn.metrics import mean_squared_error n = 2 # Define number of principal components pca = PCA(n_components=n) # Define pca object with n components pca.fit(Z) # Fit pca object on Z W_pca = pca.components_ # Now the principle components becomes X X = pca.transform(Z) # reconstruct from X to get Z_hat Z_hat = pca.inverse_transform(X) # Now calculating err_pca err_pca = mean_squared_error(Z, Z_hat)*len(Z[0]) #raise NotImplementedError() print(f'Shape of Z: {Z.shape}') print(f'Shape of X: {X.shape}') print(f'Shape of compression matrix W: {W_pca.shape}') print(f'PCA error: {err_pca}') def plottingCluster(data, centroids=None, clusters=None): # This function will later on be used for plotting the clusters and centroids. 
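###Markdown As an added cross-check on the elbow plot above (this cell is not part of the original analysis), the average silhouette score can be computed for a few candidate values of $k$ using the same `Z` matrix; higher scores indicate better-separated clusters. ###Code
# Added cross-check: average silhouette score for several candidate cluster counts.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for k in range(2, 7):
    labels = KMeans(n_clusters=k, max_iter=100).fit_predict(Z)
    print('k = {}: average silhouette score = {:.3f}'.format(k, silhouette_score(Z, labels)))
###Output _____no_output_____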
But now we use it to just make a scatter plot of the data # Input: the data as an array, cluster means (centroids), cluster assignemnts in {0,1,...,k-1} # Output: a scatter plot of the data in the clusters with cluster means plt.figure(figsize=(12,12)) data_colors = ['orangered', 'dodgerblue', 'springgreen', 'blueviolet', 'gold'] centroid_colors = ['red', 'darkblue', 'limegreen', 'violet', 'yellow'] # Colors for the centroids plt.style.use('ggplot') plt.title("PCA transformed data") plt.xlabel("$x_1$") plt.ylabel("$x_2$") alp = 0.5 # data points alpha dt_sz = 20 # marker size for data points cent_sz = 130 # centroid sz if centroids is None and clusters is None: plt.scatter(data[:,0], data[:,1],s=dt_sz,alpha=alp ,c=data_colors[0]) if centroids is not None and clusters is None: plt.scatter(data[:,0], data[:,1],s=dt_sz,alpha=alp, c=data_colors[0]) plt.scatter(centroids[:,0], centroids[:,1], marker="x", s=cent_sz, c=centroid_colors[:len(centroids)]) if centroids is not None and clusters is not None: plt.scatter(data[:,0], data[:,1], c=[data_colors[i] for i in clusters], s=dt_sz, alpha=alp) plt.scatter(centroids[:,0], centroids[:,1], marker="x", c=centroid_colors[:len(centroids)], s=cent_sz) if centroids is None and clusters is not None: plt.scatter(data[:,0], data[:,1], c=[data_colors[i-1] for i in clusters], s=dt_sz, alpha=alp) for x,y,label in zip(data[:,0],data[:,1],countryLabels): plt.annotate(label, # this is the text (x,y), # this is the point to label textcoords="offset points", # how to position the text xytext=(0,5), # distance from text to points (x,y) ha='center',fontsize=10) # horizontal alignment can be left, right or center plt.show() m, n = X.shape # Get the number of data points m and number of features n k = 4 # Define number of clusters to use cluster_means = np.zeros((k,n)) # Store the resulting clustering means in the rows of this np array cluster_labels = np.zeros(m) # Store here the resulting cluster indices (one for each data point) k_means = KMeans(n_clusters = k, max_iter = 100).fit(X) # Apply k-means with k=3 cluster and using maximum 100 iterations cluster_means = k_means.cluster_centers_ # Get cluster means (centers) cluster_indices = k_means.labels_ # Get the cluster labels for each data point # Plot the clustered data plottingCluster(X, centroids=cluster_means, clusters=cluster_indices) #print the cluster labels clusters={} for country, idx in zip(countryLabels, cluster_indices): if idx not in clusters: clusters[idx] = [country] else: clusters[idx].append(country) import pprint pp = pprint.PrettyPrinter(indent=4) pp.pprint(clusters) ###Output _____no_output_____
DC Solutions 2017/Jupyter Notebooks/.ipynb_checkpoints/DC_Python_Tutorial_2_Solution-checkpoint.ipynb
###Markdown --- Functions When it becomes necessary to do the same calculation multiple times, it is useful to create a function to facilitate the calculation in the future.- Function blocks begin with the keyword def followed by the function name and parentheses ( ).- Any input parameters or arguments should be placed within these parentheses. - The code block within every function starts with a colon (:) and is indented.- The statement return [expression] exits a function and returns an expression to the user. A return statement with no arguments is the same as return None.- (Optional) The first statement of a function can the documentation string of the function or docstring, writeen with apostrophes ' '.Below is an example of a function that takes three inputs, pressure, volume, and temperature, and returns the number of moles. ###Code # Creating a function is easy in Python def nmoles(P,V,T): return (P*V/(u.R*T.to(u.kelvin))).to_base_units() ###Output _____no_output_____ ###Markdown Try using the new function to solve the same problem as above. You can reuse the variables. You can use the new function call inside the print statement. ###Code print('There are '+ut.sig(nmoles(P,V,T),3)+' of methane in the container.') ###Output There are 1.62 mol of methane in the container. ###Markdown --- Density FunctionWe will create and graph functions describing density and viscosity of water as a function of temperature. We will use the [scipy 1D interpolate function](https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.htmld-interpolation-interp1d) to create smooth interpolation between the known data points to generate a smooth function. `density_water`, defined in [`physchem`](https://github.com/AguaClara/AguaClara_design/blob/master/physchem.py), is a function that returns a fluid's density at a given temperature. It has one input parameter, temperature (in Celsius). ###Code # Here is an example of how you could define the function yourself if you chose. # Below are corresponding arrays of temperature and water density with appropriate units attached. # The 1d interpolation function will use a cubic spline. Tarray = u.Quantity([0,5,10,20,30,40,50,60,70,80,90,100],u.degC) rhoarray = [999.9,1000,999.7,998.2,995.7,992.2,988.1,983.2,977.8,971.8,965.3,958.4]*u.kg/u.m**3 def DensityWater(T): rhointerpolated=interpolate.interp1d(Tarray, rhoarray, kind='cubic') rho=rhointerpolated(T.to(u.degC)) return rho*u.kg/u.m**3 # You can get the density of water for any temperature using this function call. print('The density of water at '+ut.sig(u.Quantity(20,u.degC),3) +' is '+ut.sig(DensityWater(u.Quantity(20,u.degC)),4)+'.') ###Output The density of water at 20.0 celsius is 998.2 kg/m³. ###Markdown --- Pipe DatabaseThe [`pipedatabase`](https://github.com/AguaClara/AguaClara_design/blob/master/pipedatabase.py) file in the `AguaClara_design` has many useful functions concerning pipe sizing. It provides functions that calculate actual pipe inner and outer diameters given the nominal diameter of the pipe. Note that nominal diameter just means the diameter that it is called (hence the discriptor "nominal") and thus a 1 inch nominal diameter pipe might not have any dimensions that are actually 1 inch! ###Code # The OD function in pipedatabase returns the outer diameter of a pipe given the nominal diameter, ND. 
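###Markdown As a quick sanity check on the cubic interpolation (this cell is an addition, using the `DensityWater` function and the `rhoarray` table defined above), the interpolated density at an intermediate temperature should fall between the neighboring tabulated values. ###Code
# Added check: the density at 25 degC should lie between the tabulated values at 20 and 30 degC.
T_check = u.Quantity(25, u.degC)
rho_check = DensityWater(T_check)
print('The interpolated density at '+ut.sig(T_check,3)+' is '+ut.sig(rho_check,4)+'.')
print('The tabulated densities at 20 and 30 degrees Celsius are '+ut.sig(rhoarray[3],4)+' and '+ut.sig(rhoarray[4],4)+'.')
###Output _____no_output_____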
pipe.OD(6*u.inch) ###Output _____no_output_____ ###Markdown The ND_SDR_available function returns the nominal diameter of a pipe that has an inner diameter equal to or greater than the requested inner diameter [SDR, standard diameter ratio](http://www.engineeringtoolbox.com/sdr-standard-dimension-ratio-d_318.html). Below we find the smallest available pipe that has an inner diameter of at least 7 cm ###Code IDmin = 7 * u.cm SDR = 26 ND_my_pipe = pipe.ND_SDR_available(IDmin,SDR) ND_my_pipe ###Output _____no_output_____ ###Markdown The actual inner diameter of this pipe is ###Code ID_my_pipe = pipe.ID_SDR(ND_my_pipe,SDR) print(ut.sig(ID_my_pipe.to(u.cm),2)) ###Output 8.2 cm ###Markdown We can display the available nominal pipe sizes that are in our database. ###Code pipe.ND_all_available() ###Output _____no_output_____ ###Markdown --- PhyschemThe 'AguaClara_design' [physchem](https://github.com/AguaClara/AguaClara_design/blob/master/physchem.py) has many useful fluids functions including Reynolds number, head loss equation, orifice equations, viscosity etc. --- Viscosity Functions ###Code #Define the temperature of the fluid so that we can calculate the kinematic viscosity temperature = u.Quantity(20,u.degC) #Calculate the kinematic viscosity using the function in physchem which we access using "pc" nu=pc.viscosity_kinematic(temperature) print('The kinematic viscosity of water at '+ut.sig(temperature,2)+' is '+ut.sig(nu,3)) ###Output The kinematic viscosity of water at 20 celsius is 1.00e-6 m²/s ###Markdown --- Our First Graph!We will use [matplotlib](https://matplotlib.org/) to create a graph of water density as a function of temperature. [Here](https://matplotlib.org/users/pyplot_tutorial.html) is a quick tutorial on graphing. ###Code # Create a list of 100 numbers between 0 and 100 and then assign the units of degC to the array. # This array will be the x values of the graph. GraphTarray = u.Quantity(np.arange(100),u.degC) #Note the use of the .to method below to display the results in a particular set of units. plt.plot(GraphTarray, pc.viscosity_kinematic(GraphTarray).to(u.mm**2/u.s), '-') plt.xlabel('Temperature (degrees Celcius)') plt.ylabel('Viscosity (mm^2/s)') plt.show() ###Output _____no_output_____ ###Markdown Reynolds numberWe will use the physchem functions to calculate the Reynolds number for flow through a pipe. ###Code Q = 5*u.L/u.s D = pipe.ID_SDR(4*u.inch,26) Reynolds_pipe = pc.re_pipe(Q,D,nu) Reynolds_pipe ###Output _____no_output_____ ###Markdown Now use the sig function to display calulated values to a user specified number of significant figures. ###Code print('The Reynolds number is '+ut.sig(pc.re_pipe(Q,D,nu),3)) ###Output The Reynolds number is 6.01e+4 ###Markdown Here is a table of a few of the equations describing pipe flow and their physchem function counterparts. 
Assorted Fluids Functions| Equation Name | Equation | Physchem function ||---------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------:|| Reynolds Number | $Re= \frac{{4Q}}{{\pi D\nu }}$ | `re_pipe(FlowRate, Diam, Nu)` || Swamee-Jain Turbulent Friction factor | ${\rm{f}} = \frac{{0.25}}{{{{\left[ {\log \left( {\frac{\varepsilon }{{3.7D}} + \frac{{5.74}}{{{{{\mathop{\rm Re}\nolimits} }^{0.9}}}}} \right)} \right]}^2}}}$ | `fric(FlowRate, Diam, Nu, PipeRough)` || Laminar Friction factor | ${\rm{f}} = \frac{64}{Re}$ | || Hagen Pousille laminar flow head loss | ${h_{\rm{f}}} = \frac{{32\mu LV}}{{\rho g{D^2}}} = \frac{{128\mu LQ}}{{\rho g\pi {D^4}}}$ | || Darcy Weisbach head loss | ${h_{\rm{f}}} = {\rm{f}}\frac{8}{{g{\pi ^2}}}\frac{{L{Q^2}}}{{{D^5}}}$ | `headloss_fric(FlowRate, Diam, Length, Nu, PipeRough)` || Swamee-Jain equation for diameter | $0.66\left ( \varepsilon ^{1.25}\left ( \frac{LQ^{2}}{gh_{f}} \right )^{4.75}+\nu Q^{9.4}\left ( \frac{L}{gh_{f}} \right )^{5.2} \right )^{0.04}$| `diam_swamee(FlowRate, HeadLossFric, Length, Nu, PipeRough)` | ###Code # create a plot that shows both the original data values (plotted as points) # and the smooth curve that shows the density function. # Note that Tarray and rhoarray were defined much earlier in this tutorial. #We will plot the data points using circles 'o' and the smooth function using a line '-'. plt.plot(Tarray, rhoarray, 'o', GraphTarray, (DensityWater(GraphTarray)), '-') # For an x axis log scale use plt.semilogx(Tarray, rhoarray, 'o', xnew, f2(xnew), '-') # For a y axis log scale use plt.semilogy(Tarray, rhoarray, 'o', xnew, f2(xnew), '-') # For both axis log scale use plt.loglog(Tarray, rhoarray, 'o', xnew, f2(xnew), '-') #Below we create the legend and axis labels plt.legend(['data', 'cubic'], loc='best') plt.xlabel('Temperature (degrees Celcius)', fontsize=20) plt.ylabel('Density (kg/m^3)', fontsize=20) #Now we show the graph and we are done! plt.show() ###Output _____no_output_____ ###Markdown Design Challenge 1, learning Python, Jupyter, and some AguaClara Design Functions 1) Calculate the minimum inner diameter of a PVC pipe that can carry a flow of at least 10 L/s for the town of Ojojona. The population is 4000 people. The water source is a dam with a surface elevation of 1500 m. The pipeline connects the reservoir to the discharge into a distribution tank at an elevation of 1440 m. The pipeline length is 2.5 km. The pipeline is made with PVC pipe with an SDR (standard diameter ratio) of 26.The pipeline inlet at the dam is a square edge with a minor loss coefficient (${K_e}$) of 0.5. The discharge at the top of the distribution tank results in a loss of all of the kinetic energy and thus the exit minor loss coefficient is 1. See the minor loss equation below.${h_e} = {K_e}\frac{{{V^2}}}{{2g}}$The water temperature ranges from 10 to 30 Celsius. The roughness of a PVC pipe is approximately 0.1 mm. Use the fluids functions to calculate the minimum inner pipe diameter to carry this flow from the dam to the distribution tank.Report the following * critical design temperature* kinematic viscosity (maximum viscosity will occur at the lowest temperature)* the minimum inner pipe diameter (in mm). Use complete sentences to report the results and use 2 significant digits (use the sig function). 
###Code SDR = 26 Q = 10 * u.L/u.s delta_elevation = 1500 * u.m - 1440 * u.m L_pipe = 2.5 * u.km # am using 0 minor losses because pipe diameter function fails if not zero. K_minor = 1.5 # The maximum viscosity will occur at the lowest temperature. T_crit = u.Quantity(10,u.degC) nu = pc.viscosity_kinematic(T_crit) e = 0.1 * u.mm pipeline_ID_min = pc.diam_pipe(Q,delta_elevation,L_pipe,nu,e,K_minor) print('The critical water temperature for this design is '+ str(T_crit)+'.') print('The kinematic viscosity of water is '+ut.sig(nu,2)+'.') print('The minimum pipe inner diameter is '+ ut.sig(pipeline_ID_min.to(u.mm),2)+'.') ###Output The critical water temperature for this design is 10 degC. The kinematic viscosity of water is 1.3e-6 m²/s. The minimum pipe inner diameter is 97 mm. ###Markdown 2)Find the nominal diameter of a PVC pipe that is SDR 26. SDR means standard diameter ratio. The thickness of the pipe wall is 1/SDR of the outside diameter. The pipedatabase file has a useful function that returns nominal diameter given SDR and inner diameter. ###Code pipeline_ND = pipe.ND_SDR_available(pipeline_ID_min,SDR) print('The nominal diameter of the pipeline is '+ut.sig(pipeline_ND,2)+' ('+ut.sig(pipeline_ND.to(u.mm),2)+').') ###Output The nominal diameter of the pipeline is 4.0 in (1.0e+2 mm). ###Markdown 3) What is the actual inner diameter of this pipe in mm? Compare this with the [reported inner diameter for SDR-26 pipe](http://www.cresline.com/pdf/cresline-northwest/pvcpressupipeline_Re/CNWPVC-26.pdf) to see if our pipe database is reporting the correct value. ###Code pipeline_ID = pipe.ID_SDR(pipeline_ND,SDR) cresline_ID = 4.154*u.inch print('The inner diameter of the pipe is '+ut.sig(pipeline_ID.to(u.mm),3)+'.') print('Cresline reports the inner diameter is '+ut.sig(cresline_ID.to(u.mm),3)+'.') ###Output The inner diameter of the pipe is 106 mm. Cresline reports the inner diameter is 106 mm. ###Markdown 4) What is the maximum flow rate that can be carried by this pipe at the coldest design temperature?Display the flow rate in L/s using the .to method. ###Code pipeline_Q_max = pc.flow_pipe(pipeline_ID,delta_elevation,L_pipe,nu,e,K_minor) print('The maximum flow rate at '+ut.sig(T_crit,2)+' is '+ut.sig(pipeline_Q_max.to(u.L/u.s),2)+'.') ###Output The maximum flow rate at 10 celsius is 13 l/s. ###Markdown 5) What is the Reynolds number and friction factor for this maximum flow? Assign these values to variable names so you can plot them later on the Moody diagram. ###Code pipeline_Re = pc.re_pipe(pipeline_Q_max,pipeline_ID,nu) fPipe = pc.fric(pipeline_Q_max,pipeline_ID,nu,e) print('The Reynolds number and friction factor for the pipeline flow are '+ut.sig(pipeline_Re,2)+' and '+ut.sig(fPipe,2)+' respectively.') ###Output The Reynolds number and friction factor for the pipeline flow are 1.2e+5 and 0.022 respectively. ###Markdown 6) Check to see if the fluids functions are internally consistent by calculating the head loss given the flow rate that you calculated and comparing that head loss with the elevation difference. Display enough significant digits to see the difference in the two values. Note that the Moody diagram has an accuracy of about ±5% for smooth pipes and ±10% for rough pipes [Moody, 1944](http://user.engineering.uiowa.edu/~me_160/lecture_notes/MoodyLFpaper1944.pdf). 
###Code HLCheck = pc.headloss(pipeline_Q_max,pipeline_ID,L_pipe,nu,e,K_minor) print('The head loss is '+ut.sig(HLCheck,3)+' and that is close to the elevation difference of '+ut.sig(delta_elevation,3)+'.') ###Output The head loss is 60.5 m and that is close to the elevation difference of 60.0 m. ###Markdown 7) How much more water (both volumetric and mass rate) will flow through the pipe at the maximum water temperature of 30 C? Take into account both the change in viscosity (changes the flow rate) and the change in density (changes the mass rate). Report the flow rates in L/s. ###Code Tmax = u.Quantity(30,u.degC) nuhot = pc.viscosity_kinematic(Tmax) pipeline_Q_maxhot = pc.flow_pipe(pipeline_ID,delta_elevation,L_pipe,nuhot,e,K_minor) QDelta = pipeline_Q_maxhot-pipeline_Q_max MassFlowDelta = (pipeline_Q_maxhot*DensityWater(Tmax)-pipeline_Q_max*DensityWater(T_crit)).to_base_units() print('The increase in flow rate at '+ut.sig(Tmax,2)+' is '+ut.sig(QDelta.to(u.L/u.s),2)+'.') print('The increase in mass rate at '+ut.sig(Tmax,2)+' is '+ut.sig(MassFlowDelta,2)+'.') ###Output The increase in flow rate at 30 celsius is 0.24 l/s. The increase in mass rate at 30 celsius is 0.19 kg/s. ###Markdown 8)Why is the flow increase due to this temperature change so small given that viscosity actually changed significantly (see the calculation below)? ###Code print('The viscosity ratio for the two temperatures was '+ut.sig(pc.viscosity_kinematic(Tmax)/pc.viscosity_kinematic(T_crit),2)+'.') ###Output The viscosity ratio for the two temperatures was 0.62. ###Markdown The flow is turbulent and thus viscosity has little influence on the flow rate. 9)Suppose an AguaClara plant is designed to be built up the hill from the distribution tank. The transmission line will need to be lengthened by 30 m and the elevation of the inlet to the entrance tank will be 1450 m. The rerouting will also require the addition of 3 elbows with a minor loss coefficient of 0.3 each. What is the new maximum flow from the water source? ###Code delta_elevationnew = 1500*u.m - 1450*u.m L_pipenew = 2.5*u.km + 30*u.m Knew = 1.5+3*0.3 pipeline_Q_maxnew = pc.flow_pipe(pipeline_ID,delta_elevationnew,L_pipenew,nu,e,Knew) print('The new maximum flow rate at '+ut.sig(T_crit,2)+' is '+ut.sig(pipeline_Q_maxnew.to(u.L/u.s),2)+'.') ###Output The new maximum flow rate at 10 celsius is 12 l/s. ###Markdown 10)How much less water will flow through the transmission line after the line is rerouted? ###Code print('The reduction in flow is '+ut.sig((pipeline_Q_max-pipeline_Q_maxnew).to(u.L/u.s),2)+'.') ###Output The reduction in flow is 1.3 l/s. ###Markdown We noticed that many of you are having some difficulty with naming convention and syntax.Please refer to the following for Github [Standards Page] (https://github.com/AguaClara/aide_design/wiki/Standards) for naming standards. Additionally, here is a Github [Variable Naming Guide] (https://github.com/AguaClara/aide_design/wiki/Variable-Naming) that will be useful for creating variable names. 11)There exists a function within the physchem file called `pc.fric(FlowRate, Diam, Nu, PipeRough)` that returns the friction factor for both laminar and turbulent flow. 
In this problem, you will be creating a new function which you shall call `fofRe()` that takes the Reynolds number and the dimensionless pipe roughness (ε/D) as inputs.Recall that the format for defining a function is `def fofRe(input1, input2): f = buncha stuff return f`Since the equation for calculating the friction factor is different for laminar and turbulent flow (with the transition Reynolds number being defined within the physchem file), you will need to use an `if, else` statement for the two conditions. The two friction factor equations are given in the **Assorted Fluids Functions** table. ###Code #returns the friction factor for pipe flow for both laminar and turbulent flows def fofRe(Re,roughness): if Re >= pc.RE_TRANSITION_PIPE: f = 0.25/(math.log10(roughness/(3.7)+5.74/Re**0.9))**2 else: f = 64/Re return f ###Output _____no_output_____ ###Markdown 12) Need to update picture!Create a beautiful Moody diagram. Include axes labels and show a legend that clearly describes each plot. The result should look like the picture of the graph below.![](Moody.png) 12a)You will be creating a Moody diagram showing Reynolds number vs friction factor for multiple dimensionless pipe roughnesses. The first step to do this is to define the number of dimensionless pipe roughnesses you want to plot. We will plot 8 curves for the following values: 0, 0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1. We will plot an additional curve, which will be a straight line, for laminar flow, since it is not dependent on the pipe roughness value (see the Moody diagram above).* Create an array for the dimensionless pipe roughness values, using `np.array([])`.* Specify the amount of data points you want to plot for each curve. We will be using 50 points.Because the Moody diagram is a log-log plot, we need to ensure that all 50 points on the diagram we are creating are equally spaced in log-space. Use the `np.logspace(input1, input2, input3)` function to create an array for turbulent Reynolds numbers and an array for laminar Reynolds numbers.* `input1` is the exponent for the lower bound of the range. For example, if you want your lower bound to be 1000, your input should be `math.log10(1000)` which is equal to 3.* `input2` is the exponent for the upper bound of the range. Format this input as you have formatted `input1`.* `input3` is the number of data points you are using for each curve.Note: The range for array that yo**12a) Deliverables*** Array of dimentionless pipe roughnesses. Call this array `eGraph`.* Variable defining the amount of points on each pipe roughness curve* Two arrays created using `np.logspace` which for turbulent and laminar Reynolds numbers, which will be the x-axis values for the Moody diagramNote: The bounds for the laminar Reynolds numbers array should span between 670 and the predefined transition number used in Problem 11. The bounds for the turbulent Reynolds numbers array should span between 3,500 and 100,000,000. These ranges are chosen to make the curves fit well within the graph and to intentionally omit data in the transition range between laminar and turbulent flows. ###Code eGraph = np.array([0,0.0001,0.0003,0.001,0.003,0.01,0.03,0.1]) Gpoint = 50 ReG = np.logspace(math.log10(3500), 8, Gpoint) ReLam = np.logspace(math.log10(670),math.log10(pc.RE_TRANSITION_PIPE),Gpoint) ###Output _____no_output_____ ###Markdown 12b)Now you will create the y-axis values for turbulent flow (based on dimensionless pipe roughness) and laminar flow (not based on dimensionless pipe roughness). 
To do this, you will use the `fofRe()` function you wrote in Problem 11 to find the friction factors. Begin by creating an empty 2-dimensional array that will be populated by the turbulent-flow friction factors for each dimensionless pipe roughness. Use `np.zeros(number of rows, number of columns)`. The number of rows should be the number of dimensionless pipe roughness values (`len(eGraph)`), while the number of columns should be the number of data points per curve as defined above.Populating this array with friction factor values will require two `for` loops, one to iterate through rows and one to iterate through columns. Recall that `for` loop syntax is as follows:`example = np.zeros((40, 30))for i in range(0, 40): for j in range(0, 30): example[i,j] = function(buncha[i],stuff[j])` where `buncha` and `stuff` are arrays.You will repeat this process to find the friction factors for laminar flow. The only difference between the turbulent and laminar friction flow arrays will be that the laminar array will only have one dimension since it does not affected by the dimensionless pipe roughness. Start by creating an empty 1-dimensional array and then use a single `for` loop. **12b) Deliverables*** One 1-D array containing friction factor values for laminar flow.* One 2-D array containing friction factor values for each dimensionless pipe roughness for turbulent flow. ###Code fLam = np.zeros(Gpoint) for i in range(0,Gpoint): fLam[i] = fofRe(ReLam[i],0) fG = np.zeros((len(eGraph),Gpoint)) for i in range(0,len(eGraph)): for j in range(0, Gpoint): fG[i,j]=fofRe(ReG[j],eGraph[i]) ###Another way (probably better) is to make only 1 for loop like the following example #fLam_opt = np.zeros((1,Gpoint)) #fG_opt = np.zeros((len(eGraph),Gpoint)) #for i in range(0, Gpoint): # fLam_opt[0,i] = fofRe(1,ReLam[i]) # for j in range(0, len(eGraph)): # fG_opt[j,i] = fofRe(eGraph[j],ReG[i]) ###Output _____no_output_____ ###Markdown 12c)Now, we are ready to start making the Moody diagram!!!!!1!!! The plot formatting is included for you in the cell below. You will add to this cell the code that will actually plot the arrays you brought into existence in 12a) and 12b) with a legend. For the sake of your own sanity, please only add code where specified.* First, plot your arrays. See the plots in the tutorial above for the syntax. Recall that each dimensionless pipe roughness is a separate row within the 2-D array you created. To plot these roughnesses as separate curves, use a `for` loop to iterate through the rows of your array. To plot all columns in a particular row, use the `[1,:]` call on an array, where 1 is the row you are calling.* Plotting the laminar flow curve does not require a `for` loop because it is a 1-D array. * Use a linewidth of 4 for all curves.* Now plot the data point you calculated in DC Python Tutorial 1, conveniently located a few problems above this one. Use the Reynolds number and friction factor obtained in Problem 5. Because this is a single point, it should be plotted as a circle instead of a line. Because a line composed of a single point does not exist.* You will need to make a legend for the graph using `leg = plt.legend(stringarray, loc = 'best')` * The first input, `stringarray`, must be an array composed of strings instead of numbers. The array you created which contains the dimensionless pipe roughness values (`eGraph`) can be converted into a string array for your legend (`eGraph.astype('str'))`. 
You will need to add 'Laminar' and 'Pipeline' as strings to the new ` eGraph ` string array. Perhaps you will find `np.append(basestring, [('string1','string2')])` to be useful ;) ###Code #Set the size of the figure to make it big! plt.figure('ax',(10,8)) #-------------------------------------------------------------------------------------- #---------------------WRITE CODE BELOW------------------------------------------------- #-------------------------------------------------------------------------------------- #You should begin by plotting your data. for i in range(len(fG)): plt.plot( ReG,fG[i,:], '-', linewidth = 4) #fig = plt.figure() plt.plot(ReLam,fLam,'k-',linewidth = 4) plt.plot(pipeline_Re,fPipe,'ko') #Your legend should go below. If you try to make your legend before you make you plot your data, the legend will not show and you will be dazed and confused. mylegend = np.append(eGraph.astype('str'),[('laminar', 'Pipeline')]) leg = plt.legend(mylegend, loc='best') #-------------------------------------------------------------------------------------- #---------------------WRITE CODE ABOVE------------------------------------------------- #-------------------------------------------------------------------------------------- #LOOK AT ALL THIS COOL CODE! plt.yscale('log') plt.xscale('log') plt.grid(b=True, which='major', color='k', linestyle='-', linewidth=0.5) #Set the grayscale of the minor gridlines. Note that 1 is white and 0 is black. plt.grid(b=True, which='minor', color='0.5', linestyle='-', linewidth=0.5) #The next 2 lines of code are used to set the transparency of the legend to 1. #The default legend setting was transparent and was cluttered. plt.xlabel('Reynolds number', fontsize=30) plt.ylabel('Friction factor', fontsize=30) plt.show() ###Output _____no_output_____ ###Markdown 13) Researchers in the AguaClara laboratory collected the following head loss data through a 1/8" diameter tube that was 2 m long using water at 22°C. The data is in a comma separated data (.csv) file named ['Head_loss_vs_Flow_dosing_tube_data.csv'](https://github.com/AguaClara/CEE4540_DC/blob/master/Head_loss_vs_Flow_dosing_tube_data.csv). Use the pandas read csv function (`pd.read_csv('filename.csv')`) to read the data file. Display the data so you can see how it is formatted. ###Code head_loss_data = pd.read_csv('Head_loss_vs_Flow_dosing_tube_data.csv') head_loss_data ###Output _____no_output_____ ###Markdown 14)Using the data table from Problem 13, assign the head loss **and flow rate** data to separate 1-D arrays. Attach the correct units. `np.array` can extract the data by simply inputting the text string of the column header. Here is example code to create the first array:`HL_data=np.array(head_loss_data['Head loss (m)'])*u.m` ###Code HL_data = np.array(head_loss_data['Head loss (m)'])*u.m Q_data = np.array(head_loss_data['Flow rate (mL/min)'])*u.mL/u.min ###Output _____no_output_____ ###Markdown 15)Calculate and report the maximum and minimum Reynolds number for this data set. Use the tube and temperature parameters specified in Problem 13. Use the `min` and `max` functions which take arrays as their inputs. ###Code D_tube=1/8*u.inch L_tube=2*u.m T_data=u.Quantity(22,u.degC) nu_data=pc.viscosity_kinematic(T_data) Re_data_max=max(pc.re_pipe(Q_data,D_tube,nu_data)) Re_data_min=min(pc.re_pipe(Q_data,D_tube,nu_data)) print('The Reynolds number varied from '+ut.sig(Re_data_min,2)+' to '+ut.sig(Re_data_max,2)+'.') ###Output The Reynolds number varied from 2.9e+2 to 1.0e+3. 
###Markdown 16)You will now create a graph of headloss vs flow for the tube mentioned in the previous problems. This graph will have two sets of data: the real data contained within the csv file and some theoretical data. The theoretical data is what we would expect the headloss through the tube to be in an ideal world for any given flow. When calculating the theoretical headloss, assume that minor losses are negligible. Plot the data from the csv file as individual data points and the theoretical headloss as a continuous curve. Make the y-axis have units of cm and the x-axis have units of mL/s. A few hints.* To find the theoretical headloss, you will first need to create an array of different flow values. While you could use the values in the csv file that you extracted in Problem 14, we would instead like you to create an array of 50 equally-spaced flow values. These values shall be between the minimum and maximum flows in the csv file.* You can use the `np.linspace(input1, input2, input3)` function to create this set of equally-spaced flows. Inputs for `np.linspace` are the same as they were for `np.logspace`, which was used in Problem 12a). Linspace does not work with units; you will need to remove the units (using `.magnitude`) from the inputs to `np.logspace` and then reattach the correct units of flow after creating the array.* The `pc.headloss_fric` function can handle arrays as inputs, so that makes it easy to produce the theoretical headloss array once you have finished your equally-spaced flow array.* When using `plt.plot`, make sure to convert the flow and headloss data to the desired units. ###Code Qpoint=50 QGraph= np.linspace((min(Q_data).to(u.mL/u.s)).magnitude, (max(Q_data).to(u.mL/u.s)).magnitude, Qpoint)*u.mL/u.s plt.plot(Q_data.to(u.mL/u.s),HL_data.to(u.cm),'o') plt.plot(QGraph.to(u.mL/u.s),pc.headloss_fric(QGraph,D_tube,L_tube,nu_data,0*u.mm).to(u.cm), '-',linewidth=2) leg=plt.legend(['data','theoretical major losses'], loc='best') #leg.get_frame().set_alpha(1) plt.xlabel('Flow rate (mL/s)') plt.ylabel('Head loss (cm)') plt.show() ###Output _____no_output_____ ###Markdown The theoretical model doesn't fit the data very well. We assumed that major losses dominated. But that assumption was wrong. So let's try a more sophisticated approach where we fit minor losses to the data. Below we demonstrate the use of the [scipy curve_fit method](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.htmlscipy.optimize.curve_fit) to fit the minor loss coefficient given this data set. In this example, `Q_data` is the flow rate array for the csv file from problem 13. You should re-name this variable below to whatever you titled this variable. ###Code from scipy.optimize import curve_fit # Define a new function that calculates head loss given the flow rate # and the parameter that we want to use curve fitting to estimate # Define the other known values inside the function because we won't be passing those parameters to the function. def HL_curvefit(FlowRate, KMinor): # The tubing is smooth AND pipe roughness isn't significant for laminar flow. PipeRough = 0*u.mm L_tube = 2*u.m T_data = u.Quantity(22,u.degC) nu_data = pc.viscosity_kinematic(T_data) D_tube = 1/8*u.inch # pass all of the parameters to the head loss function and then strip the units so # the curve fitting function can handle the data. 
return (pc.headloss(FlowRate, D_tube, L_tube, nu_data, PipeRough, KMinor)).magnitude # The curve fit function will need bounds on the unknown parameters to find a real solution. # The bounds for K minor are 0 and 20. # The curve fit function returns a list that includes the optimal parameters and the covariance. popt, pcov = curve_fit(HL_curvefit, Q_data, HL_data, bounds=[[0.],[20]]) K_minor_fit = popt[0] # Plot the raw data plt.plot(Q_data.to(u.mL/u.s), HL_data.to(u.cm), 'o', label='data') # Plot the curve fit equation. plt.plot(Q_data.to(u.mL/u.s), ((HL_curvefit(Q_data, *popt))*u.m).to(u.cm), 'r-', label='fit') plt.xlabel('Flow rate (mL/s)') plt.ylabel('Head loss (cm)') plt.legend() plt.show() #Calculate the root mean square error to estimate the goodness of fit of the model to the data RMSE_Kminor = (np.sqrt(np.var(np.subtract((HL_curvefit(Q_data, *popt)),HL_data.magnitude)))*u.m).to(u.cm) print('The root mean square error for the model fit when adjusting the minor loss coefficient was '+ut.sig(RMSE_Kminor,2)) ###Output _____no_output_____ ###Markdown 17)Repeat the analysis from the previous cell, but this time assume that the minor loss coefficient is zero and that diameter is the unknown parameter. The bounds specified in the line beginning with `popt, pcov` should be changed from the previous question (which had bounds from 0 to 20) to the new bounds of 0.001 to 0.01. Hint: Don't think too much about this, you only need to change the name of the defined function (perhaps "`HL_curvefit2`"?) and adjust its inputs/values. ###Code # Define a new function that calculates head loss given the flow rate # and the parameter that we want to use curve fitting to estimate # Define the other known values inside the function because we won't be passing those parameters to the function. def HL_curvefit2(FlowRate, D_tube): # The tubing is smooth AND pipe roughness isn't significant for laminar flow. PipeRough = 0*u.mm L_tube=2*u.m T_data=u.Quantity(22,u.degC) nu_data=pc.viscosity_kinematic(T_data) KMinor=0 # pass all of the parameters to the head loss function and then strip the units so # the curve fitting function can handle the data. return (pc.headloss(FlowRate, D_tube, L_tube, nu_data, PipeRough, KMinor)).magnitude # The curve fit function will need bounds on the two unknown parameters to find a real solution. # The bounds for the diameter are 1 to 10 mm and must be given in meters. # The curve fit function returns a list that includes the optimal parameters and the covariance. popt, pcov = curve_fit(HL_curvefit2, Q_data, HL_data, bounds=[[0.001],[0.01]]) D_tube_fit = popt[0]*u.m # Plot the raw data plt.plot(Q_data.to(u.mL/u.s), HL_data.to(u.cm), 'o', label='data') # Plot the curve fit equation. plt.plot(Q_data.to(u.mL/u.s), ((HL_curvefit2(Q_data, *popt))*u.m).to(u.cm), 'r-', label='fit') plt.xlabel('Flow rate (mL/s)') plt.ylabel('Head loss (cm)') plt.legend() plt.show() #Calculate the root mean square error to estimate the goodness of fit of the model to the data RMSE_Diameter = (np.sqrt(np.var(np.subtract((HL_curvefit2(Q_data, *popt)),HL_data.magnitude)))*u.m).to(u.cm) print('The root mean square error for the model fit when adjusting the diameter was '+ut.sig(RMSE_Diameter,2)) ###Output _____no_output_____ ###Markdown 18Changes to which of the two parameters, minor loss coefficient or tube diameter, results in a better fit to the data? The root mean square error was smaller when the minor loss coefficient was varied to fit the data. 
19What did you find most difficult about learning to use Python? Create a brief example as an extension to this tutorial to help students learn the topic that you found most difficult. Final PointerIt is good practice to select Restart & Run All from the Kernel menu after completing an assignment to make sure that everything in your notebook works correctly and that you haven't deleted an essential line of code! ###Code #I had trouble with the for loop and filling an array and plotting that array #Problem: use a for loop to make an array where each point is a sum of the indices #ex array[1,1]=2, etc array=np.zeros((2,2)) for i in range(2): #if you want to start from 0, you don't need to include 0, but if you for j in range(2): #wanted a range going from 1 to 7 you would put range(1,7) array[i,j]=i+1+j+1 #python starts to count from 1 print(array) print('I found creating arrays to be particularly difficult. For example, creating an empty 2-D array with 30 rows and 2 columns shown below') #Example of how to create a 2-D array of zeros example = np.zeros((30,2)) example print('The most difficult part is the units convertion. My suggestion would be to list all the values with SI units at first place, so that we can know what units we have. Then once we encounter any English unit, we will use the *u.units to call the original units.') #Getting the units to match up with what you want. #It was hard to keep track of where to change the units and what the units of these arrays and variables are. #Example #A plant can process 1,300 L/day. #The users want to know how many seconds it will take to fill their water jugs which are cylinders of radius 5 cm and height of 10 inches. #Answer plantFL = 1300 *u.l /u.day jugVol = (np.pi*(5*u.cm)**2)*10*u.inch time = (jugVol/plantFL).to(u.s) print('The time to fill the jug is ' + ut.sig(time,3) + '.') # In order to create a evenly spaced array use the function linspace, but take out the units and bring them back in. #flow_data = np.linspace(min(FR_data).to(u.mL/u.s).magnitude,max(FR_data).to(u.mL/u.s).magnitude,50)*u.mL/u.s #indentation for i in range(1,5): for j in range(1,5): for k in range(1,5): if( i != k ) and (i != j) and (j != k): print(ut.sig(i,j)) # When learning how to use python, I found the if /else if/ else statement could be confused # though we did not use them a lot in DC1/2, they still worth mention # little example attached x = 2 if x < 0: print('x < 0') # executes only if x < 0 elif x == 0: print('x is zero') # if it's not true that x < 0, check if x == 0 elif x == 1: print('x == 1') # if it's not true that x < 0 and x != 0, check if x == 1 else: print('non of the above is true') #The unit conversions from a given set of data to a graphical representation (such as mL/min given data to mL/s graphical data) #Make a smooth plot that represents the theoretical range of head loss values for a pipe with a minimum flow rate of 60 L/min #and a maximum of 110 L/min. The tube is 5 m long and has a diameter of a 1/4". It's chilly where this pipe is hanging out, #so the temperature is 5 degrees Celsius. It's a pretty ~cool~ pipe to say the least. The plot should be mL/s of flow vs m of head loss. 
numpoints = 50
Diam = .25*u.inch
Length = 5*u.m
Temp = u.Quantity(5,u.degC)
Nu = pc.viscosity_kinematic(Temp)
PipeRough = .1*u.mm
MaxFlow = 110*u.L/u.min
MinFlow = 60*u.L/u.min
FlowRate = (np.linspace(MinFlow.magnitude, MaxFlow.magnitude, numpoints))*(u.L/u.min)
Headloss = pc.headloss_fric(FlowRate, Diam, Length, Nu, PipeRough)

plt.plot(FlowRate.to(u.mL/u.s), Headloss.to(u.m), '-')
plt.xlabel('Flow Rate (mL/s)', fontsize=20)
plt.ylabel('Head Loss (m)', fontsize=20)
plt.show()
# it's not a very good pipe design, but you get the point
###Output
_____no_output_____
###Markdown
DC Python Tutorial 2: 10-19

Hint: If you are typing a function name and want to know what the options are for completing what you are typing, just hit the tab key for a menu of options.

Hint: If you want to see the source code associated with a function, you can do the following:

`import inspect`
`inspect.getsource(foo)`

Where "foo" is the function that you'd like to learn about.

Each cell in Jupyter is either code or markdown (select in the drop down menu above). You can learn about the markdown language from the help menu. Markdown allows you to create very nicely formatted text, including LaTeX equations.

$$c = \sqrt{a^2 + b^2}$$

Each cell is either in edit mode (select this cell and press the enter key) or in display mode (press shift enter). Shift Enter also executes the code in the cell. When you open a Jupyter notebook it is convenient to go to the cell menu and select Run All so that all results are calculated and displayed.

The Python kernel remembers all definitions (functions and variables) as they are defined based on execution of the cells in the Jupyter notebook. Thus if you fail to execute a cell, the parameters defined in that cell won't be available. Similarly, if you define a parameter and then delete that line of code, that parameter remains defined until you go to the Kernel menu and select Restart. It is good practice to select Restart & Run All from the Kernel menu after completing an assignment to make sure that everything in your notebook works correctly and that you haven't deleted an essential line of code!
###Code
# Here we import packages that we will need for this notebook. You can find out about these packages in the Help menu.

# Although math is "built in", it needs to be imported so its functions can be used.
import math
from scipy import constants, interpolate

# See the numpy cheat sheet: https://www.dataquest.io/blog/images/cheat-sheets/numpy-cheat-sheet.pdf
# The numpy import is needed because it is renamed here as np.
import numpy as np

# Pandas is used to import data from spreadsheets
import pandas as pd

import matplotlib.pyplot as plt

# sys and os give us access to operating system directory paths and to sys paths.
import sys, os

# If you place your GitHub directory in your documents folder and
# clone both the design challenge notebook and the AguaClara_design repo, then this code should all work.
# If you have your GitHub directory at a different location on your computer,
# then you will need to adjust the directory path below.
# Add the path to your GitHub directory so that python can find files in other contained folders.
path1 = '~'
path2 = 'Documents'
path3 = 'GitHub'
path4 = os.path.join(path1, path2, path3)
myGitHubdir = os.path.expanduser(path4)
if myGitHubdir not in sys.path:
    sys.path.append(myGitHubdir)

# Add imports for AguaClara code that will be needed.
# physchem has functions related to hydraulics, fractal flocs, flocculation, sedimentation, etc.
from aide_design import physchem as pc

# pipedatabase has functions related to pipe diameters
from aide_design import pipedatabase as pipe

# units allows us to include units in all of our calculations
from aide_design.units import unit_registry as u

from aide_design import utility as ut
###Output
_____no_output_____
###Markdown
---
Resources in getting started with Python
Here are some basic [Python functions](http://docs.python.org/3/library/functions.html) that might be helpful to look through.

Transitioning From Matlab To Python

**Indentation** - When writing functions or using statements, Python recognizes code blocks from the way they are indented. A code block is a group of statements that, together, perform a task. A block begins with a header that is followed by one or more statements that are indented with respect to the header. The indentation indicates to the Python interpreter, and to programmers that are reading the code, that the indented statements and the preceding header form a code block.

**Suppressing Statements** - Unlike Matlab, you do not need a semicolon to suppress the output of a statement in Python.

**Indexing** - Matlab starts at index 1 whereas Python starts at index 0.

**Functions** - In Matlab, functions are written by invoking the keyword "function", the return parameter(s), the equal sign, the function name and the input parameters. A function is terminated with "end".

`function y = average(x)
if ~isvector(x)
    error('Input must be a vector')
end
y = sum(x)/length(x);
end`

In Python, functions are written by using the keyword "def", followed by the function name and then the input parameters in parentheses, followed by a colon. The function body is indented, and the result is handed back with "return".

`def average(x):
    if np.ndim(x) != 1:
        raise ValueError("Input must be a vector")
    return sum(x)/len(x)`

**Statements** - for loops and if statements do not require the keyword "end" in Python. The loop header in Matlab varies from that of Python. Check the examples below:

Matlab code
`s = 10;
H = zeros(s);
for c = 1:s
    for r = 1:s
        H(r,c) = 1/(r+c-1);
    end
end`

Python code
`s = 10
H = np.zeros((s,s))
for r in range(s):
    for c in range(s):
        H[r,c] = 1/(r+c+1)`

(The `+1` replaces Matlab's `-1` because Python indices start at 0 rather than 1.)

**Printing** - Use "print()" in Python instead of "disp" in Matlab.

**Helpful Documents**

[Numpy for Matlab Users](https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html)
[Stepping from Matlab to Python](http://stsievert.com/blog/2015/09/01/matlab-to-python/)
[Python for Matlab Users, UC Boulder](http://researchcomputing.github.io/meetup_fall_2014/pdfs/fall2014_meetup13_python_matlab.pdf)

---
Arrays and Lists
Python has no native array type. Instead, it has lists, which are defined using [ ]:
###Code
a = [0,1,2,3]
###Output
_____no_output_____
###Markdown
Python has a number of helpful commands to modify lists, and you can read more about them [here](https://docs.python.org/2/tutorial/datastructures.html). In order to use lists as arrays, numpy (numpy provides tools for working with **num**bers in **py**thon) provides an array data type that is created with np.array( ).
###Code
a_array = np.array(a)
a_array
###Output
_____no_output_____
###Markdown
Pint, which adds unit capabilities to Python (see the section on units below), is compatible with NumPy, so it is possible to add units to arrays and perform certain calculations with these arrays. We recommend using NumPy arrays rather than lists because NumPy arrays can handle units.
Additionally, use functions from NumPy rather than functions from the math package when possible, because the math package does not yet handle units. Units are added by multiplying the number by the unit raised to the appropriate power. The pint unit registry was imported above as "u" and thus the units for milliliters are defined as u.mL.
###Code
a_array_units = a_array * u.m
a_array_units
###Output
_____no_output_____
###Markdown
In order to make a 2D array, you can use the same [NumPy array command](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html).
###Code
b = np.array([[0,1,2],[3,4,5],[6,7,8]])*u.mL
b
###Output
_____no_output_____
###Markdown
Indexing is done by row and then by column. To call all of the elements in a row or column, use a colon. As you can see in the following example, indexing in Python begins at zero, so `b[:,1]` is calling all rows in the second column.
###Code
b[:,1]
###Output
_____no_output_____
###Markdown
If you want a specific range of values in an array, you can also use a colon to slice the array, with the number before the colon being the index of the first element, and the number after the colon being **one greater** than the index of the last element.
###Code
b[1:3,0]
###Output
_____no_output_____
###Markdown
For lists and 1D arrays, the `len()` command can be used to determine the length. Note that the length is NOT equal to the index of the last element because the indexes are zero based. The len function can be used with lists and arrays. For multiple dimension arrays, the `len()` command returns the length of the first dimension.
###Code
len(a)
len(b)
###Output
_____no_output_____
###Markdown
For arrays of any higher dimension, `numpy.size()` can be used to find the total number of elements and `numpy.shape()` can be used to learn the dimensions of the array.
###Code
np.size(b)
np.shape(b)
###Output
_____no_output_____
###Markdown
For a listing of the commands you can use to manipulate numpy arrays, refer to the [scipy documentation](https://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html). Sometimes it is helpful to have an array of elements that range from zero to a specified number. This can be useful, for example, in creating a graph. To create an array of this type, use [numpy.arange](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html).
###Code
crange = np.arange(10)
crange

cdetailedrange = np.arange(5,10,0.1)
cdetailedrange
###Output
_____no_output_____
###Markdown
---
Units
Units are essential to engineering calculations. Units provide a quick check on all of our calculations to help reduce the number of errors in our analysis. Getting the right dimensions back from a calculation doesn't prove that the answer is correct, but getting the wrong dimensions back does prove that the answer is wrong! Unit errors from incorrect conversions are common when using apps that don't calculate with units. Engineering design work should always include units in the calculations. We use the [pint package](https://pint.readthedocs.io/) to add unit capabilities to our calculations in Python. We have imported the `pint.UnitRegistry` as 'u' and thus all of pint's units can be used by placing a 'u.' in front of the unit name. Meters are `u.m`, seconds are `u.s`, etc. Most units are simple values that can be used just like other terms in algebraic equations. The exception to this is units that have an offset.
For example, in the equation PV=nRT, temperature must be given with units that have a value of zero at absolute zero. We would like to be able to enter 20 degC into that equation and have it handle the units correctly. But you can't convert from degC to Kelvin by simply multiplying by a conversion factor, so for temperature the units have to be handled in a special way.

Temperatures require use of the u.Quantity function to enter the value and the units of temperature separated by a ',' rather than by a multiplication symbol. This is because it doesn't make sense to multiply by a temperature unit: temperatures (that aren't absolute temperatures) have both a slope and a nonzero intercept.

You can find [constants that are defined in pint](https://github.com/hgrecco/pint/blob/master/pint/constants_en.txt) at the github page for pint.

Below is a simple calculation illustrating the use of units to calculate the flow through a vertical pipe given a velocity and an inner diameter. We will illustrate how to calculate pipe diameters further ahead in the tutorial.
###Code
V_up = 1*u.mm/u.s
D_reactor = 1*u.inch
A_reactor = pc.area_circle(D_reactor)
Q_reactor = V_up*A_reactor
Q_reactor
###Output
_____no_output_____
###Markdown
The result isn't formatted very nicely. We can select the units we'd like to display by using the `.to` method.
###Code
Q_reactor.to(u.mL/u.s)
###Output
_____no_output_____
###Markdown
We can also force the display to be in the metric base units.
###Code
Q_reactor.to_base_units()
###Output
_____no_output_____
###Markdown
If you need to strip units from a quantity (for example, for calculations using functions that don't support units) you can use the `.magnitude` method. It is important that you force the quantity to be in the correct units before stripping the units (a short example at the end of this tutorial shows why).
###Code
Q_reactor.to(u.mL/u.s).magnitude
###Output
_____no_output_____
###Markdown
Significant digits
Python will happily display results with 17 digits of precision. We'd like to display a reasonable number of significant digits so that we don't get distracted with 14 digits of useless information. We created a [sig function in the AguaClara_design repository](https://github.com/AguaClara/AguaClara_design/blob/master/utility.py) that allows you to specify the number of significant digits to display. You can couple this with the print function to create a well formatted solution to a calculation. The sig function also displays the accompanying units. The sig function call is `ut.sig(value, sigfig)`.

Example problem and solution.
Calculate the number of moles of methane in a 20 L container at 15 psi above atmospheric pressure with a temperature of 30 C.
###Code
# First assign the values given in the problem to variables.
P = 15 * u.psi + 1 * u.atm
T = u.Quantity(30,u.degC)
V = 20 * u.L

# Use the equation PV=nRT and solve for n, the number of moles.
# The universal gas constant is available in pint.
nmolesmethane = (P*V/(u.R*T.to(u.kelvin))).to_base_units()
print('There are '+ut.sig(nmolesmethane,3)+' of methane in the container.')
nmolesmethane
###Output
There are 1.62 mol of methane in the container.
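###Markdown
As a closing example of the earlier warning about `.magnitude`: convert to the units you want before stripping them, because the bare number you get back is in whatever units the quantity happens to carry at that moment. This is a minimal sketch that reuses the `Q_reactor` flow rate defined above; the two printed numbers differ only because of the units attached before stripping.
###Code
# Stripping units without converting first gives a number in the current (base) units...
Q_wrong = Q_reactor.to_base_units().magnitude   # cubic meters per second
# ...while converting first gives the number you probably intended.
Q_right = Q_reactor.to(u.mL/u.s).magnitude      # milliliters per second
print(Q_wrong, Q_right)
###Output
_____no_output_____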
数据爬取/爬虫基础.ipynb
###Markdown
Basic workflow
* Inspect and analyze the target web page in the browser
* Fetch the data
* Parse the content
* Save the data
###Code
# Open the page in the browser and view the page source.
# In the Elements tab, the small arrow (inspector) locates where an element lives on the page.
# In the Network tab, the request header User-Agent tells the server what kind of browser is making the request.
# To simulate a user logging in, send the password and other login information with the request.

# Send a POST request, simulating a browser visit
import urllib.parse
import urllib.request   # needed for urlopen (and it makes urllib.error available)

data = bytes(urllib.parse.urlencode({"hello":"world"}), encoding="utf-8")
response = urllib.request.urlopen('http://httpbin.org/post', data=data)
print(response.read().decode('utf-8'))

# GET request
try:
    # timeout handling
    response = urllib.request.urlopen('http://httpbin.org/get', timeout=0.01)
    print(response.read().decode('utf-8'))
except urllib.error.URLError as e:
    print('time out!')

response = urllib.request.urlopen('http://www.baidu.com')
response.status          # 200 means OK; 418 means the site has identified us as a crawler
response.getheaders()    # get the response headers
response.getheader('Bdpagetype')

url = 'http://httpbin.org/post'   # https://www.douban.com
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.63"}
data = bytes(urllib.parse.urlencode({"hello":"world"}), encoding="utf-8")
req = urllib.request.Request(url=url, data=data, headers=headers, method='POST')
response = urllib.request.urlopen(req)
response.read().decode('utf-8')

url = 'https://www.douban.com'
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.63"}
# data = bytes(urllib.parse.urlencode({"hello":"world"}),encoding="utf-8")
req = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(req)
type(response.read().decode('utf-8'))

def askURL(url):
    head = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.63"}
    req = urllib.request.Request(url=url, headers=head)
    html = ""
    try:
        response = urllib.request.urlopen(req)
        html = response.read().decode('utf-8')
        print(html)
    except urllib.error.URLError as e:
        if hasattr(e, 'code'):
            print(e.code)
        if hasattr(e, 'reason'):
            print(e.reason)

url = 'https://www.douban.com'
askURL(url)
###Output
<!DOCTYPE HTML>
<html lang="zh-cmn-Hans" class="ua-windows ua-webkit">
<head>
<meta charset="UTF-8">
...
<title>豆瓣</title>
... (the rest of the returned douban.com homepage HTML is omitted here)
t="/j/misc/login_form";dui.Dialog({title:"登录",url:t,width:/device-mobile/i.test(document.documentElement.className)?.9*document.documentElement.offsetWidth:350,cache:!0,callback:function(t,e){e.node.addClass("dialog-login"),e.node.find("h2").css("display","none"),e.node.find(".hd h3").replaceWith(e.node.find(".bd h3")),e.node.find("form").css({border:"none",width:"auto",padding:"0"}),e.update()}}).open()})},Do(function(){function t(t,e){var o=["ref="+encodeURIComponent(location.pathname)];for(var n in e)e.hasOwnProperty(n)&&o.push(n+"="+e[n]);window._SPLITTEST&&o.push("splittest="+window._SPLITTEST),localStorage.setItem("report",(localStorage.getItem("report")||"")+"_moreurl_separator_"+o.join("&"))}!function(){"localStorage"in window||(window.localStorage=function(){var t=document;if(!t.documentElement.addBehavior)throw"don't support localstorage or userdata.";var e="_localstorage_ie",o=t.createElement("input");o.type="hidden";var n=function(n){return function(){t.body.appendChild(o),o.addBehavior("#default#userData");var i=new Date;i.setDate(i.getDate()+365),o.expires=i.toUTCString(),o.load(e);var a=n.apply(o,arguments);return t.body.removeChild(o),a}};return{getItem:n(function(t){return this.getAttribute(t)}),setItem:n(function(t,o){this.setAttribute(t,o),this.save(e)}),removeItem:n(function(t){this.removeAttribute(t),this.save(e)}),clear:n(function(){for(var t,o=this.XMLDocument.documentElement.attributes,n=0;t=o[n];n++)this.removeAttribute(t.name);this.save(e)})}}())}(),$(window).one("load",function(){var t=localStorage.getItem("report");if(t){t=t.split("_moreurl_separator_");var e=function(o){return""==o?void e(t.shift()):void $.get("undefined"==typeof _MOREURL_REQ?"/stat.html?"+o:_MOREURL_REQ+"?"+o,function(){return t.length?(e(t.shift()),void localStorage.setItem("report",t.join("_moreurl_separator_"))):void localStorage.removeItem("report")})};e(t.shift())}}),window.moreurl=t,$(document).click(function(e){var o=e.target,n=$(o).data("moreurl-dict");n&&t(o,n)}),$.ajax_withck=function(t){return"POST"==t.type&&(t.data=$.extend(t.data||{},{ck:get_cookie("ck")})),$.ajax(t)},$.postJSON_withck=function(t,e,o){return $.post_withck(t,e,o,"json")},$.post_withck=function(t,e,o,n){return $.isFunction(e)&&(n=o,o=e,e={}),$.ajax({type:"POST",url:t,data:$.extend(e,{ck:get_cookie("ck")}),success:o,dataType:n||"text"})},$("html").click(function(t){var e=$(t.target),o=e.attr("class");o&&$(o.match(/a_(\w+)/gi)).each($.proxy(function(e,o){var n=Douban[o.replace(/^a_/,"init_")];"function"==typeof n&&(t.preventDefault(),n.call(this,t))},e[0]))})}); Do.add('dialog', {path: 'https://img3.doubanio.com/f/shire/383a6e43f2108dc69e3ff2681bc4dc6c72a5ffb0/js/ui/dialog.js', type: 'js', requires: ['https://img3.doubanio.com/f/shire/8377b9498330a2e6f056d863987cc7a37eb4d486/css/ui/dialog.css']}); Do.global('https://img3.doubanio.com/f/sns/b5793c2d7c298173d57ecf7d96708b5615336def/js/sns/fp/base.js', 'dialog'); </script> <link rel="stylesheet" href="https://img3.doubanio.com/f/shire/929d7e5bfb15cd179ff6df68bbd3d7e501681909/css/core/_init_.css"> <link rel="stylesheet" href="https://img3.doubanio.com/f/sns/9d57748637cabc648a8ff1116bb0f2249560b6f8/css/sns/dist/anonymous_home/index.css"> <style type="text/css"> .rec_topics_name{ display: inline-block; margin-bottom: 6px; font-size: 14px; line-height: 1.3; color: #3377aa; } .rec_topics_subtitle{ display: block; margin-bottom: 15px; font-size: 13px; line-height: 1; color: #aaaaaa; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; } .rec_topics_label{ transform: 
translateY(-0.5px); display: inline-block; font-size: 13px; margin-left: 2px; } .rec_topics{ line-height: 1; margin-bottom: 15px; } .rec_topics:last-child{ margin-bottom: 0; } .rec_topics_label_ad{ color: #c9c9c9; -moz-transform: scale(0.91); -webkit-transform: scale(0.91); transform: scale(0.91); } html[class*=ua-ff] .rec_topics_subtitle{ line-height: 14px; } </style> </head> <body class=''> <div> <div id="anony-nav-banner" style="clear: both;overflow: hidden;background-color: #EDF4EC;margin-bottom: 30px;margin-top: -30px;"> <h1 id="douban-logo" style="height: 20px;width: 102px;margin: 7px 13px;background-size: contain;"> <a href="https://www.douban.com" style="height: 20px;">豆瓣</a> </h1> </div> <div id="anony-nav"> <div class="anony-nav-links"> <ul> <li> <a target="_blank" class="lnk-book" href="https://book.douban.com">豆瓣读书</a> </li> <li> <a target="_blank" class="lnk-movie" href="https://movie.douban.com">豆瓣电影</a> </li> <li> <a target="_blank" class="lnk-music" href="https://music.douban.com">豆瓣音乐</a> </li> <li> <a target="_blank" class="lnk-events" href="https://www.douban.com/location/">豆瓣同城</a> </li> <li> <a target="_blank" class="lnk-group" href="https://www.douban.com/group/">豆瓣小组</a> </li> <li> <a target="_blank" class="lnk-read" href="https://read.douban.com">豆瓣阅读</a> </li> <li> <a target="_blank" class="lnk-fm" href="https://douban.fm">豆瓣FM</a> </li> <li> <a target="_blank" class="lnk-shijian" href="https://time.douban.com/?dt_time_source=douban-web_anonymous_index_top_nav">豆瓣时间</a> </li> <li> <a target="_blank" class="lnk-market" href="https://market.douban.com?utm_campaign=anonymous_top_nav&utm_source=douban&utm_medium=pc_web">豆瓣豆品</a> </li> </ul> </div> <div class="site-name" title="豆瓣网" style="display: inline-block; line-height: 24px; height: 24px; margin-top: 4px;margin-right: 24px;width: 73px;background-image: url(https://img3.doubanio.com/f/sns/714b8751a533ef592bea5cd4603dbb9e713ded61/pics/sns/sitename.png);background-size: contain; background-repeat: no-repeat;text-indent: -999em;"> 豆瓣网 </div> <div class="anony-srh"> <form action="https://www.douban.com/search" method="get"> <span class="inp"><input type="text" maxlength="60" size="12" placeholder="书籍、电影、音乐、小组、小站、成员" name="q" autocomplete="off"></span> <span class="bn"><input type="submit" value="搜索"></span> </form> </div> </div> </div> <div id="anony-reg-new" style="background-image: url(https://img9.doubanio.com/view/puppy_image/raw/public/1771365ca98ig9er706.jpg)"> <div class="wrapper"> <div class="login"> <iframe style="height: 300px; width: 300px;" frameborder='0' src="//accounts.douban.com/passport/login_popup?login_source=anony"></iframe> </div> <div class="app"> <p class="app-title">豆瓣<span>7.0</span></p> <p class="app-slogan"></p> <a href="https://www.douban.com/doubanapp/app?channel=nimingye" class="lnk-app">下载豆瓣 App</a> <div class="app-qr"> <a href="javascript: void 0;" class="lnk-qr" id="expand-qr"><img src="https://img3.doubanio.com/f/sns/0c708de69ce692883c1310053c5748c538938cb0/pics/sns/anony_home/icon_qrcode_green.png" width="28" height="28" /></a> <div class="app-qr-expand"> <img src="https://img3.doubanio.com/f/sns/1cad523e614ec4ecb6bf91b054436bb79098a958/pics/sns/anony_home/doubanapp_qrcode.png" width="160" height="160" /> <p>iOS / Android 扫码直接下载</p> </div> </div> </div> </div> <script> Do(function() { var app_qr = $('.app-qr'); app_qr.hover(function() { app_qr.addClass('open'); }, function() { app_qr.removeClass('open'); }); }); </script> </div> <div id="anony-sns" class="section"> <div 
class="wrapper"> <!-- douban ad begin --> <div id="dale_anonymous_homepage_top_for_crazy_ad"></div> <!-- douban ad end --> <div class="side"> <div style="margin:10px 0px;"> <div id="dale_anonymous_homepage_right_top"></div> </div> <div class="online"> <ul> <div class="mod"> <h2> 热门话题 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="/gallery/" target="_self">去话题广场</a> ) </span> </h2> <ul> <li class="rec_topics"> <a href="https://www.douban.com/gallery/topic/95602/?from=hot_topic_anony_sns" class="rec_topics_name">视频·城市里的劳动者</a> <span class="rec_topics_subtitle">63.3万次浏览</span> </li> <li class="rec_topics"> <a href="https://www.douban.com/gallery/topic/302144/?from=hot_topic_anony_sns" class="rec_topics_name">城里乡间不知名文物大赏</a> <span class="rec_topics_subtitle">新话题 · 1029次浏览</span> </li> <li class="rec_topics"> <a href="https://www.douban.com/gallery/topic/300360/?from=hot_topic_anony_sns" class="rec_topics_name">你的从“入门到放弃”经历</a> <span class="rec_topics_subtitle">50.2万次浏览</span> </li> <li class="rec_topics"> <a href="https://www.douban.com/gallery/topic/301554/?from=hot_topic_anony_sns" class="rec_topics_name">从书里走出来的美食</a> <span class="rec_topics_subtitle">2.5万次浏览</span> </li> <li class="rec_topics"> <a href="https://www.douban.com/gallery/topic/301025/?from=hot_topic_anony_sns" class="rec_topics_name">老师的有趣的头像或者朋友圈</a> <span class="rec_topics_subtitle">725.9万次浏览</span> </li> <li class="rec_topics"> <a href="https://www.douban.com/gallery/topic/300552/?from=hot_topic_anony_sns" class="rec_topics_name">打工人必备的职场小神器</a> <span class="rec_topics_subtitle">27.6万次浏览</span> </li> </ul> </div> <!-- douban ad begin --> <li> <div id="dale_homepage_online_activity_promo_1"></div> </li> <li> <div id="dale_anonymous_homepage_doublemint"></div> </li> <!-- douban ad end --> </ul> </div> </div> <div class="main"> <div class="mod"> <h2> 热点内容 ······ <span class="pl">&nbsp;( <a href="https://www.douban.com/explore/" target="_self">更多</a> ) </span> </h2> <div class="albums"> <ul> <li> <div class="pic"> <a href="https://www.douban.com/photos/album/1881620415/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img1.doubanio.com/view/photo/albumcover/public/p2630466917.jpg" alt="" /></a> </div> <a href="https://www.douban.com/photos/album/1881620415/">球状星团</a> <span class="num">24张照片</span> <li> <div class="pic"> <a href="https://www.douban.com/photos/album/1873547684/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/photo/albumcover/public/p2635480590.jpg" alt="" /></a> </div> <a href="https://www.douban.com/photos/album/1873547684/">画猫记</a> <span class="num">125张照片</span> <li> <div class="pic"> <a href="https://www.douban.com/photos/album/1880895311/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/photo/albumcover/public/p2628498670.jpg" alt="" /></a> </div> <a href="https://www.douban.com/photos/album/1880895311/">女权相关插画</a> <span class="num">23张照片</span> <li> <div class="pic"> <a href="https://www.douban.com/photos/album/1872596741/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/photo/albumcover/public/p2593347932.jpg" alt="" /></a> </div> <a 
href="https://www.douban.com/photos/album/1872596741/">电影摸鱼</a> <span class="num">23张照片</span> </ul> </div> <div class="notes"> <ul> <li class="first"> <div class="title"> <a href="https://www.douban.com/note/796047386/">「豆瓣小组春节大庙会」获奖结果</a> </div> <div class="author"> 组长万事屋的日记 </div> <p>在「过豆年·小组春节大庙会」期间,一百多个小组支起摊位加入了这场云端庙会,在综合了内容质量、热度,以及与活动主题契合程度等因素后,我们评选出了最终获奖结果(排名...</p> </li> <li><a href="https://www.douban.com/note/794437099/">2021.02.13 奇怪的梦境</a></li> <li><a href="https://www.douban.com/note/795780460/">家门口</a></li> <li><a href="https://www.douban.com/note/795926274/">可是,我开的是书店</a></li> <li><a href="https://www.douban.com/note/795876430/">记:我妈是怎么变成神医的</a></li> <li><a href="https://www.douban.com/note/797944126/">十爸十妈</a></li> <li><a href="https://www.douban.com/note/797318580/">毕业五年</a></li> <li><a href="https://www.douban.com/note/795899891/">我那上过私塾的爷爷</a></li> <li><a href="https://www.douban.com/note/795914841/">万事屋有约Vol.4|“如果我们可以不通过消费获得快乐”小组如何打破消费迷思?</a></li> <li><a href="https://www.douban.com/note/797996491/">温暖</a></li> </ul> </div> </div> </div> </div> </div> <div id="anony-time" class="section"> <div class="wrapper"> <div class="sidenav"> <h2 class="section-title"><a href="https://time.douban.com?dt_time_source=douban-web_anonymous">豆瓣时间</a></h2> </div> <div class="side"></div> <div class="main"> <h2> 热门专栏 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://time.douban.com?dt_time_source=douban-web_anonymous" target="_self">更多</a> ) </span> </h2> <ul class="time-list"> <li> <a class="cover time-audio new" href="https://m.douban.com/time/column/91?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img3.doubanio.com/dae/niffler/niffler/images/c4972ec0-e3bf-11e7-9d88-0242ac110021.jpg" alt="一个故事的诞生——22堂创意思维写作课"> </a> <a class="title" href="https://m.douban.com/time/column/91?dt_time_source=douban-web_anonymous" target="_blank">一个故事的诞生——22堂创意思维写作课</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new" href="https://m.douban.com/time/column/200?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img3.doubanio.com/dae/niffler/niffler/images/bd70700a-c5a5-11ea-8a59-f23c99dd97de.jpg" alt="如何读透一本书——12堂阅读写作训练课"> </a> <a class="title" href="https://m.douban.com/time/column/200?dt_time_source=douban-web_anonymous" target="_blank">如何读透一本书——12堂阅读写作训练课</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new " href="https://m.douban.com/time/column/215?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img2.doubanio.com/dae/niffler/niffler/images/8522e2c8-860c-11eb-8e39-4eb6eb021333.jpg" alt="人人听得懂用得上的法律课"> </a> <a class="title" href="https://m.douban.com/time/column/215?dt_time_source=douban-web_anonymous" target="_blank">人人听得懂用得上的法律课</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new " href="https://m.douban.com/time/column/208?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img9.doubanio.com/dae/niffler/niffler/images/45a8b4f8-12b0-11eb-9266-7a8156e98d14.jpg" alt="了不起的文明现场——一线考古队长带你探秘历史"> </a> <a class="title" href="https://m.douban.com/time/column/208?dt_time_source=douban-web_anonymous" target="_blank">了不起的文明现场——一线考古队长带你探秘历史</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new" href="https://m.douban.com/time/column/76?dt_time_source=douban-web_anonymous" target="_blank"> <img 
src="https://img3.doubanio.com/dae/niffler/niffler/images/f90e218a-b8aa-11e7-9cc5-0242ac110021.jpg" alt="52倍人生——戴锦华大师电影课"> </a> <a class="title" href="https://m.douban.com/time/column/76?dt_time_source=douban-web_anonymous" target="_blank">52倍人生——戴锦华大师电影课</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new" href="https://m.douban.com/time/column/213?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img1.doubanio.com/dae/niffler/niffler/images/f6189198-5ae8-11eb-865e-16d4e49064e7.png" alt="我们的女性400年——文学里的女性主义简史"> </a> <a class="title" href="https://m.douban.com/time/column/213?dt_time_source=douban-web_anonymous" target="_blank">我们的女性400年——文学里的女性主义简史</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new " href="https://m.douban.com/time/column/188?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img9.doubanio.com/dae/niffler/niffler/images/8e457bfe-5872-11ea-916d-4e50984eeed6.jpg" alt="用性别之尺丈量世界——18堂思想课解读女性问题"> </a> <a class="title" href="https://m.douban.com/time/column/188?dt_time_source=douban-web_anonymous" target="_blank">用性别之尺丈量世界——18堂思想课解读女性问题</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new" href="https://m.douban.com/time/column/83?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img9.doubanio.com/dae/niffler/niffler/images/6da43bc4-cdd7-11e7-bb25-0242ac110014.png" alt="哲学闪耀时——不一样的西方哲学史"> </a> <a class="title" href="https://m.douban.com/time/column/83?dt_time_source=douban-web_anonymous" target="_blank">哲学闪耀时——不一样的西方哲学史</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-audio new " href="https://m.douban.com/time/column/214?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img1.doubanio.com/dae/niffler/niffler/images/a6916252-6525-11eb-8ab0-4e8a1ad4ea89.png" alt="读梦——村上春树长篇小说指南"> </a> <a class="title" href="https://m.douban.com/time/column/214?dt_time_source=douban-web_anonymous" target="_blank">读梦——村上春树长篇小说指南</a> <span class="type">音频专栏</span> </li> <li> <a class="cover time-article " href="https://m.douban.com/time/column/45?dt_time_source=douban-web_anonymous" target="_blank"> <img src="https://img3.doubanio.com/dae/niffler/niffler/images/2621f522-50ad-11e7-a3d6-0242ac110041.png" alt="拍张好照片——跟七七学生活摄影"> </a> <a class="title" href="https://m.douban.com/time/column/45?dt_time_source=douban-web_anonymous" target="_blank">拍张好照片——跟七七学生活摄影</a> <span class="type">图文专栏</span> </li> </ul> </div> </div> </div> <div id="anony-movie" class="section"> <div class="wrapper"> <div class="sidenav"> <h2 class="section-title"><a href="https://movie.douban.com">电影</a></h2> <div class="side-links nav-anon"> <ul> <li><a href="https://movie.douban.com/nowplaying/">影讯&amp;购票</a></li> <li class="site-nav-bt"> <a href="https://movie.douban.com/explore">选电影</a> <img style="top: -5px; position: relative;" src="https://img3.doubanio.com/pics/new_menu.gif"/> </li> <li><a href="https://movie.douban.com/tv/">电视剧</a></li> <li><a href="https://movie.douban.com/chart">排行榜</a></li> <li><a href="https://movie.douban.com/tag/">分类</a></li> <li><a href="https://movie.douban.com/review/best/">影评</a></li> <li class="site-nav-bt"><a href="https://movie.douban.com/trailers">预告片</a></li> <li><a href="https://movie.douban.com/askmatrix/hot_questions/all">问答</a></li> </ul> </div> <div class="apps-list"> <ul> </ul> </div> </div> <div class="side"> <div class="mod"> <h2> 影片分类 
&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://movie.douban.com/tag/?view=type" target="_self">更多</a> ) </span> </h2> <div class="tags list"> <ul> <li><a href="https://movie.douban.com/tag/爱情">爱情</a></li> <li><a href="https://movie.douban.com/tag/剧情">剧情</a></li> <li><a href="https://movie.douban.com/tag/喜剧">喜剧</a></li> <li><a href="https://movie.douban.com/tag/悬疑">悬疑</a></li> <li><a href="https://movie.douban.com/tag/经典">经典</a></li> <li><a href="https://movie.douban.com/tag/动画">动画</a></li> <li><a href="https://movie.douban.com/tag/科幻">科幻</a></li> <li><a href="https://movie.douban.com/tag/犯罪">犯罪</a></li> <li><a href="https://movie.douban.com/tag/动作">动作</a></li> <li><a href="https://movie.douban.com/tag/青春">青春</a></li> <li><a href="https://movie.douban.com/tag/搞笑">搞笑</a></li> <li><a href="https://movie.douban.com/tag/文艺">文艺</a></li> <li><a href="https://movie.douban.com/tag/惊悚">惊悚</a></li> <li><a href="https://movie.douban.com/tag/励志">励志</a></li> <li><a href="https://movie.douban.com/tag/纪录片">纪录片</a></li> <li><a href="https://movie.douban.com/tag/黑色幽默">黑色幽默</a></li> <li><a href="https://movie.douban.com/tag/战争">战争</a></li> <li><a href="https://movie.douban.com/tag/恐怖">恐怖</a></li> <li><a href="https://movie.douban.com/tag/美国">美国</a></li> <li><a href="https://movie.douban.com/tag/日本">日本</a></li> <li><a href="https://movie.douban.com/tag/香港">香港</a></li> <li><a href="https://movie.douban.com/tag/中国大陆">中国大陆</a></li> <li><a href="https://movie.douban.com/tag/韩国">韩国</a></li> <li><a href="https://movie.douban.com/tag/中国">中国</a></li> <li><a href="https://movie.douban.com/tag/英国">英国</a></li> <li><a href="https://movie.douban.com/tag/法国">法国</a></li> <li><a href="https://movie.douban.com/tag/台湾">台湾</a></li> <li><a href="https://movie.douban.com/tag/印度">印度</a></li> </ul> </div> </div> <div class="mod"> <h2> 近期热门 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://movie.douban.com/chart" target="_self">更多</a> ) </span> </h2> <div class="list1 movie-charts"> <ol> <li> <a href="https://movie.douban.com/subject/30458949/">无依之地</a> </li> <li> <a href="https://movie.douban.com/subject/34902639/">同学麦娜丝</a> </li> <li> <a href="https://movie.douban.com/subject/35096844/">送你一朵小红花</a> </li> <li> <a href="https://movie.douban.com/subject/30454679/">打开心世界</a> </li> <li> <a href="https://movie.douban.com/subject/35068230/">吉祥如意</a> </li> <li> <a href="https://movie.douban.com/subject/34805873/">孤味</a> </li> <li> <a href="https://movie.douban.com/subject/34960094/">亲爱的同志</a> </li> <li> <a href="https://movie.douban.com/subject/34962956/">缉魂</a> </li> <li> <a href="https://movie.douban.com/subject/34852385/">间谍之妻</a> </li> <li> <a href="https://movie.douban.com/subject/10428501/">新·福音战士剧场版:终</a> </li> </ol> </div> </div> </div> <div class="main"> <h2> 正在热映 ······ <span class="pl">&nbsp;( <a href="https://movie.douban.com/showtimes/" target="_self">更多</a> ) </span> </h2> <div class="movie-list list"> <ul> <li> <div class="pic"> <a href="https://movie.douban.com/subject/26613692/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/photo/s_ratio_poster/public/p2634253484.jpg" alt="哥斯拉大战金刚" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/26613692/">哥斯拉大战金刚...</a> </div> <div class="rating"> <span class="allstar35"></span><i>6.7</i> </div> <a 
href="https://movie.douban.com/subject/26613692/cinema/" class="bn-link bn-ticket">选座购票</a> <li> <div class="pic"> <a href="https://movie.douban.com/subject/30466931/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/photo/s_ratio_poster/public/p2637021404.jpg" alt="波斯语课" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/30466931/">波斯语课</a> </div> <div class="rating"> <span class="allstar45"></span><i>8.2</i> </div> <a href="https://movie.douban.com/subject/30466931/cinema/" class="bn-link bn-ticket">选座购票</a> <li> <div class="pic"> <a href="https://movie.douban.com/subject/27098602/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/photo/s_ratio_poster/public/p2634744472.jpg" alt="日不落酒店" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/27098602/">日不落酒店</a> </div> <div class="rating"> <span class="allstar15"></span><i>2.8</i> </div> <a href="https://movie.douban.com/subject/27098602/cinema/" class="bn-link bn-ticket">选座购票</a> <li> <div class="pic"> <a href="https://movie.douban.com/subject/30271717/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/photo/s_ratio_poster/public/p2635094940.jpg" alt="21座桥" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/30271717/">21座桥</a> </div> <div class="rating"> <span class="allstar35"></span><i>6.6</i> </div> <a href="https://movie.douban.com/subject/30271717/cinema/" class="bn-link bn-ticket">选座购票</a> <li> <div class="pic"> <a href="https://movie.douban.com/subject/30437716/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/photo/s_ratio_poster/public/p2634228191.jpg" alt="又见奈良" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/30437716/">又见奈良</a> </div> <div class="rating"> <span class="allstar40"></span><i>7.6</i> </div> <a href="https://movie.douban.com/subject/30437716/cinema/" class="bn-link bn-ticket">选座购票</a> <li> <div class="pic"> <a href="https://movie.douban.com/subject/34804147/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/photo/s_ratio_poster/public/p2633531206.jpg" alt="寻龙传说" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/34804147/">寻龙传说</a> </div> <div class="rating"> <span class="allstar40"></span><i>7.2</i> </div> <a href="https://movie.douban.com/subject/34804147/cinema/" class="bn-link bn-ticket">选座购票</a> <li> <div class="pic"> <a href="https://movie.douban.com/subject/30135110/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img1.doubanio.com/view/photo/s_ratio_poster/public/p2634893087.jpg" alt="五尺天涯" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/30135110/">五尺天涯</a> </div> <div class="rating"> <span class="allstar40"></span><i>7.4</i> </div> <a href="https://movie.douban.com/subject/30135110/cinema/" class="bn-link bn-ticket">选座购票</a> <li> <div class="pic"> <a href="https://movie.douban.com/subject/34841067/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" 
data-origin="https://img1.doubanio.com/view/photo/s_ratio_poster/public/p2629056068.jpg" alt="你好,李焕英" /></a> </div> <div class="title"> <a href="https://movie.douban.com/subject/34841067/">你好,李焕英</a> </div> <div class="rating"> <span class="allstar40"></span><i>8.1</i> </div> <a href="https://movie.douban.com/subject/34841067/cinema/" class="bn-link bn-ticket">选座购票</a> </ul> </div> </div> </div> <div id="dale_anonymous_homepage_movie_bottom" class="extra"></div> </div> <div id="anony-group" class="section"> <div class="wrapper"> <div class="sidenav"> <h2 class="section-title"><a href="https://www.douban.com/group/">小组</a></h2> <div class="side-links nav-anon"> <ul> <li><a href="/group/explore">精选</a></li> <li><a href="/group/explore/culture">文化</a></li> <li><a href="/group/explore/travel">行摄</a></li> <li><a href="/group/explore/ent">娱乐</a></li> <li><a href="/group/explore/fashion">时尚</a></li> <li><a href="/group/explore/life">生活</a></li> <li><a href="/group/explore/tech">科技</a></li> </ul> </div> <div class="apps-list"> <ul> </ul> </div> </div> <div class="side"> <div class="mod"> <h2> 小组分类 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; </h2> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=兴趣">兴趣&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=旅行">旅行</a></li> <li><a href="https://www.douban.com/group/explore?tag=摄影">摄影</a></li> <li><a href="https://www.douban.com/group/explore?tag=影视">影视</a></li> <li><a href="https://www.douban.com/group/explore?tag=音乐">音乐</a></li> <li><a href="https://www.douban.com/group/explore?tag=文学">文学</a></li> <li><a href="https://www.douban.com/group/explore?tag=游戏">游戏</a></li> <li><a href="https://www.douban.com/group/explore?tag=动漫">动漫</a></li> <li><a href="https://www.douban.com/group/explore?tag=运动">运动</a></li> <li><a href="https://www.douban.com/group/explore?tag=戏曲">戏曲</a></li> <li><a href="https://www.douban.com/group/explore?tag=桌游">桌游</a></li> <li><a href="https://www.douban.com/group/explore?tag=怪癖">怪癖</a></li> </ul> </div> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=生活">生活&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=健康">健康</a></li> <li><a href="https://www.douban.com/group/explore?tag=美食">美食</a></li> <li><a href="https://www.douban.com/group/explore?tag=宠物">宠物</a></li> <li><a href="https://www.douban.com/group/explore?tag=美容">美容</a></li> <li><a href="https://www.douban.com/group/explore?tag=化妆">化妆</a></li> <li><a href="https://www.douban.com/group/explore?tag=护肤">护肤</a></li> <li><a href="https://www.douban.com/group/explore?tag=服饰">服饰</a></li> <li><a href="https://www.douban.com/group/explore?tag=公益">公益</a></li> <li><a href="https://www.douban.com/group/explore?tag=家庭">家庭</a></li> <li><a href="https://www.douban.com/group/explore?tag=育儿">育儿</a></li> <li><a href="https://www.douban.com/group/explore?tag=汽车">汽车</a></li> </ul> </div> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=购物">购物&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=淘宝">淘宝</a></li> <li><a href="https://www.douban.com/group/explore?tag=二手">二手</a></li> <li><a href="https://www.douban.com/group/explore?tag=团购">团购</a></li> <li><a href="https://www.douban.com/group/explore?tag=数码">数码</a></li> <li><a href="https://www.douban.com/group/explore?tag=品牌">品牌</a></li> <li><a 
href="https://www.douban.com/group/explore?tag=文具">文具</a></li> </ul> </div> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=社会">社会&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=求职">求职</a></li> <li><a href="https://www.douban.com/group/explore?tag=租房">租房</a></li> <li><a href="https://www.douban.com/group/explore?tag=励志">励志</a></li> <li><a href="https://www.douban.com/group/explore?tag=留学">留学</a></li> <li><a href="https://www.douban.com/group/explore?tag=出国">出国</a></li> <li><a href="https://www.douban.com/group/explore?tag=理财">理财</a></li> <li><a href="https://www.douban.com/group/explore?tag=传媒">传媒</a></li> <li><a href="https://www.douban.com/group/explore?tag=创业">创业</a></li> <li><a href="https://www.douban.com/group/explore?tag=考试">考试</a></li> </ul> </div> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=艺术">艺术&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=设计">设计</a></li> <li><a href="https://www.douban.com/group/explore?tag=手工">手工</a></li> <li><a href="https://www.douban.com/group/explore?tag=展览">展览</a></li> <li><a href="https://www.douban.com/group/explore?tag=曲艺">曲艺</a></li> <li><a href="https://www.douban.com/group/explore?tag=舞蹈">舞蹈</a></li> <li><a href="https://www.douban.com/group/explore?tag=雕塑">雕塑</a></li> <li><a href="https://www.douban.com/group/explore?tag=纹身">纹身</a></li> </ul> </div> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=学术">学术&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=人文">人文</a></li> <li><a href="https://www.douban.com/group/explore?tag=社科">社科</a></li> <li><a href="https://www.douban.com/group/explore?tag=自然">自然</a></li> <li><a href="https://www.douban.com/group/explore?tag=建筑">建筑</a></li> <li><a href="https://www.douban.com/group/explore?tag=国学">国学</a></li> <li><a href="https://www.douban.com/group/explore?tag=语言">语言</a></li> <li><a href="https://www.douban.com/group/explore?tag=宗教">宗教</a></li> <li><a href="https://www.douban.com/group/explore?tag=哲学">哲学</a></li> <li><a href="https://www.douban.com/group/explore?tag=软件">软件</a></li> <li><a href="https://www.douban.com/group/explore?tag=硬件">硬件</a></li> <li><a href="https://www.douban.com/group/explore?tag=互联网">互联网</a></li> </ul> </div> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=情感">情感&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=恋爱">恋爱</a></li> <li><a href="https://www.douban.com/group/explore?tag=心情">心情</a></li> <li><a href="https://www.douban.com/group/explore?tag=心理学">心理学</a></li> <li><a href="https://www.douban.com/group/explore?tag=星座">星座</a></li> <li><a href="https://www.douban.com/group/explore?tag=塔罗">塔罗</a></li> <li><a href="https://www.douban.com/group/explore?tag=LES">LES</a></li> <li><a href="https://www.douban.com/group/explore?tag=GAY">GAY</a></li> </ul> </div> <div class="cate group-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/group/explore?tag=闲聊">闲聊&raquo; </a></li> <li><a href="https://www.douban.com/group/explore?tag=吐槽">吐槽</a></li> <li><a href="https://www.douban.com/group/explore?tag=笑话">笑话</a></li> <li><a href="https://www.douban.com/group/explore?tag=直播">直播</a></li> <li><a href="https://www.douban.com/group/explore?tag=八卦">八卦</a></li> <li><a href="https://www.douban.com/group/explore?tag=发泄">发泄</a></li> </ul> </div> </div> </div> 
<div class="main"> <h2> 热门小组 ······ <span class="pl">&nbsp;( <a href="https://www.douban.com/group/explore" target="_self">更多</a> ) </span> </h2> <div class="group-list list"> <ul> </ul> </div> </div> </div> </div> <div id="anony-book" class="section"> <div class="wrapper"> <div class="sidenav"> <div class="mod"> <h2 class="section-title"><a href="https://book.douban.com">读书</a></h2> <div class="side-links nav-anon"> <ul> <li><a href="https://book.douban.com/tag/">分类浏览</a></li> <li> <a target="_blank" href="https://read.douban.com?dcn=entry&amp;dcs=book-nav&amp;dcm=douban">阅读</a> <img style="top: 4px; position: absolute;" src="https://img3.doubanio.com/pics/new_menu.gif"/> </li> <li><a href="https://book.douban.com/writers/">作者</a></li> <li><a href="https://book.douban.com/review/best/">书评</a></li> <li class="site-nav-prom"> <a class="lnk-buy" href="https://book.douban.com/cart"> <em>购书单</em> </a> </li> </ul> </div> </div> <div class="apps-list"> <ul> <li> <a href="https://read.douban.com/app/" class="lnk-icon"> <i class="app-icon app-icon-read"></i> </a> <a href="https://read.douban.com/app/">豆瓣阅读</a> </li> </ul> </div> </div> <div class="side"> <div class="mod"> <h2> 热门标签 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://book.douban.com/tag/?view=type" target="_self">更多</a> ) </span> </h2> <div class="book-cate-mod"> <div class="cate book-cate"> <ul> <li class="cate-label">[文学]</li> <li><a href="https://book.douban.com/tag/小说">小说</a></li> <li><a href="https://book.douban.com/tag/随笔">随笔</a></li> <li><a href="https://book.douban.com/tag/日本文学">日本文学</a></li> <li><a href="https://book.douban.com/tag/散文">散文</a></li> <li><a href="https://book.douban.com/tag/诗歌">诗歌</a></li> <li><a href="https://book.douban.com/tag/童话">童话</a></li> <li><a href="https://book.douban.com/tag/名著">名著</a></li> <li><a href="https://book.douban.com/tag/港台">港台</a></li> <li><a href="https://book.douban.com/tag/?view=type#文学">更多</a></li> </ul> </div> <div class="cate book-cate"> <ul> <li class="cate-label">[流行]</li> <li><a href="https://book.douban.com/tag/漫画">漫画</a></li> <li><a href="https://book.douban.com/tag/推理">推理</a></li> <li><a href="https://book.douban.com/tag/绘本">绘本</a></li> <li><a href="https://book.douban.com/tag/青春">青春</a></li> <li><a href="https://book.douban.com/tag/科幻">科幻</a></li> <li><a href="https://book.douban.com/tag/言情">言情</a></li> <li><a href="https://book.douban.com/tag/奇幻">奇幻</a></li> <li><a href="https://book.douban.com/tag/武侠">武侠</a></li> <li><a href="https://book.douban.com/tag/?view=type#流行">更多</a></li> </ul> </div> <div class="cate book-cate"> <ul> <li class="cate-label">[文化]</li> <li><a href="https://book.douban.com/tag/历史">历史</a></li> <li><a href="https://book.douban.com/tag/哲学">哲学</a></li> <li><a href="https://book.douban.com/tag/传记">传记</a></li> <li><a href="https://book.douban.com/tag/设计">设计</a></li> <li><a href="https://book.douban.com/tag/建筑">建筑</a></li> <li><a href="https://book.douban.com/tag/电影">电影</a></li> <li><a href="https://book.douban.com/tag/回忆录">回忆录</a></li> <li><a href="https://book.douban.com/tag/音乐">音乐</a></li> <li><a href="https://book.douban.com/tag/?view=type#文化">更多</a></li> </ul> </div> <div class="cate book-cate"> <ul> <li class="cate-label">[生活]</li> <li><a href="https://book.douban.com/tag/旅行">旅行</a></li> <li><a href="https://book.douban.com/tag/励志">励志</a></li> <li><a href="https://book.douban.com/tag/教育">教育</a></li> <li><a href="https://book.douban.com/tag/职场">职场</a></li> <li><a 
href="https://book.douban.com/tag/美食">美食</a></li> <li><a href="https://book.douban.com/tag/灵修">灵修</a></li> <li><a href="https://book.douban.com/tag/健康">健康</a></li> <li><a href="https://book.douban.com/tag/家居">家居</a></li> <li><a href="https://book.douban.com/tag/?view=type#生活">更多</a></li> </ul> </div> <div class="cate book-cate"> <ul> <li class="cate-label">[经管]</li> <li><a href="https://book.douban.com/tag/经济学">经济学</a></li> <li><a href="https://book.douban.com/tag/管理">管理</a></li> <li><a href="https://book.douban.com/tag/商业">商业</a></li> <li><a href="https://book.douban.com/tag/金融">金融</a></li> <li><a href="https://book.douban.com/tag/营销">营销</a></li> <li><a href="https://book.douban.com/tag/理财">理财</a></li> <li><a href="https://book.douban.com/tag/股票">股票</a></li> <li><a href="https://book.douban.com/tag/企业史">企业史</a></li> <li><a href="https://book.douban.com/tag/?view=type#经管">更多</a></li> </ul> </div> <div class="cate book-cate"> <ul> <li class="cate-label">[科技]</li> <li><a href="https://book.douban.com/tag/科普">科普</a></li> <li><a href="https://book.douban.com/tag/互联网">互联网</a></li> <li><a href="https://book.douban.com/tag/编程">编程</a></li> <li><a href="https://book.douban.com/tag/交互设计">交互设计</a></li> <li><a href="https://book.douban.com/tag/算法">算法</a></li> <li><a href="https://book.douban.com/tag/通信">通信</a></li> <li><a href="https://book.douban.com/tag/神经网络">神经网络</a></li> <li><a href="https://book.douban.com/tag/?view=type#科技">更多</a></li> </ul> </div> </div> </div> </div> <div class="main"> <div class="mod"> <h2> 新书速递 ······ <span class="pl">&nbsp;( <a href="https://book.douban.com/latest" target="_self">更多</a> ) </span> </h2> <div class="book-list list"> <ul> <li> <div class="pic"> <a href="https://book.douban.com/subject/35333414/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/subject/m/public/s33842971.jpg" alt="面孔" /></a> </div> <div class="title"> <a href="https://book.douban.com/subject/35333414/" >面孔</a> </div> <div class="author">东君</div> <a href="https://read.douban.com/reader/ebook/276365526/" target="_blank" class="bn-link">免费试读</a> <li> <div class="pic"> <a href="https://book.douban.com/subject/35309343/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img1.doubanio.com/view/subject/m/public/s33821298.jpg" alt="天使降临之塔" /></a> </div> <div class="title"> <a href="https://book.douban.com/subject/35309343/" >天使降临之塔</a> </div> <div class="author">里卡多</div> <a href="https://read.douban.com/reader/ebook/188524557/" target="_blank" class="bn-link">免费试读</a> <li> <div class="pic"> <a href="https://book.douban.com/subject/35315159/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/subject/m/public/s33844191.jpg" alt="永恒之王" /></a> </div> <div class="title"> <a href="https://book.douban.com/subject/35315159/" >永恒之王</a> </div> <div class="author">〔英〕T.H....</div> <a href="https://read.douban.com/reader/ebook/172970619/" target="_blank" class="bn-link">免费试读</a> <li> <div class="pic"> <a href="https://book.douban.com/subject/35316123/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/subject/m/public/s33842435.jpg" alt="烟与镜" /></a> </div> <div class="title"> <a href="https://book.douban.com/subject/35316123/" >烟与镜</a> </div> <div 
class="author">[英] 尼尔·...</div> <a href="https://read.douban.com/reader/ebook/199383264/" target="_blank" class="bn-link">免费试读</a> </ul> </div> </div> <div class="mod"> <h2> 原创数字作品 ······ <span class="pl">&nbsp;( <a href="https://read.douban.com" target="_self">更多</a> ) </span> </h2> <div class="book-list list"> <ul> <li> <div class="pic"> <a href="https://read.douban.com/ebook/158808525" target="_blank"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/ark_column_cover/large/public/35213160.jpg?v=1605494241" alt="狼人九月与他的神农荒野" /></a> </div> <div class="title"> <a href="https://read.douban.com/ebook/158808525" target="_blank">狼人九月与他的...</a> </div> <div class="author"></div> <div class="price"> 免费 </div> <a href="https://read.douban.com/reader/column/35213160/chapter/158881440/" target="_blank" class="bn-link">免费试读</a> <li> <div class="pic"> <a href="https://read.douban.com/ebook/169603201" target="_blank"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/ark_column_cover/large/public/36098006.jpg?v=1615733956" alt="心有灵犀的你" /></a> </div> <div class="title"> <a href="https://read.douban.com/ebook/169603201" target="_blank">心有灵犀的你</a> </div> <div class="author"></div> <div class="price"> 免费 </div> <a href="https://read.douban.com/reader/column/36098006/chapter/169603380/" target="_blank" class="bn-link">免费试读</a> <li> <div class="pic"> <a href="https://read.douban.com/ebook/163615226" target="_blank"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/ark_column_cover/large/public/35634861.jpg?v=1606704339" alt="余生下一课" /></a> </div> <div class="title"> <a href="https://read.douban.com/ebook/163615226" target="_blank">余生下一课</a> </div> <div class="author"></div> <div class="price"> 免费 </div> <a href="https://read.douban.com/reader/column/35634861/chapter/163615324/" target="_blank" class="bn-link">免费试读</a> <li> <div class="pic"> <a href="https://read.douban.com/ebook/164159971" target="_blank"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/ark_column_cover/large/public/35684015.jpg?v=1612692126" alt="三人行" /></a> </div> <div class="title"> <a href="https://read.douban.com/ebook/164159971" target="_blank">三人行</a> </div> <div class="author"></div> <div class="price"> 免费 </div> <a href="https://read.douban.com/reader/column/35684015/chapter/167288214/" target="_blank" class="bn-link">免费试读</a> </ul> </div> </div> </div> </div> </div> <div id="anony-music" class="section"> <div class="wrapper"> <div class="sidenav"> <h2 class="section-title"><a href="https://music.douban.com">音乐</a></h2> <div class="side-links nav-anon"> <ul> <li><a href="https://music.douban.com/artists/">音乐人</a></li> <li><a href="https://www.douban.com/wetware/">潮潮豆瓣音乐周</a></li> <li><a href="https://music.douban.com/artists/royalty/">金羊毛计划</a></li> <li><a href="https://music.douban.com/topics/">专题</a></li> <li><a href="https://music.douban.com/chart">排行榜</a></li> <li><a href="https://music.douban.com/tag/">分类浏览</a></li> <li><a href="https://music.douban.com/review/latest/">乐评</a></li> <li><a href="https://douban.fm/?from_=music_nav">豆瓣FM</a></li> <li><a href="https://douban.fm/explore/songlists/">歌单</a></li> <li><a 
href="https://artist.douban.com/abilu/">阿比鹿音乐奖</a></li> </ul> </div> <div class="apps-list"> <ul> <li> <a href="https://douban.fm/app?from_=shire_anonymous_home" class="lnk-icon"> <i class="app-icon app-icon-fm"></i> </a> <a href="https://douban.fm/app?from_=shire_anonymous_home">豆瓣FM</a> </li> <li> <a href="https://artist.douban.com/app" class="lnk-icon"> <i class="app-icon app-icon-artists"></i> </a> <a href="https://artist.douban.com/app">豆瓣音乐人</a> </li> </ul> </div> </div> <div class="side"> <div class="mod"> <h2> 本周流行音乐人 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://music.douban.com/artists/" target="_self">更多</a> ) </span> </h2> <div class="list1 artist-charts"> <ul> <li> <span class="num">1.</span> <div class="pic artist-song-play" data-sids="[&#34;763166&#34;, &#34;29448&#34;, &#34;31047&#34;]"> <a href="https://site.douban.com/woaiqizhen/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/site/small/public/a6504788187af24.jpg" width="48"></a> <i class="icon icon-bg"></i> <i class="icon icon-stat icon-play"></i> </div> <div class="content"> <a href="https://site.douban.com/woaiqizhen/">我爱陈绮贞</a> <div class="desc"> 流派: 流行 Pop <br>7525人关注 </div> </div> </li> <li> <span class="num">2.</span> <div class="pic artist-song-play" data-sids="[&#34;763169&#34;, &#34;752323&#34;, &#34;752322&#34;]"> <a href="https://site.douban.com/toneless/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/site/small/public/d5685d29e021ca3.jpg" width="48"></a> <i class="icon icon-bg"></i> <i class="icon icon-stat icon-play"></i> </div> <div class="content"> <a href="https://site.douban.com/toneless/">Toneless</a> <div class="desc"> 流派: 摇滚 Rock <br>81人关注 </div> </div> </li> <li> <span class="num">3.</span> <div class="pic artist-song-play" data-sids="[&#34;34426&#34;, &#34;6527&#34;, &#34;125739&#34;]"> <a href="https://site.douban.com/MiHS/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/site/small/public/7a1c1ebfa2d0f82.jpg" width="48"></a> <i class="icon icon-bg"></i> <i class="icon icon-stat icon-play"></i> </div> <div class="content"> <a href="https://site.douban.com/MiHS/">十方(MiHS)</a> <div class="desc"> 流派: 原声 Soundtrack <br>1526人关注 </div> </div> </li> <li> <span class="num">4.</span> <div class="pic artist-song-play" data-sids="[&#34;763155&#34;, &#34;623943&#34;, &#34;652134&#34;]"> <a href="https://site.douban.com/chenmoon/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img1.doubanio.com/view/site/small/public/3c4ab93e23a492a.jpg" width="48"></a> <i class="icon icon-bg"></i> <i class="icon icon-stat icon-play"></i> </div> <div class="content"> <a href="https://site.douban.com/chenmoon/">岳璇</a> <div class="desc"> 流派: 轻音乐 Easy Listening <br>3596人关注 </div> </div> </li> <li> <span class="num">5.</span> <div class="pic artist-song-play" data-sids="[&#34;763113&#34;, &#34;762691&#34;, &#34;762629&#34;]"> <a href="https://site.douban.com/three37seven/"><img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img1.doubanio.com/view/site/small/public/c12c83c63efb4a7.jpg" width="48"></a> <i class="icon 
icon-bg"></i> <i class="icon icon-stat icon-play"></i> </div> <div class="content"> <a href="https://site.douban.com/three37seven/">三七</a> <div class="desc"> 流派: 摇滚 Rock <br>210人关注 </div> </div> </li> </ul> </div> </div> </div> <div class="main"> <h2> 豆瓣新碟榜 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://music.douban.com#new1" target="_self">更多</a> ) </span> </h2> <div class="album-list list"> <ul> <li> <div class="pic"> <a href="https://music.douban.com/subject/35403580/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img9.doubanio.com/view/subject/s/public/s33854064.jpg" alt="爱 广播 飞机" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 1. <a href="https://music.douban.com/subject/35403580/">爱 广播 飞机</a> </div> <div class="artist"> <a href="">新裤子 newpants</a> </div> <div class="rating"> <span class="allstar30"></span><i>5.8</i> </div> </li> <li> <div class="pic"> <a href="https://music.douban.com/subject/35401308/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/subject/s/public/s33853712.jpg" alt="各自安好" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 2. <a href="https://music.douban.com/subject/35401308/">各自安好</a> </div> <div class="artist"> <a href="">刘若英 Ren&#39;e Liu</a> </div> <div class="rating"> <span class="allstar30"></span><i>6.1</i> </div> </li> <li> <div class="pic"> <a href="https://music.douban.com/subject/35373219/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img1.doubanio.com/view/subject/s/public/s33847028.jpg" alt="Kick Back" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 3. <a href="https://music.douban.com/subject/35373219/">Kick Back</a> </div> <div class="artist"> <a href="">威神V WayV</a> </div> <div class="rating"> <span class="allstar40"></span><i>7.4</i> </div> </li> <li> <div class="pic"> <a href="https://music.douban.com/subject/35403946/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/subject/s/public/s33854863.jpg" alt="儿戏" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 4. <a href="https://music.douban.com/subject/35403946/">儿戏</a> </div> <div class="artist"> <a href="">文雀乐队 Sparrow</a> </div> <div class="rating"> <span class="allstar40"></span><i>7.7</i> </div> </li> <li> <div class="pic"> <a href="https://music.douban.com/subject/35080887/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/subject/s/public/s33798763.jpg" alt="Chemtrails Over The Country Club" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 5. 
<a href="https://music.douban.com/subject/35080887/">Chemtrails Over The Country Club</a> </div> <div class="artist"> <a href="">Lana Del Rey</a> </div> <div class="rating"> <span class="allstar45"></span><i>8.6</i> </div> </li> <li> <div class="pic"> <a href="https://music.douban.com/subject/35379380/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img2.doubanio.com/view/subject/s/public/s33842373.jpg" alt="Justice" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 6. <a href="https://music.douban.com/subject/35379380/">Justice</a> </div> <div class="artist"> <a href="">Justin Bieber</a> </div> <div class="rating"> <span class="allstar35"></span><i>6.6</i> </div> </li> <li> <div class="pic"> <a href="https://music.douban.com/subject/35380931/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img1.doubanio.com/view/subject/s/public/s33859719.jpg" alt="5집 LILAC" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 7. <a href="https://music.douban.com/subject/35380931/">5집 LILAC</a> </div> <div class="artist"> <a href="">IU</a> </div> <div class="rating"> <span class="allstar35"></span><i>6.4</i> </div> </li> <li> <div class="pic"> <a href="https://music.douban.com/subject/35373220/"> <img src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" data-origin="https://img3.doubanio.com/view/subject/s/public/s33853660.jpg" alt="10집 The Renaissance" style="width: 80px; max-height: 120px;"/> </a> </div> <div class="title"> 8. <a href="https://music.douban.com/subject/35373220/">10집 The Renaissance</a> </div> <div class="artist"> <a href="">SUPER JUNIOR</a> </div> <div class="rating"> <span class="allstar45"></span><i>8.2</i> </div> </li> </ul> </div> <h2> 热门歌单 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://music.douban.com/programmes/" target="_self">更多</a> ) </span> </h2> <div class="programme-list list"> <ul> <li> <div class="pic cover"><img width=80 src="https://img3.doubanio.com/img/songlist/large/42638716-1.png"><a href="https://music.douban.com/programme/42638716" target="_blank"><i></i></a></div> <div class="title">周传雄|忘不掉的回忆与歌声</div> </li> <li> <div class="pic cover"><img width=80 src="https://img3.doubanio.com/img/songlist/large/38220186-1.png"><a href="https://music.douban.com/programme/38220186" target="_blank"><i></i></a></div> <div class="title">一张歌单带你走进Punk Rock</div> </li> <li> <div class="pic cover"><img width=80 src="https://img3.doubanio.com/img/songlist/large/295108-1.jpg"><a href="https://music.douban.com/programme/295108" target="_blank"><i></i></a></div> <div class="title">银钥妹子</div> </li> <li> <div class="pic cover"><img width=80 src="https://img3.doubanio.com/img/songlist/large/51462538-1.jpg"><a href="https://music.douban.com/programme/51462538" target="_blank"><i></i></a></div> <div class="title">摇</div> </li> <li> <div class="pic cover"><img width=80 src="https://img2.doubanio.com/img/songlist/large/48986473-3.jpg"><a href="https://music.douban.com/programme/48986473" target="_blank"><i></i></a></div> <div class="title">小酒馆的故事</div> </li> <li> <div class="pic cover"><img width=80 src="https://img3.doubanio.com/img/songlist/large/1319062-1.jpg"><a href="https://music.douban.com/programme/1319062" target="_blank"><i></i></a></div> <div class="title">ALL MY COVERS 
中岛美嘉</div> </li> </ul> </div> </div> </div> <div id="dale_anonymous_home_page_middle_2" class="extra"></div> </div> <div id="anony-market" class="section"> <div class="wrapper"> <div class="sidenav"> <h2 class="section-title"> <a href="https://market.douban.com?dcs=anonymous-home-sidenav&amp;dcm=douban"> 豆品 </a> </h2> </div> <div class="side"> <div class="mod"> <h2> 热门活动 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; </h2> <ul class="market-topics"> <li class="market-topic-item" > <a href="https://www.douban.com/gallery/topic/761/?index=2&amp;r=topic?dcm=douban&dcs=anonymous-home-topic" target="_blank"> <div class="market-topic-pic" style="background-image:url(https://img2.doubanio.com/img/files/file-1513305186-3.jpg)"> </div> </a> <p class="market-topic-footer"> <a href="https://www.douban.com/gallery/topic/761/?index=2&amp;r=topic?dcm=douban&dcs=anonymous-home-topic" target="_blank"> 我的豆瓣收藏夹里有什么 </a> </p> </li> </ul> </div> <div class="mod"> <h2> 官方小组 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://www.douban.com/group/588598?dcs=anonymous-home-more-shops&amp;dcm=douban#hot-shop-wrapper" target="_self">更多</a> ) </span> </h2> <ul class="market-group-topics"> <li> <p class="market-group-topic-title"> <a href="https://www.douban.com/group/topic/195001981?dcm=douban&dcs=anonymous-home-group" target="_blank"> 用书籍标记时光,豆瓣读书周历2021上线! </a> </p> <p class="market-group-topic-footer"> <span class="market-group-topic-date"> 01-29 </span> <span class="market-group-topic-amount"> 4 人参与 </span> </p> </li> <li> <p class="market-group-topic-title"> <a href="https://www.douban.com/group/topic/193176516?dcm=douban&dcs=anonymous-home-group" target="_blank"> 豆瓣读书书签——“每一页里,都得着深厚的趣味” </a> </p> <p class="market-group-topic-footer"> <span class="market-group-topic-date"> 12-23 </span> <span class="market-group-topic-amount"> 9 人参与 </span> </p> </li> <li> <p class="market-group-topic-title"> <a href="https://www.douban.com/group/topic/192190771?dcm=douban&dcs=anonymous-home-group" target="_blank"> 「豆瓣书立」新品上线,给好书一个“支撑”。 </a> </p> <p class="market-group-topic-footer"> <span class="market-group-topic-date"> 09-03 </span> <span class="market-group-topic-amount"> 0 人参与 </span> </p> </li> </ul> </div> </div> <div class="main"> <h2> 热卖商品 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://market.douban.com?dcs=anonymous-home-more-skus&amp;dcm=douban" target="_self">更多</a> ) </span> </h2> <ul class="market-spu-list"> <li class="main-sku"> <a href="https://market.douban.com/campaign/readsticker?dcm=douban&dcs=anonymous-home-spu" target="_blank"> <div class="market-spu-pic" style="background-image: url(https://img3.doubanio.com/img/files/file-1613971874-0.jpg)"> </div> </a> <div class="market-spu-footer"> <span class="market-spu-price"> ¥9 </span> <a href="https://market.douban.com/campaign/readsticker?dcm=douban&dcs=anonymous-home-spu" target="_blank" class="market-spu-title"> 豆瓣读书便签贴纸 </a> </div> </li> <li class="main-sku"> <a href="https://market.douban.com/campaign/watch_colourful?dcm=douban&dcs=anonymous-home-spu" target="_blank"> <div class="market-spu-pic" style="background-image: url(https://img2.doubanio.com/img/files/file-1603878070-2.jpg)"> </div> </a> <div class="market-spu-footer"> <span class="market-spu-price"> ¥329 </span> <a href="https://market.douban.com/campaign/watch_colourful?dcm=douban&dcs=anonymous-home-spu" 
target="_blank" class="market-spu-title"> 豆瓣逆向手表—多色款 </a> </div> </li> <li class="main-sku"> <a href="https://market.douban.com/campaign/calendar2021?dcm=douban&dcs=anonymous-home-spu" target="_blank"> <div class="market-spu-pic" style="background-image: url(https://img3.doubanio.com/img/files/file-1600251064-1.jpg)"> </div> </a> <div class="market-spu-footer"> <span class="market-spu-price"> ¥99 </span> <a href="https://market.douban.com/campaign/calendar2021?dcm=douban&dcs=anonymous-home-spu" target="_blank" class="market-spu-title"> 豆瓣电影日历2021 </a> </div> </li> <li class="main-sku"> <a href="https://market.douban.com/campaign/weeklycalendar2021?dcm=douban&dcs=anonymous-home-spu" target="_blank"> <div class="market-spu-pic" style="background-image: url(https://img3.doubanio.com/img/files/file-1603878070-0.jpg)"> </div> </a> <div class="market-spu-footer"> <span class="market-spu-price"> ¥88 </span> <a href="https://market.douban.com/campaign/weeklycalendar2021?dcm=douban&dcs=anonymous-home-spu" target="_blank" class="market-spu-title"> 豆瓣读书周历2021 </a> </div> </li> </ul> </div> </div> </div> <div id="anony-events" class="section"> <div class="wrapper"> <div class="sidenav"> <h2 class="section-title"><a href="https://www.douban.com/location/">同城</a></h2> <div class="side-links nav-anon"> <ul> <li> <a href="https://www.douban.com/location/shenzhen/events">近期活动</a> </li> <li> <a href="https://www.douban.com/location/shenzhen/hosts">主办方</a> </li> <li> <a href="https://www.douban.com/location/drama/">舞台剧</a> </li> </ul> </div> <div class="apps-list"> <ul> </ul> </div> </div> <div class="side"> <div class="mod"> <h2> 活动标签 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; </h2> <div class="cate events-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/location/shenzhen/events/week-music">音乐&raquo;</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1001">小型现场</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1002">音乐会</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1003">演唱会</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1004">音乐节</a></li> </ul> </div> <div class="cate events-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/location/shenzhen/events/week-drama">戏剧&raquo;</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1101">话剧</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1102">音乐剧</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1103">舞剧</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1104">歌剧</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1105">戏曲</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1106">其他</a></li> </ul> </div> <div class="cate events-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/location/shenzhen/events/week-party">聚会&raquo;</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1401">生活</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1402">集市</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1403">摄影</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1404">外语</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1405">桌游</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1406">夜店</a></li> 
<li><a href="https://www.douban.com/location/shenzhen/events/week-1407">交友</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1408">美食</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1409">派对</a></li> </ul> </div> <div class="cate events-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/location/shenzhen/events/week-film">电影&raquo;</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1801">主题放映</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1802">影展</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-1803">影院活动</a></li> </ul> </div> <div class="cate events-cate"> <ul> <li class="cate-label"><a href="https://www.douban.com/location/shenzhen/events/week-all">其他&raquo;</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-salon">讲座</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-exhibition">展览</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-sports">运动</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-travel">旅行</a></li> <li><a href="https://www.douban.com/location/shenzhen/events/week-commonweal">公益</a></li> </ul> </div> </div> </div> <div class="main"> <h2> 深圳 · 本周热门活动 &nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot; <span class="pl">&nbsp;( <a href="https://www.douban.com/location/" target="_self">更多</a> ) </span> </h2> <div class="events-list list"> <ul> <li> <div class="pic"> <a href="https://www.douban.com/event/34047523/"> <img data-origin="https://img2.doubanio.com/pview/event_poster/small/public/b641f97551642f3.jpg" src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" width="70"> </a> </div> <div class="info"> <div class="title"> <a href="https://www.douban.com/event/34047523/" title="陈鸿宇2021「步履不停」巡演 深圳站"> 陈鸿宇2021「步履不停」巡演 深圳站 </a> </div> <div class="datetime"> 4月17日 周六 19:30 - 21:00 </div> <address title="腾讯WeSpace 广东省深圳市福田区笋岗西路深业上城CEEC(Loft-D4)4楼"> 腾讯WeSpace 广东省深圳市... </address> <div class="follow"> 43人关注 </div> </div> <li> <div class="pic"> <a href="https://www.douban.com/event/34082723/"> <img data-origin="https://img1.doubanio.com/pview/event_poster/small/public/1c7b20b8c2f265b.jpg" src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" width="70"> </a> </div> <div class="info"> <div class="title"> <a href="https://www.douban.com/event/34082723/" title="【深圳】《鹿先森乐队·敬这伟大的良宵》"> 【深圳】《鹿先森乐队·敬这伟大的良宵》 </a> </div> <div class="datetime"> 4月11日 周日 20:00 - 21:30 </div> <address title="深圳保利剧院 后海滨路后海滨路保利文化广场保利剧院"> 深圳保利剧院 后海滨路后海... </address> <div class="follow"> 8人关注 </div> </div> <li> <div class="pic"> <a href="https://www.douban.com/event/34096225/"> <img data-origin="https://img9.doubanio.com/pview/event_poster/small/public/b28c86db9248315.jpg" src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" width="70"> </a> </div> <div class="info"> <div class="title"> <a href="https://www.douban.com/event/34096225/" title="【深圳站】「直火帮/神秘嘉宾」《SFG Revenge Season》下半程 巡演"> 【深圳站】「直火帮/神秘嘉宾」《SFG Revenge Season... </a> </div> <div class="datetime"> 4月15日 周四 20:30 - 22:30 </div> <address title="深圳HOU Live 深圳市福田区滨河大道9289号KK ONE购物中心负一层B112a HOU LIVE(地铁9号线下沙站B出口)"> 深圳HOU Live 深圳市福田区... 
</address> <div class="follow"> 7人关注 </div> </div> <li> <div class="pic"> <a href="https://www.douban.com/event/34043441/"> <img data-origin="https://img2.doubanio.com/pview/event_poster/small/public/2a02c08b237483f.jpg" src="https://img3.doubanio.com/f/shire/a1fdee122b95748d81cee426d717c05b5174fe96/pics/blank.gif" width="70"> </a> </div> <div class="info"> <div class="title"> <a href="https://www.douban.com/event/34043441/" title="白百【浪里有我的珍珠】2021巡演"> 白百【浪里有我的珍珠】2021巡演 </a> </div> <div class="datetime"> 4月3日 周六 20:30 - 21:30 </div> <address title="深圳B10现场 深圳南山区华侨城创意文化园北区C2栋北侧(旧天堂书店斜对面)"> 深圳B10现场 深圳南山区华... </address> <div class="follow"> 1人关注 </div> </div> </ul> </div> </div> </div> </div> <div class="wrapper"> <div id="dale_anonymous_home_page_bottom" class="extra"></div> <div id="ft"> <span id="icp" class="fleft gray-link"> &copy; 2005-2021 douban.com, all rights reserved 北京豆网科技有限公司 <br> <a href="https://beian.miit.gov.cn/" target="_blank">京ICP证090015号</a> 京ICP备11027288号 <a href="https://www.douban.com/about?topic=licence" target="_blank">网络视听许可证0110418号</a> <a href="https://img3.doubanio.com/view/treasury_image/raw/public/ee56ad1c288f141.jpg" target="_blank">食品经营许可证</a> <br>京网文[2015]2026-368号 <a href="https://img3.doubanio.com/f/shire/80d71f876c40a3ecdfde2fe2afe3b1983a2cac64/pics/licence/publication2018.png" target="_blank">新出发京批字第直160029号</a> &nbsp;&nbsp;新出网证(京)字129号 <br>违法和不良信息投诉电话:4008353331-9&nbsp;<img src="https://img3.doubanio.com/view/treasury_image/raw/public/6316c63a81deef1.jpg" height="16" align="top"/> <br><img src="https://img3.doubanio.com/pics/icon/jubao.png" align="absmiddle" width="15"> <a href="http://www.12377.cn/">中国互联网举报中心</a> 电话:12377 <img src="https://img3.doubanio.com/pics/biaoshi.gif" align="absmiddle"> <a href="http://www.beian.gov.cn/portal/registerSystemInfo?recordcode=11010502000728" target="_blank">京公网安备11010502000728</a> </span> <a href="https://www.douban.com/hnypt/variformcyst.py" style="display: none;"></a> <span class="fright"> <a href="https://www.douban.com/about">关于豆瓣</a> · <a href="https://www.douban.com/jobs">在豆瓣工作</a> · <a href="https://www.douban.com/about?topic=contactus">联系我们</a> · <a href="https://www.douban.com/about/legal">法律声明</a> · <a href="https://help.douban.com/?app=main" target="_blank">帮助中心</a> · <a href="https://www.douban.com/doubanapp/">移动应用</a> · <a href="https://www.douban.com/partner/">豆瓣广告</a> </span> </div> </div> <script src="https://img3.doubanio.com/f/shire/4b1bbfaa49f8fb30d2719ec0ec08a11f24412ff5/js/core/do/_init_.js" data-cfg-corelib="https://img3.doubanio.com/f/shire/72ced6df41d4d158420cebdd254f9562942464e3/js/jquery.min.js"></script> <script type="text/javascript" src="https://img3.doubanio.com/misc/mixed_static/21176bed409dcd01.js"></script> <!-- douban ad begin --> <script type="text/javascript"> (function (global) { var newNode = global.document.createElement('script'), existingNode = global.document.getElementsByTagName('script')[0], adSource = '//erebor.douban.com/', userId = '', browserId = '_fCP8a-9JK4', criteria = '3:/', preview = '', debug = false, adSlots = ['dale_anonymous_homepage_top_for_crazy_ad', 'dale_anonymous_homepage_right_top', 'dale_anonymous_homepage_movie_bottom', 'dale_anonymous_home_page_top', 'dale_homepage_online_activity_promo_1', 'dale_anonymous_homepage_doublemint', 'dale_anonymous_home_page_middle', 'dale_anonymous_home_page_middle_2', 'dale_anonymous_home_page_bottom']; global.DoubanAdRequest = {src: adSource, uid: userId, bid: browserId, crtr: criteria, prv: preview, debug: debug}; 
global.DoubanAdSlots = (global.DoubanAdSlots || []).concat(adSlots); newNode.setAttribute('type', 'text/javascript'); newNode.setAttribute('src', '//img1.doubanio.com/NWt2ZHQ5bC9mL2FkanMvNmEyODIwODUxODI1ZmFhMDA5YzM3YzUzM2ZmOTJkZTk5NGUzODExYS9hZC5yZWxlYXNlLmpz'); newNode.setAttribute('async', true); existingNode.parentNode.insertBefore(newNode, existingNode); })(this); </script> <!-- douban ad end --> <!-- Google Tag Manager --> <noscript><iframe src="//www.googletagmanager.com/ns.html?id=GTM-5WP579" height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript> <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src='//www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);})(window,document,'script','dataLayer','GTM-5WP579');</script> <!-- End Google Tag Manager --> <script type="text/javascript"> var _paq = _paq || []; _paq.push(['trackPageView']); _paq.push(['enableLinkTracking']); (function() { var p=(('https:' == document.location.protocol) ? 'https' : 'http'), u=p+'://fundin.douban.com/'; _paq.push(['setTrackerUrl', u+'piwik']); _paq.push(['setSiteId', '100001']); var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0]; g.type='text/javascript'; g.defer=true; g.async=true; g.src=p+'://img3.doubanio.com/dae/fundin/piwik.js'; s.parentNode.insertBefore(g,s); })(); </script> <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-7019765-1']); _gaq.push(['_setCampNameKey', 'dcn']); _gaq.push(['_setCampSourceKey', 'dcs']); _gaq.push(['_setCampMediumKey', 'dcm']); _gaq.push(['_setCampTermKey', 'dct']); _gaq.push(['_setCampContentKey', 'dcc']); _gaq.push(['_addOrganic', 'baidu', 'word']); _gaq.push(['_addOrganic', 'soso', 'w']); _gaq.push(['_addOrganic', '3721', 'name']); _gaq.push(['_addOrganic', 'youdao', 'q']); _gaq.push(['_addOrganic', 'so.360.cn', 'q']); _gaq.push(['_addOrganic', 'vnet', 'kw']); _gaq.push(['_addOrganic', 'sogou', 'query']); _gaq.push(['_addIgnoredOrganic', '豆瓣']); _gaq.push(['_addIgnoredOrganic', 'douban']); _gaq.push(['_addIgnoredOrganic', '豆瓣网']); _gaq.push(['_addIgnoredOrganic', 'www.douban.com']); _gaq.push(['_setDomainName', '.douban.com']); _gaq.push(['_setCustomVar', 1, 'responsive_view_mode', 'desktop', 3]); _gaq.push(['_trackPageview']); _gaq.push(['_trackPageLoadTime']); window._ga_init = function() { var ga = document.createElement('script'); ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; ga.setAttribute('async', 'true'); document.documentElement.firstChild.appendChild(ga); }; if (window.addEventListener) { window.addEventListener('load', _ga_init, false); } else { window.attachEvent('onload', _ga_init); } </script> </body> </html>
notebooks/Multiple Correspondence Analysis.ipynb
###Markdown 1. Package Import
###Code
import pandas as pd
import numpy as np
import os
import json
import tempfile
import datetime
import re, string, unicodedata
###Output
_____no_output_____
###Markdown
###Code
!pip install prince
import prince
###Output
Requirement already satisfied: prince in /usr/local/lib/python3.6/dist-packages (0.6.3)
Requirement already satisfied: numpy>=1.16.1 in /usr/local/lib/python3.6/dist-packages (from prince) (1.17.4)
Requirement already satisfied: pandas>=0.24.0 in /usr/local/lib/python3.6/dist-packages (from prince) (0.25.3)
Requirement already satisfied: scipy>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from prince) (1.3.3)
Requirement already satisfied: matplotlib>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from prince) (3.1.2)
Requirement already satisfied: scikit-learn>=0.20.1 in /usr/local/lib/python3.6/dist-packages (from prince) (0.21.3)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24.0->prince) (2018.9)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24.0->prince) (2.6.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0.2->prince) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0.2->prince) (1.1.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=3.0.2->prince) (2.4.5)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.1->prince) (0.14.1)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.6.1->pandas>=0.24.0->prince) (1.12.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib>=3.0.2->prince) (42.0.2)
###Markdown 2. Data Loading
###Code
import prince
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

def read_csv(uploaded, name):
    # Helper for reading a CSV from a Colab upload dictionary.
    from io import StringIO
    s = str(uploaded[name], 'utf-8')
    data = StringIO(s)
    df = pd.read_csv(data)
    return df

# Business side info of the sample
link = "https://drive.google.com/open?id=1UacX5fkE-p3pLC9rdzow-74JIkmXA6Yn"
_, drive_id = link.split('=')
downloaded = drive.CreateFile({'id': drive_id})
downloaded.GetContentFile('./bus_sub_sideinfo.csv')
sub_sideinfo = pd.read_csv('./bus_sub_sideinfo.csv', index_col=0)
print(len(sub_sideinfo))
sub_sideinfo.head()

new_sideinfo = sub_sideinfo.iloc[:, 1:]
print(len(new_sideinfo))

# Data cleaning: turn the literal string 'None' into 0 and numeric strings into ints.
for i in range(len(new_sideinfo.columns)):
    new_sideinfo.iloc[:, i] = new_sideinfo.iloc[:, i].apply(
        lambda x: (int(x) if x != 'None' else 0) if isinstance(x, str) else x)
###Output
_____no_output_____
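###Markdown As an aside, the column-by-column cleaning loop above can usually be collapsed into a single vectorized call. The cell below is only a sketch under the assumption that the raw columns hold either numbers or the literal string 'None'; the `cleaned` name is introduced purely for illustration and is not part of the original pipeline.
###Code
# Sketch of a vectorized alternative to the cleaning loop above (illustrative only):
# replace the string 'None' with 0, then coerce every column to integers.
cleaned = sub_sideinfo.iloc[:, 1:].replace('None', 0).astype(int)
cleaned.shape
###Output
_____no_output_____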
###Markdown 3. Drop tags owned by 50 or fewer businesses
###Code
# Keep only tags owned by more than 50 businesses; drop the rest.
business_tags_amount = 0
stat = []
for i in new_sideinfo.columns:
    amount = sum(new_sideinfo[i])
    stat.append([i, amount])

new_tags = []
for i in stat:
    if i[1] > 50:
        business_tags_amount += 1
        new_tags.append(i[0])
print(business_tags_amount)

reduced_sideinfo = new_sideinfo[new_tags]
###Output
807
###Markdown 4. MCA Multiple correspondence analysis (MCA) is an extension of correspondence analysis (CA). It should be used when you have more than two categorical variables. The idea is simply to compute the one-hot encoded version of a dataset and apply CA on it.
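###Markdown Before applying MCA to the business tags, here is a minimal toy sketch of that idea. The tiny DataFrame below is made up purely for illustration (it is not the business-tag data): `prince.MCA` effectively one-hot encodes the categorical columns and runs CA on the resulting indicator matrix, which mirrors the recipe described above.
###Code
# Toy illustration of the MCA idea on hypothetical categorical data.
toy = pd.DataFrame({
    'cuisine': ['mexican', 'thai', 'thai', 'mexican', 'italian'],
    'price':   ['low', 'high', 'low', 'low', 'high'],
})
toy_mca = prince.MCA(n_components=2, random_state=42).fit(toy)
print(toy_mca.explained_inertia_)   # share of total inertia captured per component
print(toy_mca.transform(toy))       # row coordinates in the 2-D reduced space
###Output
_____no_output_____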
###Code
def mcaHT(new_sideinfo, n_components=50):
    # Fit MCA with the requested number of components and return the row
    # coordinates plus the per-component and total explained inertia.
    mca = prince.MCA(n_components=n_components, n_iter=3, copy=True, check_input=True,
                     engine='auto', random_state=42)
    mca = mca.fit(new_sideinfo)
    explained_inertia = mca.explained_inertia_
    sum_explained_inertia = sum(mca.explained_inertia_)
    result = mca.transform(new_sideinfo)
    return result, explained_inertia, sum_explained_inertia

total_inertia = []

# n_components = 10
result_10m, exp_inertia_10, sum_exp_inertia_10 = mcaHT(reduced_sideinfo, n_components=10)
total_inertia.append(sum_exp_inertia_10)
print('explained inertia for 10 components', exp_inertia_10)
print('total inertia for 10 components', sum_exp_inertia_10)
result_10m.to_csv('/content/drive/My Drive/result_sub10.csv')

# n_components = 20
result_20m, exp_inertia_20, sum_exp_inertia_20 = mcaHT(reduced_sideinfo, n_components=20)
total_inertia.append(sum_exp_inertia_20)
print('explained inertia for 20 components', exp_inertia_20)
print('total inertia for 20 components', sum_exp_inertia_20)
result_20m.to_csv('/content/drive/My Drive/result_sub20.csv')

# n_components = 30
result_30m, exp_inertia_30, sum_exp_inertia_30 = mcaHT(reduced_sideinfo, n_components=30)
total_inertia.append(sum_exp_inertia_30)
print('explained inertia for 30 components', exp_inertia_30)
print('total inertia for 30 components', sum_exp_inertia_30)
result_30m.to_csv('/content/drive/My Drive/result_sub30.csv')

# n_components = 40
result_40m, exp_inertia_40, sum_exp_inertia_40 = mcaHT(reduced_sideinfo, n_components=40)
total_inertia.append(sum_exp_inertia_40)
print('explained inertia for 40 components', exp_inertia_40)
print('total inertia for 40 components', sum_exp_inertia_40)

# n_components = 60
result_60, exp_inertia_60, sum_exp_inertia_60 = mcaHT(reduced_sideinfo, n_components=60)
total_inertia.append(sum_exp_inertia_60)
print('explained inertia for 60 components', exp_inertia_60)
print('total inertia for 60 components', sum_exp_inertia_60)

# n_components = 80
result_80, exp_inertia_80, sum_exp_inertia_80 = mcaHT(reduced_sideinfo, n_components=80)
total_inertia.append(sum_exp_inertia_80)
print('explained inertia for 80 components', exp_inertia_80)
print('total inertia for 80 components', sum_exp_inertia_80)

# n_components = 120
result_120, exp_inertia_120, sum_exp_inertia_120 = mcaHT(reduced_sideinfo, n_components=120)
total_inertia.append(sum_exp_inertia_120)
print('explained inertia for 120 components', exp_inertia_120)
print('total inertia for 120 components', sum_exp_inertia_120)

# n_components = 140
result_140, exp_inertia_140, sum_exp_inertia_140 = mcaHT(reduced_sideinfo, n_components=140)
total_inertia.append(sum_exp_inertia_140)
print('explained inertia for 140 components', exp_inertia_140)
print('total inertia for 140 components', sum_exp_inertia_140)

# n_components = 280
result_280, exp_inertia_280, sum_exp_inertia_280 = mcaHT(reduced_sideinfo, n_components=280)
total_inertia.append(sum_exp_inertia_280)
print('explained inertia for 280 components', exp_inertia_280)
print('total inertia for 280 components', sum_exp_inertia_280)

# n_components = 360
result_360, exp_inertia_360, sum_exp_inertia_360 = mcaHT(reduced_sideinfo, n_components=360)
total_inertia.append(sum_exp_inertia_360)
print('explained inertia for 360 components', exp_inertia_360)
print('total inertia for 360 components', sum_exp_inertia_360)

# n_components = 500
result_500, exp_inertia_500, sum_exp_inertia_500 = mcaHT(reduced_sideinfo, n_components=500)
total_inertia.append(sum_exp_inertia_500)
print('explained inertia for 500 components', exp_inertia_500)
print('total inertia for 500 components', sum_exp_inertia_500)

result_40m.to_csv('/content/drive/My Drive/result_sub40.csv')
result_60.to_csv('/content/drive/My Drive/result_sub60.csv')
result_80.to_csv('/content/drive/My Drive/result_sub80.csv')
result_120.to_csv('/content/drive/My Drive/result_sub120.csv')
result_140.to_csv('/content/drive/My Drive/result_sub140.csv')
result_280.to_csv('/content/drive/My Drive/result_sub280.csv')
result_360.to_csv('/content/drive/My Drive/result_sub360.csv')
###Output
_____no_output_____
###Markdown 5. Result Analysis
###Code
# Explained inertia grows with the number of components, so sorting the unique
# totals lines them up with the component counts in ascending order.
total_inertia1 = sorted(set(total_inertia))
cate = [10, 20, 30, 40, 60, 80, 120, 140, 280, 360, 500]
total_inertia1

import matplotlib.pyplot as plt
plt.plot(cate, total_inertia1)
plt.xlabel('number of components')
plt.ylabel('total explained inertia')
plt.title('Total explained inertia by number of components')
plt.show()
###Output
_____no_output_____
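###Markdown Because the leading per-component inertia proportions do not depend on how many components are requested, the same trade-off can be explored more cheaply by fitting MCA once with a large number of components and reading the cumulative explained inertia off that single fit, instead of re-fitting for every candidate size. The cell below is only a sketch of that idea; the names `big_mca`, `cumulative`, and `n_keep` and the 0.8 threshold are introduced here purely for illustration.
###Code
# Sketch: fit once with many components, then pick the smallest number of
# components whose cumulative explained inertia clears a chosen threshold.
big_mca = prince.MCA(n_components=500, n_iter=3, random_state=42).fit(reduced_sideinfo)
cumulative = np.cumsum(big_mca.explained_inertia_)

threshold = 0.8  # illustrative target, not a recommendation
n_keep = int(np.argmax(cumulative >= threshold)) + 1  # assumes the threshold is reached

print('components needed for', threshold, 'of the inertia:', n_keep)
plt.plot(range(1, len(cumulative) + 1), cumulative)
plt.axhline(threshold, linestyle='--')
plt.xlabel('number of components')
plt.ylabel('cumulative explained inertia')
plt.show()
###Output
_____no_output_____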