# Loss Functions

This Python script illustrates the different loss functions for regression and classification. We start by loading the necessary libraries and resetting the computational graph.

```
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()
```

### Create a Graph Session

```
sess = tf.Session()
```

## Numerical Predictions
---------------------------------

To start our investigation of loss functions, we begin by looking at numerical loss functions. To do so, we must create a sequence of predictions around a target. For this exercise, we consider the target to be zero.

```
# Various Predicted X-values
x_vals = tf.linspace(-1., 1., 500)

# Create our target of zero
target = tf.constant(0.)
```

### L2 Loss

The L2 loss is one of the most common regression loss functions. Here we show how to create it in TensorFlow and we evaluate it for plotting later.

```
# L2 loss
# L = (pred - actual)^2
l2_y_vals = tf.square(target - x_vals)
l2_y_out = sess.run(l2_y_vals)
```

### L1 Loss

An alternative loss function to consider is the L1 loss. This is very similar to L2 except that we take the `absolute value` of the difference instead of squaring it.

```
# L1 loss
# L = abs(pred - actual)
l1_y_vals = tf.abs(target - x_vals)
l1_y_out = sess.run(l1_y_vals)
```

### Pseudo-Huber Loss

The pseudo-Huber loss function is a smooth approximation to the L1 loss as the (predicted - target) values get larger. When the predicted values are close to the target, the pseudo-Huber loss behaves like the L2 loss.

```
# L = delta^2 * (sqrt(1 + ((pred - actual)/delta)^2) - 1)

# Pseudo-Huber with delta = 0.25
delta1 = tf.constant(0.25)
phuber1_y_vals = tf.multiply(tf.square(delta1), tf.sqrt(1. + tf.square((target - x_vals)/delta1)) - 1.)
phuber1_y_out = sess.run(phuber1_y_vals)

# Pseudo-Huber with delta = 5
delta2 = tf.constant(5.)
phuber2_y_vals = tf.multiply(tf.square(delta2), tf.sqrt(1. + tf.square((target - x_vals)/delta2)) - 1.)
phuber2_y_out = sess.run(phuber2_y_vals)
```

### Plot the Regression Losses

Here we use Matplotlib to plot the L1, L2, and Pseudo-Huber losses.

```
x_array = sess.run(x_vals)
plt.plot(x_array, l2_y_out, 'b-', label='L2 Loss')
plt.plot(x_array, l1_y_out, 'r--', label='L1 Loss')
plt.plot(x_array, phuber1_y_out, 'k-.', label='P-Huber Loss (0.25)')
plt.plot(x_array, phuber2_y_out, 'g:', label='P-Huber Loss (5.0)')
plt.ylim(-0.2, 0.4)
plt.legend(loc='lower right', prop={'size': 11})
plt.show()
```

## Categorical Predictions
-------------------------------

We now consider categorical loss functions. Here, the predictions will be around the target of 1.

```
# Various predicted X values
x_vals = tf.linspace(-3., 5., 500)

# Target of 1.0
target = tf.constant(1.)
targets = tf.fill([500,], 1.)
```

### Hinge Loss

The hinge loss is useful for categorical predictions. Here it is `max(0, 1 - (pred * actual))`.

```
# Hinge loss
# Use for predicting binary (-1, 1) classes
# L = max(0, 1 - (pred * actual))
hinge_y_vals = tf.maximum(0., 1. - tf.multiply(target, x_vals))
hinge_y_out = sess.run(hinge_y_vals)
```

### Cross Entropy Loss

The cross entropy loss is a very popular way to measure the loss between categorical targets and output model logits. You can read more about the details here: https://en.wikipedia.org/wiki/Cross_entropy

```
# Cross entropy loss
# L = -actual * (log(pred)) - (1-actual)(log(1-pred))
xentropy_y_vals = - tf.multiply(target, tf.log(x_vals)) - tf.multiply((1. - target), tf.log(1. - x_vals))
xentropy_y_out = sess.run(xentropy_y_vals)
```

### Sigmoid Entropy Loss

TensorFlow also has a sigmoid-entropy loss function. This is very similar to the above cross-entropy function except that we take the sigmoid of the predictions inside the function.

```
# L = -actual * (log(sigmoid(pred))) - (1-actual)(log(1-sigmoid(pred)))
# or
# L = max(pred, 0) - pred * actual + log(1 + exp(-abs(pred)))
x_val_input = tf.expand_dims(x_vals, 1)
target_input = tf.expand_dims(targets, 1)
xentropy_sigmoid_y_vals = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_val_input, labels=target_input)
xentropy_sigmoid_y_out = sess.run(xentropy_sigmoid_y_vals)
```

### Weighted Cross Entropy Loss

TensorFlow also has a weighted version of the sigmoid cross entropy loss above, in which the positive targets are weighted by a user-specified factor.

```
# Weighted cross entropy loss
# L = -actual * (log(sigmoid(pred))) * weights - (1-actual)(log(1-sigmoid(pred)))
# or
# L = (1 - actual) * pred + (1 + (weights - 1) * actual) * log(1 + exp(-pred))
weight = tf.constant(0.5)
xentropy_weighted_y_vals = tf.nn.weighted_cross_entropy_with_logits(targets, x_vals, weight)
xentropy_weighted_y_out = sess.run(xentropy_weighted_y_vals)
```

### Plot the Categorical Losses

```
# Plot the output
x_array = sess.run(x_vals)
plt.plot(x_array, hinge_y_out, 'b-', label='Hinge Loss')
plt.plot(x_array, xentropy_y_out, 'r--', label='Cross Entropy Loss')
plt.plot(x_array, xentropy_sigmoid_y_out, 'k-.', label='Cross Entropy Sigmoid Loss')
plt.plot(x_array, xentropy_weighted_y_out, 'g:', label='Weighted Cross Entropy Loss (x0.5)')
plt.ylim(-1.5, 3)
#plt.xlim(-1, 3)
plt.legend(loc='lower right', prop={'size': 11})
plt.show()
```

### Softmax Entropy and Sparse Entropy

Since it is hard to graph multiclass loss functions, we will show how to get the output instead.

```
# Softmax entropy loss
# L = -actual * (log(softmax(pred))) - (1-actual)(log(1-softmax(pred)))
unscaled_logits = tf.constant([[1., -3., 10.]])
target_dist = tf.constant([[0.1, 0.02, 0.88]])
softmax_xentropy = tf.nn.softmax_cross_entropy_with_logits(logits=unscaled_logits, labels=target_dist)
print(sess.run(softmax_xentropy))

# Sparse entropy loss
# Use when classes and targets have to be mutually exclusive
# L = sum( -actual * log(pred) )
unscaled_logits = tf.constant([[1., -3., 10.]])
sparse_target_dist = tf.constant([2])
sparse_xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=unscaled_logits, labels=sparse_target_dist)
print(sess.run(sparse_xentropy))
```
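The "naive" and "numerically stable" sigmoid cross-entropy formulas listed in the comments above are algebraically equivalent. As a quick sanity check, here is a small NumPy-only sketch (added for illustration, not part of the original recipe) that evaluates both forms and confirms they agree:

```
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def xent_naive(pred, actual):
    # L = -actual * log(sigmoid(pred)) - (1 - actual) * log(1 - sigmoid(pred))
    p = sigmoid(pred)
    return -actual * np.log(p) - (1.0 - actual) * np.log(1.0 - p)

def xent_stable(pred, actual):
    # L = max(pred, 0) - pred * actual + log(1 + exp(-|pred|))
    return np.maximum(pred, 0.0) - pred * actual + np.log1p(np.exp(-np.abs(pred)))

pred = np.linspace(-3.0, 5.0, 9)
actual = np.ones_like(pred)
print(np.allclose(xent_naive(pred, actual), xent_stable(pred, actual)))  # True
```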
### Entrepreneurial Competency Analysis and Predict ``` import pandas as pd import numpy as np import seaborn as sns import matplotlib as mat import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") data = pd.read_csv('entrepreneurial competency.csv') data.head() data.describe() data.corr() list(data) data.shape data_reasons = pd.DataFrame(data.ReasonsForLack.value_counts()) data_reasons data.ReasonsForLack.value_counts().idxmax() data.isnull().sum()[data.isnull().sum()>0] data['ReasonsForLack'] = data.ReasonsForLack.fillna('Desconhecido') fill_na = pd.DataFrame(data.ReasonsForLack.value_counts()) fill_na.head(5) edu_sector = data.EducationSector.value_counts().sort_values(ascending=False) edu_sector edu_sector_pd = pd.DataFrame(edu_sector, columns = ['Sector', 'Amount']) edu_sector_pd.Sector = edu_sector.index edu_sector_pd.Amount = edu_sector.values edu_sector_pd perc_sec = round(data.EducationSector.value_counts()/data.EducationSector.shape[0],2) edu_sector_pd['Percentual'] = perc_sec.values *100 edu_sector_pd labels = [str(edu_sector_pd['Sector'][i])+' '+'['+str(round(edu_sector_pd['Percentual'][i],2)) +'%'+']' for i in edu_sector_pd.index] from matplotlib import cm cs = cm.Set3(np.arange(100)) f = plt.figure() plt.pie(edu_sector_pd['Amount'], labeldistance = 1, radius = 3, colors = cs, wedgeprops = dict(width = 0.8)) plt.legend(labels = labels, loc = 'center', prop = {'size':12}) plt.title("Students distribution based on Education Sector - General Analysis", loc = 'Center', fontdict = {'fontsize':20,'fontweight':20}) plt.show() rank_edu_sec = data.EducationSector.value_counts().sort_values(ascending=False) rank = pd.DataFrame(rank_edu_sec, columns=['Sector', 'Amount']) rank.Sector = rank_edu_sec.index rank.Amount = rank_edu_sec.values rank_3 = rank.head(3) rank_3 fig, ax = plt.subplots(figsize=(8,5)) colors = ["#00e600", "#ff8c1a", "#a180cc"] sns.barplot(x="Sector", y="Amount", palette=colors, data=rank_3) ax.set_title("Sectors with largest students number",fontdict= {'size':12}) ax.xaxis.set_label_text("Sectors",fontdict= {'size':12}) ax.yaxis.set_label_text("Students amount",fontdict= {'size':12}) plt.show() fig, ax = plt.subplots(figsize=(8,6)) sns.histplot(data["Age"], color="#33cc33",kde=True, ax=ax) ax.set_title('Students distribution based on Age', fontsize= 15) plt.ylabel("Density (KDE)", fontsize= 15) plt.xlabel("Age", fontsize= 15) plt.show() fig = plt.figure(figsize=(10,5)) plt.boxplot(data.Age) plt.show() gender = data.Gender.value_counts() gender perc_gender = round((data.Gender.value_counts()/data.Gender.shape[0])*100, 2) perc_gender df_gender = pd.DataFrame(gender, columns=['Gender','Absolut_Value', 'Percent_Value']) df_gender.Gender = gender.index df_gender.Absolut_Value = gender.values df_gender.Percent_Value = perc_gender.values df_gender fig, ax = plt.subplots(figsize=(8,6)) sns.histplot(data["Gender"], color="#33cc33", ax=ax) ax.set_title('Students distribution by gender', fontsize= 15) plt.ylabel("Amount", fontsize= 15) plt.xlabel("Gender", fontsize= 15) plt.show() ``` # Education Sector, Gender and Age Analyses, where Target = 1 ``` data_y = data[data.y == 1] data_y.head() data_y.shape edu_sector_y = data_y.EducationSector.value_counts().sort_values(ascending=False) edu_sector_y edu_sector_ypd = pd.DataFrame(edu_sector_y, columns = ['Sector', 'Amount']) edu_sector_ypd.Sector = edu_sector_y.index edu_sector_ypd.Amount = edu_sector_y.values edu_sector_ypd perc_sec_y = 
round(data_y.EducationSector.value_counts()/data_y.EducationSector.shape[0],2)
edu_sector_ypd['Percent'] = perc_sec_y.values *100
edu_sector_ypd
labels = [str(edu_sector_ypd['Sector'][i])+' '+'['+str(round(edu_sector_ypd['Percent'][i],2)) +'%'+']' for i in edu_sector_ypd.index]
cs = cm.Set3(np.arange(100))
f = plt.figure()
plt.pie(edu_sector_ypd['Amount'], labeldistance = 1, radius = 3, colors = cs, wedgeprops = dict(width = 0.8))
plt.legend(labels = labels, loc = 'center', prop = {'size':12})
plt.title("Students distribution based on Education Sector - Target Analysis", loc = 'Center', fontdict = {'fontsize':20,'fontweight':20})
plt.show()
fig, ax = plt.subplots(figsize=(8,6))
sns.histplot(data_y["Age"], color="#1f77b4",kde=True, ax=ax)
ax.set_title('Students distribution based on Age - Target Analysis', fontsize= 15)
plt.ylabel("Density (KDE)", fontsize= 15)
plt.xlabel("Age", fontsize= 15)
plt.show()
gender_y = data_y.Gender.value_counts()
perc_gender_y = round((data_y.Gender.value_counts()/data_y.Gender.shape[0])*100, 2)
df_gender_y = pd.DataFrame(gender_y, columns=['Gender','Absolut_Value', 'Percent_Value'])
df_gender_y.Gender = gender_y.index
df_gender_y.Absolut_Value = gender_y.values
df_gender_y.Percent_Value = perc_gender_y.values
df_gender_y
fig, ax = plt.subplots(figsize=(8,6))
sns.histplot(data_y["Gender"], color="#9467bd", ax=ax)
ax.set_title('Students distribution by gender', fontsize= 15)
plt.ylabel("Amount", fontsize= 15)
plt.xlabel("Gender", fontsize= 15)
plt.show()
pcy= round(data_y.IndividualProject.value_counts()/data_y.IndividualProject.shape[0]*100,2)
pcy
pc= round(data.IndividualProject.value_counts()/data.IndividualProject.shape[0]*100,2)
pc
fig = plt.figure(figsize=(15,5))  # figure size
plt.subplots_adjust(wspace= 0.5)  # spacing between the subplots
plt.suptitle('Comparison of Individual Project between "y general" and "y == 1"')
plt.subplot(1,2,2)
plt.bar(data_y.IndividualProject.unique(), pcy, color = 'green')
plt.title("Individual Project Distribution - y==1")
plt.subplot(1,2,1)
plt.bar(data.IndividualProject.unique(), pc, color = 'grey')
plt.title("Individual Project Distribution - Full dataset")
plt.show()
round(data.Influenced.value_counts()/data.Influenced.shape[0],2)*100
round(data_y.Influenced.value_counts()/data_y.Influenced.shape[0],2)*100
```

Here we can observe that the categorical features have little influence on the target: for each of them, the distribution is almost the same in the full dataset and in the subset where y == 1. In other words, we should rely on the numerical features as predictors.
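One way to put a number on this observation is a chi-square test of independence between each categorical feature and the target. This is only an illustrative sketch added here, not part of the original analysis; it assumes the `data` frame and the listed columns from the cells above.

```
from scipy.stats import chi2_contingency

# Test whether each categorical feature is independent of the target y.
# A large p-value means there is no evidence of an association.
for col in ['EducationSector', 'Gender', 'IndividualProject', 'Influenced']:
    contingency = pd.crosstab(data[col], data['y'])
    chi2, p_value, dof, expected = chi2_contingency(contingency)
    print(f"{col}: chi2 = {chi2:.2f}, p-value = {p_value:.3f}")
```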
```
data.head()
list(data)
data_num = data.drop(['EducationSector', 'Age', 'Gender', 'City','MentalDisorder'], axis = 1)
data_num.head()
data_num.corr()
plt.hist(data_num.GoodPhysicalHealth, bins = 30)
plt.title("Good Physical Health distribution")
plt.show()
data_num_fil1 = data_num[data_num.y == 1]
plt.hist(data_num_fil1.GoodPhysicalHealth, bins = 30)
plt.title("Good Physical Health distribution, where target == 1")
plt.show()
pers_fil = round(data_num.GoodPhysicalHealth.value_counts()/data_num.GoodPhysicalHealth.shape[0],2)
pers_fil1 = round(data_num_fil1.GoodPhysicalHealth.value_counts()/data_num_fil1.GoodPhysicalHealth.shape[0],2)
pers_fil
pers_fil1
list(data_num)

def plot_features(df, df_filtered, columns):
    df_original = df.copy()
    df2 = df_filtered.copy()
    for column in columns:
        a = df_original[column]
        b = df2[column]
        fig = plt.figure(figsize=(15,5))  # figure size
        plt.subplots_adjust(wspace= 0.5)  # spacing between the subplots
        plt.suptitle('Comparison of the different features between "y general" and "y == 1"')
        plt.subplot(1,2,2)
        plt.bar(b.unique(), round(b.value_counts()/b.shape[0],2), color = 'green')
        plt.title("Comparison for " + column + " on 'y == 1'")
        plt.subplot(1,2,1)
        plt.bar(a.unique(), round(a.value_counts()/a.shape[0],2), color = 'grey')
        plt.title("Comparison for " + column + " - Full dataset")
        plt.show()

plot_features(data_num,data_num_fil1,columns=['Influenced', 'Perseverance', 'DesireToTakeInitiative', 'Competitiveness', 'SelfReliance', 'StrongNeedToAchieve', 'SelfConfidence'])
```

### Data Transformation and Preprocessing

```
data_num.shape
data_num.dtypes
from sklearn.preprocessing import OneHotEncoder
X = data_num.drop(['y', 'Influenced', 'ReasonsForLack'], axis = 1)

def ohe_drop(data, columns):
    df = data.copy()
    ohe = OneHotEncoder()
    for column in columns:
        var_ohe = df[column].values.reshape(-1,1)
        ohe.fit(var_ohe)
        ohe.transform(var_ohe)
        OHE = pd.DataFrame(ohe.transform(var_ohe).toarray(), columns = ohe.categories_[0].tolist())
        df = pd.concat([df, OHE], axis = 1)
        df = df.drop([column],axis = 1)
    return df

X = ohe_drop(data_num, columns =['Perseverance', 'DesireToTakeInitiative', 'Competitiveness', 'SelfReliance', 'StrongNeedToAchieve', 'SelfConfidence', 'GoodPhysicalHealth', 'Influenced', 'KeyTraits'] )
X
X = X.drop(['y', 'ReasonsForLack', 'IndividualProject'], axis = 1)
y = np.array(data_num.y)
X.shape
y.shape
X = np.array(X)
type(X)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.30, random_state = 0)
X_train.shape
X_test.shape
y_train.shape
y_test.shape
```

### Logistic Regression

```
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
logreg.predict(X_train)
logreg.predict(X_train)[:20]
y_train[:20]
```

### Performance metrics calculation

Accuracy Score (the feature DataFrame was transformed into a matrix above).

```
from sklearn.metrics import accuracy_score
accuracy_score(y_true = y_train, y_pred = logreg.predict(X_train))
```

Cross-validation

```
from sklearn.model_selection import KFold
kf = KFold(n_splits = 3)
classif= LogisticRegression()
train_accuracy_list = []
val_accuracy_list = []
for train_idx, val_idx in kf.split(X_train, y_train):
    Xtrain_folds = X_train[train_idx]
    ytrain_folds = y_train[train_idx]
    Xval_fold = X_train[val_idx]
    yval_fold = y_train[val_idx]
    classif.fit(Xtrain_folds,ytrain_folds)
    train_pred = classif.predict(Xtrain_folds)
    pred_validacao = classif.predict(Xval_fold)
    train_accuracy_list.append(accuracy_score(y_pred = train_pred, y_true = ytrain_folds))
    val_accuracy_list.append(accuracy_score(y_pred = pred_validacao, y_true = yval_fold))
print("training accuracies: \n", train_accuracy_list, " \n| mean: ", np.mean(train_accuracy_list))
print()
print("validation accuracies: \n", val_accuracy_list, " \n| mean: ", np.mean(val_accuracy_list))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_true = y_train, y_pred = logreg.predict(X_train))
cm = confusion_matrix(y_true = y_train, y_pred = logreg.predict(X_train))
cm[1,1] / cm[1, :].sum()
cm[1,1] / cm[:, 1].sum()
from sklearn.metrics import precision_score, recall_score
from sklearn.metrics import f1_score
f1_score(y_true = y_train, y_pred = logreg.predict(X_train))
```

### Test Set Prediction

```
logreg.predict(X_test)
f1_score(y_true = y_test, y_pred = logreg.predict(X_test))
```

The F1 score on "y_test" is too low, so I'll try to improve the model.

### Model Optimization

```
from sklearn.feature_selection import SelectKBest, chi2

def try_k(x, y, n):
    the_best = SelectKBest(score_func = chi2, k =n)
    fit = the_best.fit(x, y)
    features = fit.transform(x)
    logreg.fit(features,y)
    preds = logreg.predict(features)
    f1 = f1_score(y_true = y, y_pred = preds)
    precision = precision_score(y_true = y, y_pred = preds)
    recall = recall_score(y_true = y, y_pred = preds)
    return preds, f1, precision, recall

n_list = [5, 10, 15, 20, 25, 30]  # numbers of features to try; the last value matches the k used below
for n in n_list:
    preds, f1, precision, recall = try_k(X_test, y_test, n)
    print(n, f1, precision, recall)

from sklearn.metrics import classification_report, plot_confusion_matrix,plot_roc_curve
the_best = SelectKBest(score_func = chi2, k =30)
fit = the_best.fit(X_test, y_test)
feature = fit.transform(X_test)
preds = logreg.predict(feature)
plot_confusion_matrix(logreg,feature,y_test)
plot_roc_curve(logreg,feature,y_test)
print(classification_report(y_test, preds))
```
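The selection above is fitted on the test split, which leaks information into the evaluation. A more conventional pattern, sketched here as an illustration rather than as the original author's method, fits the selector and the classifier on the training data only, inside a single pipeline (the value of `k` is just an example):

```
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Feature selection and classification fitted on the training split only
pipe = Pipeline([
    ('select', SelectKBest(score_func=chi2, k=20)),   # k=20 is illustrative
    ('clf', LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)
print('train F1:', f1_score(y_train, pipe.predict(X_train)))
print('test F1: ', f1_score(y_test, pipe.predict(X_test)))
```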
# Mean Shift using Standard Scaler This Code template is for the Cluster analysis using a simple Mean Shift(Centroid-Based Clustering using a flat kernel) Clustering algorithm along with feature scaling using Standard Scaler and includes 2D and 3D cluster visualization of the Clusters. ### Required Packages ``` !pip install plotly import operator import warnings import itertools import numpy as np import pandas as pd import seaborn as sns from sklearn.preprocessing import StandardScaler import plotly.express as px import matplotlib.pyplot as plt from mpl_toolkits import mplot3d import plotly.graph_objects as go from sklearn.cluster import MeanShift, estimate_bandwidth warnings.filterwarnings("ignore") ``` ### Initialization Filepath of CSV file ``` file_path = "" ``` List of features which are required for model training ``` features=[] ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X. ``` X = df[features] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) X.head() ``` ####Feature Scaling Standard Scaler - Standardize features by removing the mean and scaling to unit variance Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using transform.<br> [For more information click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) ``` scaler = StandardScaler() X_scaled = scaler.fit_transform(X) ``` ### Model Mean shift clustering using a flat kernel. Mean shift clustering aims to discover “blobs” in a smooth density of samples. It is a centroid-based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids. Seeding is performed using a binning technique for scalability. [More information](https://analyticsindiamag.com/hands-on-tutorial-on-mean-shift-clustering-algorithm/) #### Tuning Parameters 1. bandwidthfloat, default=None > Bandwidth used in the RBF kernel. 
If not given, the bandwidth is estimated using sklearn.cluster.estimate_bandwidth 2. seedsarray-like of shape (n_samples, n_features), default=None > Seeds used to initialize kernels. If not set, the seeds are calculated by clustering.get_bin_seeds with bandwidth as the grid size and default values for other parameters. 3. bin_seedingbool, default=False > If true, initial kernel locations are not locations of all points, but rather the location of the discretized version of points, where points are binned onto a grid whose coarseness corresponds to the bandwidth. 4. min_bin_freqint, default=1 > To speed up the algorithm, accept only those bins with at least min_bin_freq points as seeds. 5. cluster_allbool, default=True > If true, then all points are clustered, even those orphans that are not within any kernel. Orphans are assigned to the nearest kernel. If false, then orphans are given cluster label -1 6. n_jobsint, default=None > The number of jobs to use for the computation. This works by computing each of the n_init runs in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. 7. max_iterint, default=300 > Maximum number of iterations, per seed point before the clustering operation terminates [For more detail on API](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.MeanShift.html) <br> <br> ####Estimate Bandwidth Estimate the bandwidth to use with the mean-shift algorithm. That this function takes time at least quadratic in n_samples. For large datasets, it’s wise to set that parameter to a small value. ``` bandwidth = estimate_bandwidth(X_scaled, quantile=0.15) ms = MeanShift(bandwidth=bandwidth, bin_seeding=True) ms.fit(X_scaled) y_pred = ms.predict(X_scaled) ``` ### Cluster Analysis First, we add the cluster labels from the trained model into the copy of the data frame for cluster analysis/visualization. ``` ClusterDF = X.copy() ClusterDF['ClusterID'] = y_pred ClusterDF.head() ``` #### Cluster Records The below bar graphs show the number of data points in each available cluster. ``` ClusterDF['ClusterID'].value_counts().plot(kind='bar') ``` #### Cluster Plots Below written functions get utilized to plot 2-Dimensional and 3-Dimensional cluster plots on the available set of features in the dataset. Plots include different available clusters along with cluster centroid. 
``` def Plot2DCluster(X_Cols,df): for i in list(itertools.combinations(X_Cols, 2)): plt.rcParams["figure.figsize"] = (8,6) xi,yi=df.columns.get_loc(i[0]),df.columns.get_loc(i[1]) for j in df['ClusterID'].unique(): DFC=df[df.ClusterID==j] plt.scatter(DFC[i[0]],DFC[i[1]],cmap=plt.cm.Accent,label=j) plt.scatter(ms.cluster_centers_[:,xi],ms.cluster_centers_[:,yi],marker="^",color="black",label="centroid") plt.xlabel(i[0]) plt.ylabel(i[1]) plt.legend() plt.show() def Plot3DCluster(X_Cols,df): for i in list(itertools.combinations(X_Cols, 3)): xi,yi,zi=df.columns.get_loc(i[0]),df.columns.get_loc(i[1]),df.columns.get_loc(i[2]) fig,ax = plt.figure(figsize = (16, 10)),plt.axes(projection ="3d") ax.grid(b = True, color ='grey',linestyle ='-.',linewidth = 0.3,alpha = 0.2) for j in df['ClusterID'].unique(): DFC=df[df.ClusterID==j] ax.scatter3D(DFC[i[0]],DFC[i[1]],DFC[i[2]],alpha = 0.8,cmap=plt.cm.Accent,label=j) ax.scatter3D(ms.cluster_centers_[:,xi],ms.cluster_centers_[:,yi],ms.cluster_centers_[:,zi], marker="^",color="black",label="centroid") ax.set_xlabel(i[0]) ax.set_ylabel(i[1]) ax.set_zlabel(i[2]) plt.legend() plt.show() def Plotly3D(X_Cols,df): for i in list(itertools.combinations(X_Cols,3)): xi,yi,zi=df.columns.get_loc(i[0]),df.columns.get_loc(i[1]),df.columns.get_loc(i[2]) fig1 = px.scatter_3d(ms.cluster_centers_,x=ms.cluster_centers_[:,xi],y=ms.cluster_centers_[:,yi], z=ms.cluster_centers_[:,zi]) fig2=px.scatter_3d(df, x=i[0], y=i[1],z=i[2],color=df['ClusterID']) fig3 = go.Figure(data=fig1.data + fig2.data, layout=go.Layout(title=go.layout.Title(text="x:{}, y:{}, z:{}".format(i[0],i[1],i[2]))) ) fig3.show() sns.set_style("whitegrid") sns.set_context("talk") plt.rcParams["lines.markeredgewidth"] = 1 sns.pairplot(data=ClusterDF, hue='ClusterID', palette='Dark2', height=5) Plot2DCluster(X.columns,ClusterDF) Plot3DCluster(X.columns,ClusterDF) Plotly3D(X.columns,ClusterDF) ``` #### [Created by Anu Rithiga](https://github.com/iamgrootsh7)
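Because the template leaves `file_path` and `features` empty, here is a self-contained sketch (on synthetic data from `make_blobs`, which is not part of the template) that exercises the same scaling, bandwidth estimation, and clustering steps end to end:

```
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import MeanShift, estimate_bandwidth

# Synthetic data standing in for the user-supplied CSV
X_demo, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=0)
X_demo_scaled = StandardScaler().fit_transform(X_demo)

bw = estimate_bandwidth(X_demo_scaled, quantile=0.15)
ms_demo = MeanShift(bandwidth=bw, bin_seeding=True).fit(X_demo_scaled)
print("estimated bandwidth:", round(bw, 3))
print("number of clusters found:", len(ms_demo.cluster_centers_))
```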
# Learning a LJ potential [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Teoroo-CMC/PiNN/blob/master/docs/notebooks/Learn_LJ_potential.ipynb) This notebook showcases the usage of PiNN with a toy problem of learning a Lennard-Jones potential with a hand-generated dataset. It serves as a basic test, and demonstration of the workflow with PiNN. ``` # Install PiNN !pip install git+https://github.com/Teoroo-CMC/PiNN %matplotlib inline import os, warnings import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from ase import Atoms from ase.calculators.lj import LennardJones os.environ['CUDA_VISIBLE_DEVICES'] = '' index_warning = 'Converting sparse IndexedSlices' warnings.filterwarnings('ignore', index_warning) ``` ## Reference data ``` # Helper function: get the position given PES dimension(s) def three_body_sample(atoms, a, r): x = a * np.pi / 180 pos = [[0, 0, 0], [0, 2, 0], [0, r*np.cos(x), r*np.sin(x)]] atoms.set_positions(pos) return atoms atoms = Atoms('H3', calculator=LennardJones()) na, nr = 50, 50 arange = np.linspace(30,180,na) rrange = np.linspace(1,3,nr) # Truth agrid, rgrid = np.meshgrid(arange, rrange) egrid = np.zeros([na, nr]) for i in range(na): for j in range(nr): atoms = three_body_sample(atoms, arange[i], rrange[j]) egrid[i,j] = atoms.get_potential_energy() # Samples nsample = 100 asample, rsample = [], [] distsample = [] data = {'e_data':[], 'f_data':[], 'elems':[], 'coord':[]} for i in range(nsample): a, r = np.random.choice(arange), np.random.choice(rrange) atoms = three_body_sample(atoms, a, r) dist = atoms.get_all_distances() dist = dist[np.nonzero(dist)] data['e_data'].append(atoms.get_potential_energy()) data['f_data'].append(atoms.get_forces()) data['coord'].append(atoms.get_positions()) data['elems'].append(atoms.numbers) asample.append(a) rsample.append(r) distsample.append(dist) plt.pcolormesh(agrid, rgrid, egrid, shading='auto') plt.plot(asample, rsample, 'rx') plt.colorbar() ``` ## Dataset from numpy arrays ``` from pinn.io import sparse_batch, load_numpy data = {k:np.array(v) for k,v in data.items()} dataset = lambda: load_numpy(data, splits={'train':8, 'test':2}) train = lambda: dataset()['train'].shuffle(100).repeat().apply(sparse_batch(100)) test = lambda: dataset()['test'].repeat().apply(sparse_batch(100)) ``` ## Training ### Model specification ``` import pinn params={ 'model_dir': '/tmp/PiNet', 'network': { 'name': 'PiNet', 'params': { 'ii_nodes':[8,8], 'pi_nodes':[8,8], 'pp_nodes':[8,8], 'out_nodes':[8,8], 'depth': 4, 'rc': 3.0, 'atom_types':[1]}}, 'model':{ 'name': 'potential_model', 'params': { 'e_dress': {1:-0.3}, # element-specific energy dress 'e_scale': 2, # energy scale for prediction 'e_unit': 1.0, # output unit of energy dur 'log_e_per_atom': True, # log e_per_atom and its distribution 'use_force': True}}} # include force in Loss function model = pinn.get_model(params) %rm -rf /tmp/PiNet train_spec = tf.estimator.TrainSpec(input_fn=train, max_steps=5e3) eval_spec = tf.estimator.EvalSpec(input_fn=test, steps=10) tf.estimator.train_and_evaluate(model, train_spec, eval_spec) ``` ## Validate the results ### PES analysis ``` atoms = Atoms('H3', calculator=pinn.get_calc(model)) epred = np.zeros([na, nr]) for i in range(na): for j in range(nr): a, r = arange[i], rrange[j] atoms = three_body_sample(atoms, a, r) epred[i,j] = atoms.get_potential_energy() plt.pcolormesh(agrid, rgrid, epred, shading='auto') plt.colorbar() plt.title('NN predicted PES') plt.figure() 
plt.pcolormesh(agrid, rgrid, np.abs(egrid-epred), shading='auto') plt.plot(asample, rsample, 'rx') plt.title('NN Prediction error and sampled points') plt.colorbar() ``` ### Pairwise potential analysis ``` atoms1 = Atoms('H2', calculator=pinn.get_calc(model)) atoms2 = Atoms('H2', calculator=LennardJones()) nr2 = 100 rrange2 = np.linspace(1,1.9,nr2) epred = np.zeros(nr2) etrue = np.zeros(nr2) for i in range(nr2): pos = [[0, 0, 0], [rrange2[i], 0, 0]] atoms1.set_positions(pos) atoms2.set_positions(pos) epred[i] = atoms1.get_potential_energy() etrue[i] = atoms2.get_potential_energy() f, (ax1, ax2) = plt.subplots(2,1, gridspec_kw = {'height_ratios':[3, 1]}) ax1.plot(rrange2, epred) ax1.plot(rrange2, etrue,'--') ax1.legend(['Prediction', 'Truth'], loc=4) _=ax2.hist(np.concatenate(distsample,0), 20, range=(1,1.9)) ``` ## Molecular dynamics with ASE ``` from ase import units from ase.io import Trajectory from ase.md.nvtberendsen import NVTBerendsen from ase.md.velocitydistribution import MaxwellBoltzmannDistribution atoms = Atoms('H', cell=[2, 2, 2], pbc=True) atoms = atoms.repeat([5,5,5]) atoms.rattle() atoms.set_calculator(pinn.get_calc(model)) MaxwellBoltzmannDistribution(atoms, 300*units.kB) dyn = NVTBerendsen(atoms, 0.5 * units.fs, 300, taut=0.5*100*units.fs) dyn.attach(Trajectory('ase_nvt.traj', 'w', atoms).write, interval=10) dyn.run(5000) ```
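As a simple post-processing check, sketched here and not part of the original notebook, one can read back `ase_nvt.traj` and plot the instantaneous temperature of each saved frame against the 300 K thermostat target (this assumes momenta were stored in the trajectory, which is the default during MD):

```
from ase.io import Trajectory
import matplotlib.pyplot as plt

# Read the trajectory written every 10 steps above and extract temperatures
traj = Trajectory('ase_nvt.traj')
temperatures = [frame.get_temperature() for frame in traj]

plt.plot(temperatures)
plt.axhline(300, color='k', linestyle='--', label='target 300 K')
plt.xlabel('saved frame')
plt.ylabel('temperature (K)')
plt.legend()
plt.show()
```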
<h1><center>Assessment 5 on Advanced Data Analysis using Pandas</center></h1>

## **Project 2: Correlation Between the GDP Rate and Unemployment Rate (2019)**

```
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pandas as pd
!pip install pandas_datareader
```

# Getting the Datasets

We got the two datasets we will be considering in this project from the World Bank website. The first dataset, available at http://data.worldbank.org/indicator/NY.GDP.MKTP.CD, lists the GDP of the world's countries in current US dollars, for various years. The use of a common currency allows us to compare GDP values across countries. The other dataset, available at https://data.worldbank.org/indicator/SL.UEM.TOTL.NE.ZS, lists the unemployment rate of the world's countries. The datasets were downloaded as Excel files in June 2021.

```
GDP_INDICATOR = 'NY.GDP.MKTP.CD'
# below are the first five rows of the first dataset, the GDP indicator.
gdpReset= pd.read_excel("API_NY.GDP.MKTP.CD.xls")
gdpReset.head()
# below are the last five rows of the first dataset, the GDP indicator.
gdpReset.tail()
UNEMPLOYMENT_INDICATORS = 'SL.UEM.TOTL.NE.ZS'
# below are the first five rows of the second dataset, the unemployment rate indicator.
UnemployReset= pd.read_excel('API_SL.UEM.TOTL.NE.ZS.xls')
UnemployReset.head()
# below are the last five rows of the second dataset, the unemployment rate indicator.
UnemployReset.tail()
```

# Cleaning the data

Inspecting the data with the head() and tail() methods shows that for some countries the GDP and unemployment rate values are missing. The data is therefore cleaned by removing the rows with unavailable values using the dropna() method.

```
gdpCountries = gdpReset[0:].dropna()
gdpCountries
UnemployCountries = UnemployReset[0:].dropna()
UnemployCountries
```

# Transforming the data

The World Bank reports GDP in US dollars and cents. To make the data easier to read, the GDP is converted to millions of British pounds with the following auxiliary functions, using the average 2020 dollar-to-pound conversion rate provided by http://www.ukforex.co.uk/forex-tools/historical-rate-tools/yearly-average-rates.

```
def roundToMillions (value):
    return round(value / 1000000)

def usdToGBP (usd):
    return usd / 1.284145

GDP = 'GDP (£m)'
gdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions)
gdpCountries.head()
```

The unnecessary columns can be dropped.

```
COUNTRY = 'Country Name'
headings = [COUNTRY, GDP]
gdpClean = gdpCountries[headings]
gdpClean.head()
```

```
UNEMPLOYMENT = 'Unemployment Rate'
UnemployCountries[UNEMPLOYMENT] = UnemployCountries[UNEMPLOYMENT_INDICATORS].apply(round)
headings = [COUNTRY, UNEMPLOYMENT]
UnempClean = UnemployCountries[headings]
UnempClean.head()
```

# Combining the data

The tables are combined through an inner join merge on the common 'Country Name' column.

```
gdpVsUnemp = pd.merge(gdpClean, UnempClean, on=COUNTRY, how='inner')
gdpVsUnemp.head()
```

# Calculating the correlation

To measure whether the unemployment rate and the GDP grow together or not, the Spearman rank correlation coefficient is used.

```
from scipy.stats import spearmanr
gdpColumn = gdpVsUnemp[GDP]
UnemployColumn = gdpVsUnemp[UNEMPLOYMENT]
(correlation, pValue) = spearmanr(gdpColumn, UnemployColumn)
print('The correlation is', correlation)
if pValue < 0.05:
    print('It is statistically significant.')
else:
    print('It is not statistically significant.')
```

The value shows an inverse correlation, i.e. richer countries tend to have a lower unemployment rate. A rise of one percentage point in unemployment will reduce real GDP growth by 0.26 percentage points with a delay of seven lags. Studies have shown that the higher the GDP growth rate of a country, the higher the employment rate, and thus the lower the unemployment rate. A negative or inverse correlation between two variables indicates that one variable increases while the other decreases, and vice versa.

# Visualizing the Data

Measures of correlation can be misleading, so it is best to view the overall picture with a scatterplot. The GDP axis uses a logarithmic scale to better display the vast range of GDP values, from a few million to several million million pounds.

```
%matplotlib inline
gdpVsUnemp.plot(x=GDP, y=UNEMPLOYMENT, kind='scatter', grid=True, logx=True, figsize=(10, 4))
```

The plot shows there is no clear correlation: there are some poor countries with a low unemployment rate and a few moderately rich countries with a high unemployment rate. However, most extremely rich countries have a low unemployment rate. Countries with around ten thousand (10^4) to a million (10^6) million pounds GDP cover almost the full range of values, from below 5 to over 10 per cent, but there are still some countries with more than a hundred thousand (10^5) million pounds GDP that have a high unemployment rate. Comparing the 10 poorest countries and the 10 countries with the lowest unemployment rate shows that total GDP is a rather crude measure. The population size should be taken into consideration for a more precise definition of what 'poor' and 'rich' mean.

```
# the 10 countries with the lowest GDP
gdpVsUnemp.sort_values(GDP).head(10)
# the 10 countries with the lowest unemployment rate
gdpVsUnemp.sort_values(UNEMPLOYMENT).head(10)
```

# Conclusion

The correlation between real GDP growth and unemployment is very important for policy makers in order to obtain a sustainable rise in living standards. If the GDP growth rate is below its natural rate, it is advisable to promote employment, because the resulting rise in total income will not generate inflationary pressures. In contrast, if GDP growth is above its natural level, policy makers will decide not to intensively promote the creation of new jobs, in order to obtain a sustainable growth rate which will not generate inflation. The correlation coefficient shows that the variables are negatively correlated, as predicted by the theory. These values are particularly important for policy makers in order to obtain an optimal relation between unemployment and real GDP growth.
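As a quick cross-check of the Spearman coefficient reported above (added here for illustration), the same number can be obtained by rank-transforming both columns and correlating the ranks, which is exactly what the Spearman statistic does:

```
# Spearman correlation = Pearson correlation of the ranks
rank_corr = gdpVsUnemp[GDP].rank().corr(gdpVsUnemp[UNEMPLOYMENT].rank())
print('Spearman via ranks:', rank_corr)

# pandas can also compute it directly
print('pandas spearman:   ', gdpVsUnemp[GDP].corr(gdpVsUnemp[UNEMPLOYMENT], method='spearman'))
```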
<a href="https://colab.research.google.com/github/yohanesnuwara/66DaysOfData/blob/main/D01_PCA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Principal Component Analysis ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from sklearn.decomposition import PCA from sklearn.datasets import load_digits, fetch_lfw_people from sklearn.preprocessing import StandardScaler rng = np.random.RandomState(1) X = np.dot(rng.rand(2, 5), rng.randn(5, 200)).T plt.scatter(X[:, 0], X[:, 1]) plt.axis('equal') plt.show() pca = PCA(n_components=2) pca.fit(X) ``` PCA components are called eigenvectors. ``` print(pca.components_) print(pca.explained_variance_) def draw_vector(v0, v1, ax=None): ax = ax or plt.gca() arrowprops=dict(arrowstyle='->', linewidth=2, shrinkA=0, shrinkB=0) ax.annotate('', v1, v0, arrowprops=arrowprops) # plot data plt.scatter(X[:, 0], X[:, 1]) for length, vector in zip(pca.explained_variance_, pca.components_): v = vector * 3 * np.sqrt(length) draw_vector(pca.mean_, pca.mean_ + v) plt.axis('equal'); ``` ## PCA to reduce dimension. ``` pca = PCA(n_components=1) pca.fit(X) X_pca = pca.transform(X) print("original shape: ", X.shape) print("transformed shape:", X_pca.shape) X_new = pca.inverse_transform(X_pca) plt.scatter(X[:, 0], X[:, 1], alpha=0.2) plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8) plt.axis('equal') plt.show() ``` ## PCA for digit classification. ``` digits = load_digits() print(digits.data.shape) pca = PCA(2) # project from 64 to 2 dimensions projected = pca.fit_transform(digits.data) print(digits.data.shape) print(projected.shape) plt.scatter(projected[:, 0], projected[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('jet', 10)) plt.xlabel('component 1') plt.ylabel('component 2') plt.colorbar() plt.show() ``` Here, PCA can be used to approximate a digit. For instance, a 64-pixel image can be approximated by a dimensionality reduced 8-pixel image. 
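As a quick numeric illustration of this idea (separate from the plotting helper defined next), one can project a single digit onto the first eight components and measure how much of it is recovered; the variable names below are new and purely illustrative:

```
# Reconstruct one digit from its first 8 principal components
pca8 = PCA(n_components=8).fit(digits.data)
coeffs = pca8.transform(digits.data[:1])          # shape (1, 8)
reconstruction = pca8.inverse_transform(coeffs)   # back to 64 pixels

err = np.linalg.norm(digits.data[0] - reconstruction[0]) / np.linalg.norm(digits.data[0])
print('relative reconstruction error with 8 components:', round(err, 3))
```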
Reconstructing using PCA as a basis function: $$image(x)=mean+x1⋅(basis 1)+x2⋅(basis 2)+x3⋅(basis 3)⋯$$ ``` def plot_pca_components(x, coefficients=None, mean=0, components=None, imshape=(8, 8), n_components=8, fontsize=12, show_mean=True): if coefficients is None: coefficients = x if components is None: components = np.eye(len(coefficients), len(x)) mean = np.zeros_like(x) + mean fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2)) g = plt.GridSpec(2, 4 + bool(show_mean) + n_components, hspace=0.3) def show(i, j, x, title=None): ax = fig.add_subplot(g[i, j], xticks=[], yticks=[]) ax.imshow(x.reshape(imshape), interpolation='nearest', cmap='binary') if title: ax.set_title(title, fontsize=fontsize) show(slice(2), slice(2), x, "True") approx = mean.copy() counter = 2 if show_mean: show(0, 2, np.zeros_like(x) + mean, r'$\mu$') show(1, 2, approx, r'$1 \cdot \mu$') counter += 1 for i in range(n_components): approx = approx + coefficients[i] * components[i] show(0, i + counter, components[i], r'$c_{0}$'.format(i + 1)) show(1, i + counter, approx, r"${0:.2f} \cdot c_{1}$".format(coefficients[i], i + 1)) if show_mean or i > 0: plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom', transform=plt.gca().transAxes, fontsize=fontsize) show(slice(2), slice(-2, None), approx, "Approx") return fig pca = PCA(n_components=8) Xproj = pca.fit_transform(digits.data) fig = plot_pca_components(digits.data[3], Xproj[3], pca.mean_, pca.components_, show_mean=False) ``` Choose the optimum number of components. 20 is good to account over 90% of variance. ``` pca = PCA().fit(digits.data) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance') plt.show() ``` ## PCA for noise filtering ``` def plot_digits(data): fig, axes = plt.subplots(4, 10, figsize=(10, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(data[i].reshape(8, 8), cmap='binary', interpolation='nearest', clim=(0, 16)) plot_digits(digits.data) ``` Add random noise. ``` np.random.seed(42) noisy = np.random.normal(digits.data, 5) # Tweak this number as level of noise plot_digits(noisy) ``` Make the PCA preserve 50% of the variance. There are 12 components the most fit one. ``` pca = PCA(0.50).fit(noisy) print(pca.n_components_) # See the number of components given % preservations x = np.linspace(0.1, 0.9, 19) comp = [(PCA(i).fit(noisy)).n_components_ for i in x] plt.plot(x, comp) plt.xlabel('Preservation') plt.ylabel('Number of components fit') plt.show() components = pca.transform(noisy) filtered = pca.inverse_transform(components) plot_digits(filtered) ``` ## Eigenfaces ``` faces = fetch_lfw_people(min_faces_per_person=60) print(faces.target_names) print(faces.images.shape) ``` There are 3,000 dimensions. Take a look at first 150 components. ``` pca = PCA(150) pca.fit(faces.data) ``` Look at the first 24 components (eigenvectors or "eigenfaces"). ``` fig, axes = plt.subplots(3, 8, figsize=(9, 4), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone') ``` 150 is good to account for 90% of variance. Using these 150 components, we would recover most of the essential characteristics of the data. 
``` plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); # Compute the components and projected faces pca = PCA(150).fit(faces.data) components = pca.transform(faces.data) projected = pca.inverse_transform(components) ``` Reconstructing the full 3,000 pixel input image reduced to 150. ``` # Plot the results fig, ax = plt.subplots(2, 10, figsize=(10, 2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i in range(10): ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r') ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r') ax[0, 0].set_ylabel('full-dim\ninput') ax[1, 0].set_ylabel('150-dim\nreconstruction'); ``` ## Feature selection ``` df = pd.read_csv('https://archive.ics.uci.edu/ml/' 'machine-learning-databases/wine/wine.data', header=None) df.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline'] df.head() X, y = df.iloc[:, 1:], df.iloc[:, 0] scaler = StandardScaler() X_std = scaler.fit_transform(X) pca=PCA() Xt = pca.fit_transform(X_std) pca.explained_variance_ratio_ ``` From the bar plot below, 6 features are important, until it reach 90% of variance (red curve). ``` plt.bar(range(1,14),pca.explained_variance_ratio_,label='Variance Explained') plt.step(range(1,14),np.cumsum(pca.explained_variance_ratio_),label='CumSum Variance Explained',c='r') plt.ylabel('Explained variance ratio') plt.xlabel('Principal component index') plt.legend(loc='best') plt.tight_layout() plt.show() ``` References: * https://jakevdp.github.io/PythonDataScienceHandbook/05.10-manifold-learning.html * https://github.com/dishaaagarwal/Dimensionality-Reduction-Techniques * Other resources: * https://www.ritchieng.com/machine-learning-dimensionality-reduction-feature-transform/ * https://medium.com/analytics-vidhya/implementing-pca-in-python-with-sklearn-4f757fb4429e
<a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a> # Components for modeling overland flow erosion *(G.E. Tucker, July 2021)* There are two related components that calculate erosion resulting from surface-water flow, a.k.a. overland flow: `DepthSlopeProductErosion` and `DetachmentLtdErosion`. They were originally created by Jordan Adams to work with the `OverlandFlow` component, which solves for water depth across the terrain. They are similar to the `StreamPowerEroder` and `FastscapeEroder` components in that they calculate erosion resulting from water flow across a topographic surface, but whereas these components require a flow-routing algorithm to create a list of node "receivers", the `DepthSlopeProductErosion` and `DetachmentLtdErosion` components only require a user-identified slope field together with an at-node depth or discharge field (respectively). ## `DepthSlopeProductErosion` This component represents the rate of erosion, $E$, by surface water flow as: $$E = k_e (\tau^a - \tau_c^a)$$ where $k_e$ is an erodibility coefficient (with dimensions of velocity per stress$^a$), $\tau$ is bed shear stress, $\tau_c$ is a minimum bed shear stress for any erosion to occur, and $a$ is a parameter that is commonly treated as unity. For steady, uniform flow, $$\tau = \rho g H S$$, with $\rho$ being fluid density, $g$ gravitational acceleration, $H$ local water depth, and $S$ the (postive-downhill) slope gradient (an approximation of the sine of the slope angle). The component uses a user-supplied slope field (at nodes) together with the water-depth field `surface_water__depth` to calculate $\tau$, and then the above equation to calculate $E$. The component will then modify the `topographic__elevation` field accordingly. If the user wishes to apply material uplift relative to baselevel, an `uplift_rate` parameter can be passed on initialization. We can learn more about this component by examining its internal documentation. To get an overview of the component, we can examine its *header docstring*: internal documentation provided in the form of a Python docstring that sits just below the class declaration in the source code. This text can be displayed as shown here: ``` from landlab.components import DepthSlopeProductErosion print(DepthSlopeProductErosion.__doc__) ``` A second useful source of internal documentation for this component is its *init docstring*: a Python docstring that describes the component's class `__init__` method. In Landlab, the init docstrings for components normally provide a list of that component's parameters. Here's how to display the init docstring: ``` print(DepthSlopeProductErosion.__init__.__doc__) ``` ### Example In this example, we load the topography of a small drainage basin, calculate a water-depth field by running overland flow over the topography using the `KinwaveImplicitOverlandFlow` component, and then calculating the resulting erosion. Note that in order to accomplish this, we need to identify which variable we wish to use for slope gradient. This is not quite as simple as it may sound. An easy way to define slope is as the slope between two adjacent grid nodes. But using this approach means that slope is defined on the grid *links* rathter than *nodes*. 
To calculate slope magnitude at *nodes*, we'll define a little function below that uses Landlab's `calc_grad_at_link` method to calculate gradients at grid links, then use the `map_link_vector_components_to_node` method to calculate the $x$ and $y$ vector components at each node. With that in hand, we just use the Pythagorean theorem to find the slope magnitude from its vector components. First, though, some imports we'll need: ``` import copy import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from landlab import imshow_grid from landlab.components import KinwaveImplicitOverlandFlow from landlab.grid.mappers import map_link_vector_components_to_node from landlab.io import read_esri_ascii def slope_magnitude_at_node(grid, elev): # calculate gradient in elevation at each link grad_at_link = grid.calc_grad_at_link(elev) # set the gradient to zero for any inactive links # (those attached to a closed-boundaries node at either end, # or connecting two boundary nodes of any type) grad_at_link[grid.status_at_link != grid.BC_LINK_IS_ACTIVE] = 0.0 # map slope vector components from links to their adjacent nodes slp_x, slp_y = map_link_vector_components_to_node(grid, grad_at_link) # use the Pythagorean theorem to calculate the slope magnitude # from the x and y components slp_mag = (slp_x * slp_x + slp_y * slp_y) ** 0.5 return slp_mag, slp_x, slp_y ``` (See [here](https://landlab.readthedocs.io/en/latest/reference/grid/gradients.html#landlab.grid.gradients.calc_grad_at_link) to learn how `calc_grad_at_link` works, and [here](https://landlab.readthedocs.io/en/latest/reference/grid/raster_mappers.html#landlab.grid.raster_mappers.map_link_vector_components_to_node_raster) to learn how `map_link_vector_components_to_node` works.) Next, define some parameters we'll need. To estimate the erodibility coefficient $k_e$, one source is: [http://milford.nserl.purdue.edu/weppdocs/comperod/](http://milford.nserl.purdue.edu/weppdocs/comperod/) which reports experiments in rill erosion on agricultural soils. Converting their data into $k_e$, its values are on the order of 1 to 10 $\times 10^{-6}$ (m / s Pa), with threshold ($\tau_c$) values on the order of a few Pa. ``` # Process parameters n = 0.1 # roughness coefficient, (s/m^(1/3)) dep_exp = 5.0 / 3.0 # depth exponent R = 72.0 # runoff rate, mm/hr k_e = 4.0e-6 # erosion coefficient (m/s)/(kg/ms^2) tau_c = 3.0 # erosion threshold shear stress, Pa # Run-control parameters rain_duration = 240.0 # duration of rainfall, s run_time = 480.0 # duration of run, s dt = 10.0 # time-step size, s dem_filename = "../hugo_site_filled.asc" # Derived parameters num_steps = int(run_time / dt) # set up arrays to hold discharge and time time_since_storm_start = np.arange(0.0, dt * (2 * num_steps + 1), dt) discharge = np.zeros(2 * num_steps + 1) ``` Read an example digital elevation model (DEM) into a Landlab grid and set up the boundaries so that water can only exit out the right edge, representing the watershed outlet. ``` # Read the DEM file as a grid with a 'topographic__elevation' field (grid, elev) = read_esri_ascii(dem_filename, name="topographic__elevation") # Configure the boundaries: valid right-edge nodes will be open; # all NODATA (= -9999) nodes will be closed. 
grid.status_at_node[grid.nodes_at_right_edge] = grid.BC_NODE_IS_FIXED_VALUE grid.status_at_node[np.isclose(elev, -9999.0)] = grid.BC_NODE_IS_CLOSED ``` Now we'll calculate the slope vector components and magnitude, and plot the vectors as quivers on top of a shaded image of the topography: ``` slp_mag, slp_x, slp_y = slope_magnitude_at_node(grid, elev) imshow_grid(grid, elev) plt.quiver(grid.x_of_node, grid.y_of_node, slp_x, slp_y) ``` Let's take a look at the slope magnitudes: ``` imshow_grid(grid, slp_mag, colorbar_label="Slope gradient (m/m)") ``` Now we're ready to instantiate a `KinwaveImplicitOverlandFlow` component, with a specified runoff rate and roughness: ``` # Instantiate the component olflow = KinwaveImplicitOverlandFlow( grid, runoff_rate=R, roughness=n, depth_exp=dep_exp ) ``` The `DepthSlopeProductErosion` component requires there to be a field called `slope_magnitude` that contains our slope-gradient values, so we will we will create this field and assign `slp_mag` to it (the `clobber` keyword says it's ok to overwrite this field if it already exists, which prevents generating an error message if you run this cell more than once): ``` grid.add_field("slope_magnitude", slp_mag, at="node", clobber=True) ``` Now we're ready to instantiate a `DepthSlopeProductErosion` component: ``` dspe = DepthSlopeProductErosion(grid, k_e=k_e, tau_crit=tau_c, slope="slope_magnitude") ``` Next, we'll make a copy of the starting terrain for later comparison, then run overland flow and erosion: ``` starting_elev = elev.copy() for i in range(num_steps): olflow.run_one_step(dt) dspe.run_one_step(dt) slp_mag[:], slp_x, slp_y = slope_magnitude_at_node(grid, elev) ``` We can visualize the instantaneous erosion rate at the end of the run, in m/s: ``` imshow_grid(grid, dspe._E, colorbar_label="erosion rate (m/s)") ``` We can also inspect the cumulative erosion during the event by differencing the before and after terrain: ``` imshow_grid(grid, starting_elev - elev, colorbar_label="cumulative erosion (m)") ``` Note that because this is a bumpy DEM, much of the erosion has occurred on (probably digital) steps in the channels. But we can see some erosion across the slopes as well. ## `DetachmentLtdErosion` This component is similar to `DepthSlopeProductErosion` except that it calculates erosion rate from discharge and slope rather than depth and slope. The vertical incision rate, $I$ (equivalent to $E$ in the above; here we are following the notation in the component's documentation) is: $$I = K Q^m S^n - I_c$$ where $K$ is an erodibility coefficient (with dimensions of velocity per discharge$^m$; specified by parameter `K_sp`), $Q$ is volumetric discharge, $I_c$ is a threshold with dimensions of velocity, and $m$ and $n$ are exponents. (In the erosion literature, the exponents are sometimes treated as empirical parameters, and sometimes set to particular values on theoretical grounds; here we'll just set them to unity.) The component uses the fields `surface_water__discharge` and `topographic__slope` for $Q$ and $S$, respectively. The component will modify the `topographic__elevation` field accordingly. If the user wishes to apply material uplift relative to baselevel, an `uplift_rate` parameter can be passed on initialization. 
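To get a feel for the magnitudes this rate law produces, here is a small stand-alone sketch; the parameter values are purely illustrative and are not the ones used in the example run below, and the clipping of negative rates to zero is this sketch's way of representing "no erosion below the threshold":

```
import numpy as np

# Illustrative parameter values (not from the notebook's example run)
K = 1.0e-7          # erodibility, (m/s)/(m^3/s) when m = n = 1
m_exp, n_exp = 1.0, 1.0
I_c = 1.0e-7        # threshold incision rate, m/s
S = 0.05            # slope gradient (m/m)

Q = np.array([1.0, 10.0, 100.0, 1000.0])              # discharge, m^3/s
I = np.maximum(K * Q**m_exp * S**n_exp - I_c, 0.0)    # incision rate, m/s
for q, i in zip(Q, I):
    print(f"Q = {q:7.1f} m^3/s  ->  I = {i:.2e} m/s")
```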
Here are the header and constructor docstrings: ``` from landlab.components import DetachmentLtdErosion print(DetachmentLtdErosion.__doc__) print(DetachmentLtdErosion.__init__.__doc__) ``` The example below uses the same approach as the previous example, but now using `DetachmentLtdErosion`. Note that the value for parameter $K$ (`K_sp`) is just a guess. Use of exponents $m=n=1$ implies the use of total stream power. ``` # Process parameters n = 0.1 # roughness coefficient, (s/m^(1/3)) dep_exp = 5.0 / 3.0 # depth exponent R = 72.0 # runoff rate, mm/hr K_sp = 1.0e-7 # erosion coefficient (m/s)/(m3/s) m_sp = 1.0 # discharge exponent n_sp = 1.0 # slope exponent I_c = 0.0001 # erosion threshold, m/s # Run-control parameters rain_duration = 240.0 # duration of rainfall, s run_time = 480.0 # duration of run, s dt = 10.0 # time-step size, s dem_filename = "../hugo_site_filled.asc" # Derived parameters num_steps = int(run_time / dt) # set up arrays to hold discharge and time time_since_storm_start = np.arange(0.0, dt * (2 * num_steps + 1), dt) discharge = np.zeros(2 * num_steps + 1) # Read the DEM file as a grid with a 'topographic__elevation' field (grid, elev) = read_esri_ascii(dem_filename, name="topographic__elevation") # Configure the boundaries: valid right-edge nodes will be open; # all NODATA (= -9999) nodes will be closed. grid.status_at_node[grid.nodes_at_right_edge] = grid.BC_NODE_IS_FIXED_VALUE grid.status_at_node[np.isclose(elev, -9999.0)] = grid.BC_NODE_IS_CLOSED slp_mag, slp_x, slp_y = slope_magnitude_at_node(grid, elev) grid.add_field("topographic__slope", slp_mag, at="node", clobber=True) # Instantiate the component olflow = KinwaveImplicitOverlandFlow( grid, runoff_rate=R, roughness=n, depth_exp=dep_exp ) dle = DetachmentLtdErosion( grid, K_sp=K_sp, m_sp=m_sp, n_sp=n_sp, entrainment_threshold=I_c ) starting_elev = elev.copy() for i in range(num_steps): olflow.run_one_step(dt) dle.run_one_step(dt) slp_mag[:], slp_x, slp_y = slope_magnitude_at_node(grid, elev) imshow_grid(grid, starting_elev - elev, colorbar_label="cumulative erosion (m)") ``` <hr> <small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small> <hr>
# Approximate q-learning In this notebook you will teach a lasagne neural network to do Q-learning. __Frameworks__ - we'll accept this homework in any deep learning framework. For example, it translates to TensorFlow almost line-to-line. However, we recommend you to stick to theano/lasagne unless you're certain about your skills in the framework of your choice. ``` %env THEANO_FLAGS='floatX=float32' import os if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0: !bash ../xvfb start %env DISPLAY=:1 import gym import numpy as np, pandas as pd import matplotlib.pyplot as plt %matplotlib inline env = gym.make("CartPole-v0") env.reset() n_actions = env.action_space.n state_dim = env.observation_space.shape plt.imshow(env.render("rgb_array")) ``` # Approximate (deep) Q-learning: building the network In this section we will build and train naive Q-learning with theano/lasagne First step is initializing input variables ``` import theano import theano.tensor as T #create input variables. We'll support multiple states at once current_states = T.matrix("states[batch,units]") actions = T.ivector("action_ids[batch]") rewards = T.vector("rewards[batch]") next_states = T.matrix("next states[batch,units]") is_end = T.ivector("vector[batch] where 1 means that session just ended") import lasagne from lasagne.layers import * #input layer l_states = InputLayer((None,)+state_dim) <Your architecture. Please start with a single-layer network> #output layer l_qvalues = DenseLayer(<previous_layer>,num_units=n_actions,nonlinearity=None) ``` #### Predicting Q-values for `current_states` ``` #get q-values for ALL actions in current_states predicted_qvalues = get_output(l_qvalues,{l_states:current_states}) #compiling agent's "GetQValues" function get_qvalues = <compile a function that takes current_states and returns predicted_qvalues> #select q-values for chosen actions predicted_qvalues_for_actions = predicted_qvalues[T.arange(actions.shape[0]),actions] ``` #### Loss function and `update` Here we write a function similar to `agent.update`. ``` #predict q-values for next states predicted_next_qvalues = get_output(l_qvalues,{l_states:<theano input with for states>}) #Computing target q-values under gamma = 0.99 target_qvalues_for_actions = <target Q-values using rewards and predicted_next_qvalues> #zero-out q-values at the end target_qvalues_for_actions = (1-is_end)*target_qvalues_for_actions #don't compute gradient over target q-values (consider constant) target_qvalues_for_actions = theano.gradient.disconnected_grad(target_qvalues_for_actions) #mean squared error loss function loss = <mean squared between target_qvalues_for_actions and predicted_qvalues_for_actions> #all network weights all_weights = get_all_params(l_qvalues,trainable=True) #network updates. Note the small learning rate (for stability) updates = lasagne.updates.sgd(loss,all_weights,learning_rate=1e-4) #Training function that resembles agent.update(state,action,reward,next_state) #with 1 more argument meaning is_end train_step = theano.function([current_states,actions,rewards,next_states,is_end], updates=updates) ``` ### Playing the game ``` epsilon = 0.25 #initial epsilon def generate_session(t_max=1000): """play env with approximate q-learning agent and train it at the same time""" total_reward = 0 s = env.reset() for t in range(t_max): #get action q-values from the network q_values = get_qvalues([s])[0] a = <sample action with epsilon-greedy strategy> new_s,r,done,info = env.step(a) #train agent one step. 
Note that we use one-element arrays instead of scalars #because that's what function accepts. train_step([s],[a],[r],[new_s],[done]) total_reward+=r s = new_s if done: break return total_reward for i in range(100): rewards = [generate_session() for _ in range(100)] #generate new sessions epsilon*=0.95 print ("mean reward:%.3f\tepsilon:%.5f"%(np.mean(rewards),epsilon)) if np.mean(rewards) > 300: print ("You Win!") break assert epsilon!=0, "Please explore environment" ``` ### Video ``` epsilon=0 #Don't forget to reset epsilon back to initial value if you want to go on training #record sessions import gym.wrappers env = gym.wrappers.Monitor(env,directory="videos",force=True) sessions = [generate_session() for _ in range(100)] env.close() #unwrap env = env.env.env #upload to gym #gym.upload("./videos/",api_key="<your_api_key>") #you'll need me later #Warning! If you keep seeing error that reads something like"DoubleWrapError", #run env=gym.make("CartPole-v0");env.reset(); #show video from IPython.display import HTML import os video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/"))) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices ```
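The epsilon-greedy choice and the target Q-values are deliberately left as blanks in the cells above, so the following is not the reference solution — just a rough, framework-agnostic sketch of the two missing ideas in plain NumPy (the assignment itself expects the target to be written symbolically with theano). `gamma=0.99` is taken from the comment in the update cell; the function names are purely illustrative.

```
import numpy as np

def epsilon_greedy_action(q_values, epsilon, n_actions):
    """Pick a uniformly random action with probability epsilon, else the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(q_values))

def q_learning_target(rewards, next_q_values, is_end, gamma=0.99):
    """One-step target r + gamma * max_a' Q(s', a'), zeroed where the session ended."""
    rewards = np.asarray(rewards, dtype=np.float32)
    is_end = np.asarray(is_end, dtype=np.float32)
    best_next = np.max(np.asarray(next_q_values), axis=1)
    return rewards + gamma * best_next * (1.0 - is_end)

# Example shapes: a batch of 2 transitions with 2 actions (CartPole).
q = np.array([0.1, 0.4])
print(epsilon_greedy_action(q, epsilon=0.25, n_actions=2))
print(q_learning_target([1., 1.], np.array([[0.2, 0.5], [0.3, 0.1]]), [0, 1]))
```

In the symbolic graph the same target is built from `rewards` and a max over the action axis of `predicted_next_qvalues`, multiplied by `(1 - is_end)` exactly as the update cell above already does.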
# Importing the libraries ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from sklearn.metrics import roc_curve, auc from sklearn.metrics import roc_auc_score,recall_score, precision_score, f1_score from sklearn.metrics import accuracy_score, confusion_matrix, classification_report, average_precision_score ``` # Load and Explore Data ``` dataset=pd.read_csv('weatherAUS.csv') dataset.head() dataset.describe() # find categorical variables categorical = [var for var in dataset.columns if dataset[var].dtype=='O'] print('There are {} categorical variables : \n'.format(len(categorical)), categorical) # view the categorical variables dataset[categorical].head() # check and print categorical variables containing missing values nullCategorical = [var for var in categorical if dataset[var].isnull().sum()!=0] print(dataset[nullCategorical].isnull().sum()) ``` Number of labels: cardinality The number of labels within a categorical variable is known as cardinality. A high number of labels within a variable is known as high cardinality. High cardinality may pose some serious problems in the machine learning model. So, I will check for high cardinality. ``` # check for cardinality in categorical variables for var in categorical: print(var, ' contains ', len(dataset[var].unique()), ' labels') # Feature Extraction dataset['Date'].dtypes # parse the dates, currently coded as strings, into datetime format dataset['Date'] = pd.to_datetime(dataset['Date']) dataset['Date'].dtypes # extract year from date dataset['Year'] = dataset['Date'].dt.year # extract month from date dataset['Month'] = dataset['Date'].dt.month # extract day from date dataset['Day'] = dataset['Date'].dt.day dataset.info() # drop the original Date variable dataset.drop('Date', axis=1, inplace = True) dataset.head() ``` ## Explore Categorical Variables ``` # Explore Location variable dataset.Location.unique() # check frequency distribution of values in Location variable dataset.Location.value_counts() # let's do One Hot Encoding of Location variable # get k-1 dummy variables after One Hot Encoding pd.get_dummies(dataset.Location, drop_first=True).head() # Explore WindGustDir variable dataset.WindGustDir.unique() # check frequency distribution of values in WindGustDir variable dataset.WindGustDir.value_counts() # let's do One Hot Encoding of WindGustDir variable # get k-1 dummy variables after One Hot Encoding # also add an additional dummy variable to indicate there was missing data pd.get_dummies(dataset.WindGustDir, drop_first=True, dummy_na=True).head() # sum the number of 1s per boolean variable over the rows of the dataset --> it will tell us how many observations we have for each category pd.get_dummies(dataset.WindGustDir, drop_first=True, dummy_na=True).sum(axis=0) # Explore WindDir9am variable dataset.WindDir9am.unique() dataset.WindDir9am.value_counts() pd.get_dummies(dataset.WindDir9am, drop_first=True, dummy_na=True).head() # sum the number of 1s per boolean variable over the rows of the dataset -- it will tell us how many observations we have for each category pd.get_dummies(dataset.WindDir9am, drop_first=True, dummy_na=True).sum(axis=0) # Explore WindDir3pm variable dataset['WindDir3pm'].unique() dataset['WindDir3pm'].value_counts() pd.get_dummies(dataset.WindDir3pm, drop_first=True, dummy_na=True).head() pd.get_dummies(dataset.WindDir3pm, drop_first=True, dummy_na=True).sum(axis=0) # Explore RainToday variable dataset['RainToday'].unique() dataset.RainToday.value_counts() 
pd.get_dummies(dataset.RainToday, drop_first=True, dummy_na=True).head() pd.get_dummies(dataset.RainToday, drop_first=True, dummy_na=True).sum(axis=0) ``` ## Explore Numerical Variables ``` # find numerical variables numerical = [var for var in dataset.columns if dataset[var].dtype!='O'] print('There are {} numerical variables : \n'.format(len(numerical)), numerical) # view the numerical variables dataset[numerical].head() # check missing values in numerical variables dataset[numerical].isnull().sum() # view summary statistics in numerical variables to check for outliers print(round(dataset[numerical].describe()),2) # plot box plot to check outliers plt.figure(figsize=(10,15)) plt.subplot(2, 2, 1) fig = sns.boxplot(y=dataset['Rainfall']) fig.set_ylabel('Rainfall') plt.subplot(2, 2, 2) fig = sns.boxplot(y=dataset["Evaporation"]) fig.set_ylabel('Evaporation') plt.subplot(2, 2, 3) fig = sns.boxplot(y=dataset['WindSpeed9am']) fig.set_ylabel('WindSpeed9am') plt.subplot(2, 2, 4) fig = sns.boxplot(y=dataset['WindSpeed3pm']) fig.set_ylabel('WindSpeed3pm') # plot histogram to check distribution plt.figure(figsize=(10,15)) plt.subplot(2, 2, 1) fig = dataset.Rainfall.hist(bins=10) fig.set_xlabel('Rainfall') fig.set_ylabel('RainTomorrow') plt.subplot(2, 2, 2) fig = dataset.Evaporation.hist(bins=10) fig.set_xlabel('Evaporation') fig.set_ylabel('RainTomorrow') plt.subplot(2, 2, 3) fig = dataset.WindSpeed9am.hist(bins=10) fig.set_xlabel('WindSpeed9am') fig.set_ylabel('RainTomorrow') plt.subplot(2, 2, 4) fig = dataset.WindSpeed3pm.hist(bins=10) fig.set_xlabel('WindSpeed3pm') fig.set_ylabel('RainTomorrow') # find outliers for Rainfall variable IQR = dataset.Rainfall.quantile(0.75) - dataset.Rainfall.quantile(0.25) Rainfall_Lower_fence = dataset.Rainfall.quantile(0.25) - (IQR * 3) Rainfall_Upper_fence = dataset.Rainfall.quantile(0.75) + (IQR * 3) print('Outliers are values < {lowerboundary} or > {upperboundary}'.format(lowerboundary=Rainfall_Lower_fence, upperboundary=Rainfall_Upper_fence)) print('Number of outliers are {}'. format(dataset[(dataset.Rainfall> Rainfall_Upper_fence) | (dataset.Rainfall< Rainfall_Lower_fence)]['Rainfall'].count())) # find outliers for Evaporation variable IQR = dataset.Evaporation.quantile(0.75) - dataset.Evaporation.quantile(0.25) Evaporation_Lower_fence = dataset.Evaporation.quantile(0.25) - (IQR * 3) Evaporation_Upper_fence = dataset.Evaporation.quantile(0.75) + (IQR * 3) print('Outliers are values < {lowerboundary} or > {upperboundary}'.format(lowerboundary=Evaporation_Lower_fence, upperboundary=Evaporation_Upper_fence)) print('Number of outliers are {}'. format(dataset[(dataset.Evaporation> Evaporation_Upper_fence) | (dataset.Evaporation< Evaporation_Lower_fence)]['Evaporation'].count())) # find outliers for WindSpeed9am variable IQR = dataset.WindSpeed9am.quantile(0.75) - dataset.WindSpeed9am.quantile(0.25) WindSpeed9am_Lower_fence = dataset.WindSpeed9am.quantile(0.25) - (IQR * 3) WindSpeed9am_Upper_fence = dataset.WindSpeed9am.quantile(0.75) + (IQR * 3) print('Outliers are values < {lowerboundary} or > {upperboundary}'.format(lowerboundary=WindSpeed9am_Lower_fence, upperboundary=WindSpeed9am_Upper_fence)) print('Number of outliers are {}'. 
format(dataset[(dataset.WindSpeed9am> WindSpeed9am_Upper_fence) | (dataset.WindSpeed9am< WindSpeed9am_Lower_fence)]['WindSpeed9am'].count())) # find outliers for WindSpeed3pm variable IQR = dataset.WindSpeed3pm.quantile(0.75) - dataset.WindSpeed3pm.quantile(0.25) WindSpeed3pm_Lower_fence = dataset.WindSpeed3pm.quantile(0.25) - (IQR * 3) WindSpeed3pm_Upper_fence = dataset.WindSpeed3pm.quantile(0.75) + (IQR * 3) print('Outliers are values < {lowerboundary} or > {upperboundary}'.format(lowerboundary=WindSpeed3pm_Lower_fence, upperboundary=WindSpeed3pm_Upper_fence)) print('Number of outliers are {}'. format(dataset[(dataset.WindSpeed3pm> WindSpeed3pm_Lower_fence) | (dataset.WindSpeed3pm< WindSpeed3pm_Upper_fence)]['WindSpeed3pm'].count())) def max_value(dataset, variable, top): return np.where(dataset[variable]>top, top, dataset[variable]) dataset['Rainfall'] = max_value(dataset, 'Rainfall', Rainfall_Upper_fence) dataset['Evaporation'] = max_value(dataset, 'Evaporation', Evaporation_Upper_fence) dataset['WindSpeed9am'] = max_value(dataset, 'WindSpeed9am', WindSpeed9am_Upper_fence) dataset['WindSpeed3pm'] = max_value(dataset, 'WindSpeed3pm', 57) print('Number of outliers are {}'. format(dataset[(dataset.Rainfall> Rainfall_Upper_fence) | (dataset.Rainfall< Rainfall_Lower_fence)]['Rainfall'].count())) print('Number of outliers are {}'. format(dataset[(dataset.Evaporation> Evaporation_Upper_fence) | (dataset.Evaporation< Evaporation_Lower_fence)]['Evaporation'].count())) print('Number of outliers are {}'. format(dataset[(dataset.WindSpeed9am> WindSpeed9am_Upper_fence) | (dataset.WindSpeed9am< WindSpeed9am_Lower_fence)]['WindSpeed9am'].count())) print('Number of outliers are {}'. format(dataset[(dataset.WindSpeed3pm> WindSpeed3pm_Lower_fence) | (dataset.WindSpeed3pm< WindSpeed3pm_Upper_fence)]['WindSpeed3pm'].count())) # Replace NaN with default values nullValues = [var for var in dataset.columns if dataset[var].isnull().sum()!=0] print(dataset[nullValues].isnull().sum()) categorical = [var for var in nullValues if dataset[var].dtype=='O'] from sklearn.impute import SimpleImputer categoricalImputer = SimpleImputer(missing_values=np.nan,strategy='constant') categoricalImputer.fit(dataset[categorical]) dataset[categorical]=categoricalImputer.transform(dataset[categorical]) print(dataset.head()) numerical = [var for var in dataset.columns if dataset[var].dtype!='O'] from sklearn.impute import SimpleImputer numericalImputer = SimpleImputer(missing_values=np.nan,strategy='mean') numericalImputer.fit(dataset[numerical]) dataset[numerical]=numericalImputer.transform(dataset[numerical]) print(dataset.head()) ``` # Split data for model ``` x = dataset.drop(['RainTomorrow'], axis=1) # get all row data expect RainTomorrow y = dataset['RainTomorrow'] # get the RainTomorrow column depentant variable data for all rows print(x.head()) print(y[:10]) ``` # Encoding categorical data ``` #encoding independent variable x = pd.get_dummies(x) print(x.head()) ## Encoding dependent variable # use LabelEncoder to replace purchased (dependent variable) with 0 and 1 from sklearn.preprocessing import LabelEncoder y= LabelEncoder().fit_transform(y) print(y[:10]) ``` # Splitting the dataset into training and test set ``` from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.3,random_state = 0) # func returns train and test data. 
It takes dataset and then split size test_size =0.3 means 30% data is for test and rest for training and random_state print(x_train.head()) print(x_test.head()) print(y_train[:10]) print(y_test[:10]) ``` # Feature scaling ``` from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() x_train= scaler.fit_transform(x_train) x_test = scaler.transform(x_test) print(x_train[:10,:]) print(x_test[:10,:]) ``` # Build Model ``` from sklearn.linear_model import LogisticRegression classifier = LogisticRegression(solver='liblinear', random_state=0) classifier.fit(x_train,y_train) #predicting the test set results y_pred = classifier.predict(x_test) ``` # Evaluate Model ``` cm = confusion_matrix(y_test,y_pred) print(cm) cr = classification_report(y_test,y_pred) print(cr) accuracy_score(y_test,y_pred) average_precision= average_precision_score(y_test,y_pred) print(average_precision) recall_score(y_test,y_pred) precision_score(y_test,y_pred) f1_score(y_test,y_pred) from sklearn.metrics import precision_recall_curve from sklearn.metrics import plot_precision_recall_curve disp = plot_precision_recall_curve(classifier, x_test, y_test) disp.ax_.set_title('2-class Precision-Recall curve: ' 'AP={0:0.2f}'.format(average_precision)) ```
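`roc_curve`, `auc` and `roc_auc_score` are imported at the top of this notebook but never used. As a minimal sketch of how they could round out the evaluation — assuming `classifier`, `x_test` and `y_test` from the cells above are still in scope — the ROC curve can be plotted from the predicted probabilities:

```
# ROC curve for the fitted classifier.
y_scores = classifier.predict_proba(x_test)[:, 1]   # probability of class 1 (rain tomorrow)

fpr, tpr, thresholds = roc_curve(y_test, y_scores)
roc_auc = auc(fpr, tpr)
print('ROC AUC: {:.3f}'.format(roc_auc))

plt.figure(figsize=(6, 6))
plt.plot(fpr, tpr, label='ROC curve (AUC = {:.3f})'.format(roc_auc))
plt.plot([0, 1], [0, 1], 'k--', label='chance level')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve - RainTomorrow classifier')
plt.legend(loc='lower right')
plt.show()
```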
``` # default_exp callback.PredictionDynamics ``` # PredictionDynamics > Callback used to visualize model predictions during training. This is an implementation created by Ignacio Oguiza ([email protected]) based on a [blog post](http://localhost:8888/?token=83bca9180c34e1c8991886445942499ee8c1e003bc0491d0) by Andrej Karpathy I read some time ago that I really liked. One of the things he mentioned was this: >"**visualize prediction dynamics**. I like to visualize model predictions on a fixed test batch during the course of training. The “dynamics” of how these predictions move will give you incredibly good intuition for how the training progresses. Many times it is possible to feel the network “struggle” to fit your data if it wiggles too much in some way, revealing instabilities. Very low or very high learning rates are also easily noticeable in the amount of jitter." A. Karpathy ``` #export from fastai.callback.all import * from tsai.imports import * # export class PredictionDynamics(Callback): order, run_valid = 65, True def __init__(self, show_perc=1., figsize=(10,6), alpha=.3, size=30, color='lime', cmap='gist_rainbow', normalize=False, sensitivity=None, specificity=None): """ Args: show_perc: percent of samples from the valid set that will be displayed. Default: 1 (all). You can reduce it if the number is too high and the chart is too busy. alpha: level of transparency. Default:.3. 1 means no transparency. figsize: size of the chart. You may want to expand it if too many classes. size: size of each sample in the chart. Default:30. You may need to decrease it a bit if too many classes/ samples. color: color used in regression plots. cmap: color map used in classification plots. normalize: flag to normalize histograms displayed in binary classification. sensitivity: (aka recall or True Positive Rate) if you pass a float between 0. and 1. the sensitivity threshold will be plotted in the chart. Only used in binary classification. specificity: (or True Negative Rate) if you pass a float between 0. and 1. it will be plotted in the chart. Only used in binary classification. The red line in classification tasks indicate the average probability of true class. 
""" store_attr() def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") if not self.run: return self.cat = True if (hasattr(self.dls, "c") and self.dls.c > 1) else False if self.cat: self.binary = self.dls.c == 2 if self.show_perc != 1: valid_size = len(self.dls.valid.dataset) self.show_idxs = np.random.choice(valid_size, int(round(self.show_perc * valid_size)), replace=False) # Prepare ground truth container self.y_true = [] def before_epoch(self): # Prepare empty pred container in every epoch self.y_pred = [] def after_pred(self): if self.training: return # Get y_true in epoch 0 if self.epoch == 0: self.y_true.extend(self.y.cpu().flatten().numpy()) # Gather y_pred for every batch if self.cat: if self.binary: y_pred = F.softmax(self.pred, -1)[:, 1].reshape(-1, 1).cpu() else: y_pred = torch.gather(F.softmax(self.pred, -1), -1, self.y.reshape(-1, 1).long()).cpu() else: y_pred = self.pred.cpu() self.y_pred.extend(y_pred.flatten().numpy()) def after_epoch(self): # Ground truth if self.epoch == 0: self.y_true = np.array(self.y_true) if self.show_perc != 1: self.y_true = self.y_true[self.show_idxs] self.y_bounds = (np.min(self.y_true), np.max(self.y_true)) self.min_x_bounds, self.max_x_bounds = np.min(self.y_true), np.max(self.y_true) self.y_pred = np.array(self.y_pred) if self.show_perc != 1: self.y_pred = self.y_pred[self.show_idxs] if self.cat: neg_thr = None pos_thr = None if self.specificity is not None: inp0 = self.y_pred[self.y_true == 0] neg_thr = np.sort(inp0)[-int(len(inp0) * (1 - self.specificity))] if self.sensitivity is not None: inp1 = self.y_pred[self.y_true == 1] pos_thr = np.sort(inp1)[-int(len(inp1) * self.sensitivity)] self.update_graph(self.y_pred, self.y_true, neg_thr=neg_thr, pos_thr=pos_thr) else: # Adjust bounds during validation self.min_x_bounds = min(self.min_x_bounds, np.min(self.y_pred)) self.max_x_bounds = max(self.max_x_bounds, np.max(self.y_pred)) x_bounds = (self.min_x_bounds, self.max_x_bounds) self.update_graph(self.y_pred, self.y_true, x_bounds=x_bounds, y_bounds=self.y_bounds) def update_graph(self, y_pred, y_true, x_bounds=None, y_bounds=None, neg_thr=None, pos_thr=None): if not hasattr(self, 'graph_fig'): self.df_out = display("", display_id=True) if self.cat: self._cl_names = self.dls.vocab self._classes = L(self.dls.vocab.o2i.values()) self._n_classes = len(self._classes) if self.binary: self.bins = np.linspace(0, 1, 101) else: _cm = plt.get_cmap(self.cmap) self._color = [_cm(1. 
* c/self._n_classes) for c in range(1, self._n_classes + 1)][::-1] self._h_vals = np.linspace(-.5, self._n_classes - .5, self._n_classes + 1)[::-1] self._rand = [] for i, c in enumerate(self._classes): self._rand.append(.5 * (np.random.rand(np.sum(y_true == c)) - .5)) self.graph_fig, self.graph_ax = plt.subplots(1, figsize=self.figsize) self.graph_out = display("", display_id=True) self.graph_ax.clear() if self.cat: if self.binary: self.graph_ax.hist(y_pred[y_true == 0], bins=self.bins, density=self.normalize, color='red', label=self._cl_names[0], edgecolor='black', alpha=self.alpha) self.graph_ax.hist(y_pred[y_true == 1], bins=self.bins, density=self.normalize, color='blue', label=self._cl_names[1], edgecolor='black', alpha=self.alpha) self.graph_ax.axvline(.5, lw=1, ls='--', color='gray') if neg_thr is not None: self.graph_ax.axvline(neg_thr, lw=2, ls='--', color='red', label=f'specificity={(self.specificity):.3f}') if pos_thr is not None: self.graph_ax.axvline(pos_thr, lw=2, ls='--', color='blue', label=f'sensitivity={self.sensitivity:.3f}') self.graph_ax.set_xlabel(f'probability of class {self._cl_names[1]}', fontsize=12) self.graph_ax.legend() else: for i, c in enumerate(self._classes): self.graph_ax.scatter(y_pred[y_true == c], y_true[y_true == c] + self._rand[i], color=self._color[i], edgecolor='black', alpha=self.alpha, lw=.5, s=self.size) self.graph_ax.vlines(np.mean(y_pred[y_true == c]), i - .5, i + .5, color='r') self.graph_ax.vlines(.5, min(self._h_vals), max(self._h_vals), lw=.5) self.graph_ax.hlines(self._h_vals, 0, 1, lw=.5) self.graph_ax.set_ylim(min(self._h_vals), max(self._h_vals)) self.graph_ax.set_yticks(self._classes) self.graph_ax.set_yticklabels(self._cl_names) self.graph_ax.set_ylabel('true class', fontsize=12) self.graph_ax.set_xlabel('probability of true class', fontsize=12) self.graph_ax.set_xlim(0, 1) self.graph_ax.set_xticks(np.linspace(0, 1, 11)) self.graph_ax.grid(axis='x', color='gainsboro', lw=.2) else: self.graph_ax.scatter(y_pred, y_true, color=self.color, edgecolor='black', alpha=self.alpha, lw=.5, s=self.size) self.graph_ax.set_xlim(*x_bounds) self.graph_ax.set_ylim(*y_bounds) self.graph_ax.plot([*x_bounds], [*x_bounds], color='gainsboro') self.graph_ax.set_xlabel('y_pred', fontsize=12) self.graph_ax.set_ylabel('y_true', fontsize=12) self.graph_ax.grid(color='gainsboro', lw=.2) self.graph_ax.set_title(f'Prediction Dynamics \nepoch: {self.epoch + 1}/{self.n_epoch}') self.df_out.update(pd.DataFrame(np.stack(self.learn.recorder.values)[-1].reshape(1,-1), columns=self.learn.recorder.metric_names[1:-1], index=[self.epoch])) self.graph_out.update(self.graph_ax.figure) if self.epoch == self.n_epoch - 1: plt.close(self.graph_ax.figure) from tsai.basics import * from tsai.models.InceptionTime import * dsid = 'NATOPS' X, y, splits = get_UCR_data(dsid, split_data=False) check_data(X, y, splits, False) tfms = [None, [Categorize()]] batch_tfms = [TSStandardize(by_var=True)] dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms) learn = ts_learner(dls, InceptionTime, metrics=accuracy, cbs=PredictionDynamics()) learn.fit_one_cycle(2, 3e-3) #hide from tsai.imports import * from tsai.export import * nb_name = get_nb_name() # nb_name = "064_callback.PredictionDynamics.ipynb" create_scripts(nb_name); ```
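For larger or noisier validation sets, the callback's own arguments can declutter the chart. A short usage sketch reusing `dls` and `InceptionTime` from the cell above; the particular values (`show_perc=0.3`, `alpha=0.5`, two epochs) are arbitrary illustrations, not recommendations:

```
# Display only 30% of the validation samples, with a larger figure and more opaque points.
cb = PredictionDynamics(show_perc=0.3, figsize=(12, 8), alpha=0.5, size=20)
learn = ts_learner(dls, InceptionTime, metrics=accuracy, cbs=cb)
learn.fit_one_cycle(2, 3e-3)
```

Because `show_perc` subsamples once in `before_fit`, the same validation points are tracked across every epoch, which is what keeps the prediction dynamics readable.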
# Quickstart A quick introduction on how to use the OQuPy package to compute the dynamics of a quantum system that is possibly strongly coupled to a structured environment. We illustrate this by applying the TEMPO method to the strongly coupled spin boson model. **Contents:** * Example - The spin boson model * 1. The model and its parameters * 2. Create system, correlations and bath objects * 3. TEMPO computation First, let's import OQuPy and some other packages we are going to use ``` import sys sys.path.insert(0,'..') import oqupy import numpy as np import matplotlib.pyplot as plt ``` and check what version of tempo we are using. ``` oqupy.__version__ ``` Let's also import some shorthands for the spin Pauli operators and density matrices. ``` sigma_x = oqupy.operators.sigma("x") sigma_y = oqupy.operators.sigma("y") sigma_z = oqupy.operators.sigma("z") up_density_matrix = oqupy.operators.spin_dm("z+") down_density_matrix = oqupy.operators.spin_dm("z-") ``` ------------------------------------------------- ## Example - The spin boson model As a first example let's try to reconstruct one of the lines in figure 2a of [Strathearn2018] ([Nat. Comm. 9, 3322 (2018)](https://doi.org/10.1038/s41467-018-05617-3) / [arXiv:1711.09641v3](https://arxiv.org/abs/1711.09641)). In this example we compute the time evolution of a spin which is strongly coupled to an ohmic bath (spin-boson model). Before we go through this step by step below, let's have a brief look at the script that will do the job - just to have an idea where we are going: ``` Omega = 1.0 omega_cutoff = 5.0 alpha = 0.3 system = oqupy.System(0.5 * Omega * sigma_x) correlations = oqupy.PowerLawSD(alpha=alpha, zeta=1, cutoff=omega_cutoff, cutoff_type='exponential') bath = oqupy.Bath(0.5 * sigma_z, correlations) tempo_parameters = oqupy.TempoParameters(dt=0.1, dkmax=30, epsrel=10**(-4)) dynamics = oqupy.tempo_compute(system=system, bath=bath, initial_state=up_density_matrix, start_time=0.0, end_time=15.0, parameters=tempo_parameters) t, s_z = dynamics.expectations(0.5*sigma_z, real=True) plt.plot(t, s_z, label=r'$\alpha=0.3$') plt.xlabel(r'$t\,\Omega$') plt.ylabel(r'$<S_z>$') plt.legend() ``` ### 1. The model and its parameters We consider a system Hamiltonian $$ H_{S} = \frac{\Omega}{2} \hat{\sigma}_x \mathrm{,}$$ a bath Hamiltonian $$ H_{B} = \sum_k \omega_k \hat{b}^\dagger_k \hat{b}_k \mathrm{,}$$ and an interaction Hamiltonian $$ H_{I} = \frac{1}{2} \hat{\sigma}_z \sum_k \left( g_k \hat{b}^\dagger_k + g^*_k \hat{b}_k \right) \mathrm{,}$$ where $\hat{\sigma}_i$ are the Pauli operators, and the $g_k$ and $\omega_k$ are such that the spectral density $J(\omega)$ is $$ J(\omega) = \sum_k |g_k|^2 \delta(\omega - \omega_k) = 2 \, \alpha \, \omega \, \exp\left(-\frac{\omega}{\omega_\mathrm{cutoff}}\right) \mathrm{.} $$ Also, let's assume the initial density matrix of the spin is the up state $$ \rho(0) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} $$ and the bath is initially at zero temperature. For the numerical simulation it is advisable to choose a characteristic frequency and express all other physical parameters in terms of this frequency. Here, we choose $\Omega$ for this and write: * $\Omega = 1.0 \Omega$ * $\omega_c = 5.0 \Omega$ * $\alpha = 0.3$ ``` Omega = 1.0 omega_cutoff = 5.0 alpha = 0.3 ``` ### 2. 
Create system, correlations and bath objects #### System $$ H_{S} = \frac{\Omega}{2} \hat{\sigma}_x \mathrm{,}$$ ``` system = oqupy.System(0.5 * Omega * sigma_x) ``` #### Correlations $$ J(\omega) = 2 \, \alpha \, \omega \, \exp\left(-\frac{\omega}{\omega_\mathrm{cutoff}}\right) $$ Because the spectral density is of the standard power-law form, $$ J(\omega) = 2 \alpha \frac{\omega^\zeta}{\omega_c^{\zeta-1}} X(\omega,\omega_c) $$ with $\zeta=1$ and $X$ of the type ``'exponential'`` we define the spectral density with: ``` correlations = oqupy.PowerLawSD(alpha=alpha, zeta=1, cutoff=omega_cutoff, cutoff_type='exponential') ``` #### Bath The bath couples with the operator $\frac{1}{2}\hat{\sigma}_z$ to the system. ``` bath = oqupy.Bath(0.5 * sigma_z, correlations) ``` ### 3. TEMPO computation Now, that we have the system and the bath objects ready we can compute the dynamics of the spin starting in the up state, from time $t=0$ to $t=5\,\Omega^{-1}$ ``` dynamics_1 = oqupy.tempo_compute(system=system, bath=bath, initial_state=up_density_matrix, start_time=0.0, end_time=5.0, tolerance=0.01) ``` and plot the result: ``` t_1, z_1 = dynamics_1.expectations(0.5*sigma_z, real=True) plt.plot(t_1, z_1, label=r'$\alpha=0.3$') plt.xlabel(r'$t\,\Omega$') plt.ylabel(r'$<S_z>$') plt.legend() ``` Yay! This looks like the plot in figure 2a [Strathearn2018]. Let's have a look at the above warning. It said: ``` WARNING: Estimating parameters for TEMPO calculation. No guarantie that resulting TEMPO calculation converges towards the correct dynamics! Please refere to the TEMPO documentation and check convergence by varying the parameters for TEMPO manually. ``` We got this message because we didn't tell the package what parameters to use for the TEMPO computation, but instead only specified a `tolerance`. The package tries it's best by implicitly calling the function `oqupy.guess_tempo_parameters()` to find parameters that are appropriate for the spectral density and system objects given. #### TEMPO Parameters There are **three key parameters** to a TEMPO computation: * `dt` - Length of a time step $\delta t$ - It should be small enough such that a trotterisation between the system Hamiltonian and the environment it valid, and the environment auto-correlation function is reasonably well sampled. * `dkmax` - Number of time steps $K \in \mathbb{N}$ - It must be large enough such that $\delta t \times K$ is larger than the neccessary memory time $\tau_\mathrm{cut}$. * `epsrel` - The maximal relative error $\epsilon_\mathrm{rel}$ in the singular value truncation - It must be small enough such that the numerical compression (using tensor network algorithms) does not truncate relevant correlations. To choose the right set of initial parameters, we recommend to first use the `oqupy.guess_tempo_parameters()` function and then check with the helper function `oqupy.helpers.plot_correlations_with_parameters()` whether it satisfies the above requirements: ``` parameters = oqupy.guess_tempo_parameters(system=system, bath=bath, start_time=0.0, end_time=5.0, tolerance=0.01) print(parameters) fig, ax = plt.subplots(1,1) oqupy.helpers.plot_correlations_with_parameters(bath.correlations, parameters, ax=ax) ``` In this plot you see the real and imaginary part of the environments auto-correlation as a function of the delay time $\tau$ and the sampling of it corresponding the the chosen parameters. The spacing and the number of sampling points is given by `dt` and `dkmax` respectively. 
We can see that the auto-correlation function is close to zero for delay times larger than approximately $2 \Omega^{-1}$ and that the sampling points follow the curve reasonably well. Thus this is a reasonable set of parameters.

We can choose a set of parameters by hand and bundle them into a `TempoParameters` object,

```
tempo_parameters = oqupy.TempoParameters(dt=0.1, dkmax=30, epsrel=10**(-4), name="my rough parameters")
print(tempo_parameters)
```

and check again with the helper function:

```
fig, ax = plt.subplots(1,1)
oqupy.helpers.plot_correlations_with_parameters(bath.correlations, tempo_parameters, ax=ax)
```

We could feed this object into the `oqupy.tempo_compute()` function to get the dynamics of the system. However, instead of that, we can split up the work that `oqupy.tempo_compute()` does into several steps, which allows us to resume a computation to get later system dynamics without having to start over. For this we start with creating a `Tempo` object:

```
tempo = oqupy.Tempo(system=system,
                    bath=bath,
                    parameters=tempo_parameters,
                    initial_state=up_density_matrix,
                    start_time=0.0)
```

We can start by computing the dynamics up to time $5.0\,\Omega^{-1}$,

```
tempo.compute(end_time=5.0)
```

then get and plot the dynamics of expectation values,

```
dynamics_2 = tempo.get_dynamics()
plt.plot(*dynamics_2.expectations(0.5*sigma_z, real=True), label=r'$\alpha=0.3$')
plt.xlabel(r'$t\,\Omega$')
plt.ylabel(r'$<S_z>$')
plt.legend()
```

then continue the computation to $15.0\,\Omega^{-1}$,

```
tempo.compute(end_time=15.0)
```

and then again get and plot the dynamics of expectation values.

```
dynamics_2 = tempo.get_dynamics()
plt.plot(*dynamics_2.expectations(0.5*sigma_z, real=True), label=r'$\alpha=0.3$')
plt.xlabel(r'$t\,\Omega$')
plt.ylabel(r'$<S_z>$')
plt.legend()
```

Finally, we note: to validate the accuracy of the result **it is vital to check the convergence of such a simulation by varying all three computational parameters!** For this we recommend repeating the same simulation with slightly "better" parameters (smaller `dt`, larger `dkmax`, smaller `epsrel`) and to consider the difference of the results as an estimate of the upper bound on the accuracy of the simulation.

-------------------------------------------------
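To make that convergence check concrete, here is a rough sketch that repeats the computation with tighter parameters and compares the final $\langle S_z \rangle$ value. It assumes `system`, `bath`, `up_density_matrix` and `sigma_z` from above are still defined; the "finer" parameter values are only an example of "slightly better", not tuned recommendations.

```
def final_sz(parameters, end_time=5.0):
    """Run TEMPO with the given parameters and return the last <S_z> value."""
    dynamics = oqupy.tempo_compute(system=system,
                                   bath=bath,
                                   initial_state=up_density_matrix,
                                   start_time=0.0,
                                   end_time=end_time,
                                   parameters=parameters)
    _, s_z = dynamics.expectations(0.5 * sigma_z, real=True)
    return s_z[-1]

rough = oqupy.TempoParameters(dt=0.10, dkmax=30, epsrel=10**(-4))
finer = oqupy.TempoParameters(dt=0.05, dkmax=60, epsrel=10**(-5))

sz_rough = final_sz(rough)
sz_finer = final_sz(finer)

# The difference gives a rough upper-bound estimate of the error of the coarser run.
print("final <S_z>: rough = {:.5f}, finer = {:.5f}, |difference| = {:.2e}".format(
    sz_rough, sz_finer, abs(sz_rough - sz_finer)))
```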
## Exercise 3 In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. It should happen in less than 20 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your layers. I've started the code for you -- you need to finish it! When 99.8% accuracy has been hit, you should print out the string "Reached 99.8% accuracy so cancelling training!" ``` import tensorflow as tf from os import path, getcwd, chdir # DO NOT CHANGE THE LINE BELOW. If you are developing in a local # environment, then grab mnist.npz from the Coursera Jupyter Notebook # and place it inside a local folder and edit the path to that location path = f"{getcwd()}/../tmp2/mnist.npz" config = tf.ConfigProto() config.gpu_options.allow_growth = True sess = tf.Session(config=config) # GRADED FUNCTION: train_mnist_conv def train_mnist_conv(): # Please write your code only where you are indicated. # please do not remove model fitting inline comments. # YOUR CODE STARTS HERE class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): # Quick solution for "older" tensorflow to get rid of TypeError: Use 'acc' instead of 'accuracy' # The version of tf used here is 1.14.0 (old) if(logs.get('acc') >= 0.998): print('\nReached 99.8% accuracy so cancelling training!') self.model.stop_training = True # YOUR CODE ENDS HERE mnist = tf.keras.datasets.mnist (training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path) # YOUR CODE STARTS HERE training_images=training_images.reshape(60000, 28, 28, 1) training_images=training_images / 255.0 callbacks = myCallback() # YOUR CODE ENDS HERE model = tf.keras.models.Sequential([ # YOUR CODE STARTS HERE tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') # YOUR CODE ENDS HERE ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # model fitting history = model.fit( # YOUR CODE STARTS HERE training_images, training_labels, epochs=30, callbacks=[callbacks] # YOUR CODE ENDS HERE ) # model fitting return history.epoch, history.history['acc'][-1] _, _ = train_mnist_conv() # Now click the 'Submit Assignment' button above. # Once that is complete, please run the following two cells to save your work and close the notebook %%javascript <!-- Save the notebook --> IPython.notebook.save_checkpoint(); %%javascript IPython.notebook.session.delete(); window.onbeforeunload = null setTimeout(function() { window.close(); }, 1000); ```
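The comment above notes that the grading environment runs TF 1.14, where the metric is logged as `'acc'`; on newer Keras versions it appears as `'accuracy'`, so `logs.get('acc')` returns `None` and the comparison raises a `TypeError`. If you run this outside the grader, a version-tolerant variant of the callback could look like this (same 99.8% stopping rule; the class name is made up here):

```
class AccuracyThresholdCallback(tf.keras.callbacks.Callback):
    """Stop training once training accuracy reaches a threshold, whether the
    metric is logged as 'acc' (TF 1.x) or 'accuracy' (newer Keras)."""

    def __init__(self, threshold=0.998):
        super(AccuracyThresholdCallback, self).__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        acc = logs.get('accuracy', logs.get('acc'))
        if acc is not None and acc >= self.threshold:
            print('\nReached 99.8% accuracy so cancelling training!')
            self.model.stop_training = True
```

It drops in unchanged: pass `callbacks=[AccuracyThresholdCallback()]` to `model.fit`.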
## AI for Medicine Course 1 Week 1 lecture exercises <a name="densenet"></a> # Densenet In this week's assignment, you'll be using a pre-trained Densenet model for image classification. Densenet is a convolutional network where each layer is connected to all other layers that are deeper in the network - The first layer is connected to the 2nd, 3rd, 4th etc. - The second layer is connected to the 3rd, 4th, 5th etc. Like this: <img src="densenet.png" alt="U-net Image" width="400" align="middle"/> For a detailed explanation of Densenet, check out the source of the image above, a paper by Gao Huang et al. 2018 called [Densely Connected Convolutional Networks](https://arxiv.org/pdf/1608.06993.pdf). The cells below are set up to provide an exploration of the Keras densenet implementation that you'll be using in the assignment. Run these cells to gain some insight into the network architecture. ``` # Import Densenet from Keras from keras.applications.densenet import DenseNet121 from keras.layers import Dense, GlobalAveragePooling2D from keras.models import Model from keras import backend as K ``` For your work in the assignment, you'll be loading a set of pre-trained weights to reduce training time. ``` # Create the base pre-trained model base_model = DenseNet121(weights='./nih/densenet.hdf5', include_top=False); ``` View a summary of the model ``` # Print the model summary base_model.summary() # Print out the first five layers layers_l = base_model.layers print("First 5 layers") layers_l[0:5] # Print out the last five layers print("Last 5 layers") layers_l[-6:-1] # Get the convolutional layers and print the first 5 conv2D_layers = [layer for layer in base_model.layers if str(type(layer)).find('Conv2D') > -1] print("The first five conv2D layers") conv2D_layers[0:5] # Print out the total number of convolutional layers print(f"There are {len(conv2D_layers)} convolutional layers") # Print the number of channels in the input print("The input has 3 channels") base_model.input # Print the number of output channels print("The output has 1024 channels") x = base_model.output x # Add a global spatial average pooling layer x_pool = GlobalAveragePooling2D()(x) x_pool # Define a set of five class labels to use as an example labels = ['Emphysema', 'Hernia', 'Mass', 'Pneumonia', 'Edema'] n_classes = len(labels) print(f"In this example, you want your model to identify {n_classes} classes") # Add a logistic layer the same size as the number of classes you're trying to predict predictions = Dense(n_classes, activation="sigmoid")(x_pool) print(f"Predictions have {n_classes} units, one for each class") predictions # Create an updated model model = Model(inputs=base_model.input, outputs=predictions) # Compile the model model.compile(optimizer='adam', loss='categorical_crossentropy') # (You'll customize the loss function in the assignment!) ``` #### This has been a brief exploration of the Densenet architecture you'll use in this week's graded assignment!
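A common next step when starting from pre-trained weights — not shown above because the graded assignment defines its own training setup — is to freeze the DenseNet base and train only the new classification head. A hedged sketch using the `base_model` and `predictions` objects defined earlier:

```
# Freeze every layer that came from the pre-trained DenseNet base.
for layer in base_model.layers:
    layer.trainable = False

# Rebuild and recompile so the frozen state is picked up.
frozen_model = Model(inputs=base_model.input, outputs=predictions)
frozen_model.compile(optimizer='adam', loss='categorical_crossentropy')

# Only the new Dense head should remain trainable now.
print("Trainable weight tensors:", len(frozen_model.trainable_weights))
print("Frozen weight tensors:   ", len(frozen_model.non_trainable_weights))
```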
[**Blueprints for Text Analysis Using Python**](https://github.com/blueprints-for-text-analytics-python/blueprints-text) Jens Albrecht, Sidharth Ramachandran, Christian Winkler **If you like the book or the code examples here, please leave a friendly comment on [Amazon.com](https://www.amazon.com/Blueprints-Text-Analytics-Using-Python/dp/149207408X)!** <img src="../rating.png" width="100"/> # Chapter 5:<div class='tocSkip'/> # Feature Engineering and Syntactic Similarity ## Remark<div class='tocSkip'/> The code in this notebook differs slightly from the printed book. Several layout and formatting commands, like `figsize` to control figure size or subplot commands are removed in the book. All of this is done to simplify the code in the book and put the focus on the important parts instead of formatting. ## Setup<div class='tocSkip'/> Set directory locations. If working on Google Colab: copy files and install required libraries. ``` import sys, os ON_COLAB = 'google.colab' in sys.modules if ON_COLAB: GIT_ROOT = 'https://github.com/blueprints-for-text-analytics-python/blueprints-text/raw/master' os.system(f'wget {GIT_ROOT}/ch05/setup.py') %run -i setup.py ``` ## Load Python Settings<div class="tocSkip"/> Common imports, defaults for formatting in Matplotlib, Pandas etc. ``` %run "$BASE_DIR/settings.py" %reload_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'png' ``` # Data preparation ``` sentences = ["It was the best of times", "it was the worst of times", "it was the age of wisdom", "it was the age of foolishness"] tokenized_sentences = [[t for t in sentence.split()] for sentence in sentences] vocabulary = set([w for s in tokenized_sentences for w in s]) import pandas as pd [[w, i] for i,w in enumerate(vocabulary)] ``` # One-hot by hand ``` def onehot_encode(tokenized_sentence): return [1 if w in tokenized_sentence else 0 for w in vocabulary] onehot = [onehot_encode(tokenized_sentence) for tokenized_sentence in tokenized_sentences] for (sentence, oh) in zip(sentences, onehot): print("%s: %s" % (oh, sentence)) pd.DataFrame(onehot, columns=vocabulary) sim = [onehot[0][i] & onehot[1][i] for i in range(0, len(vocabulary))] sum(sim) import numpy as np np.dot(onehot[0], onehot[1]) np.dot(onehot, onehot[1]) ``` ## Out of vocabulary ``` onehot_encode("the age of wisdom is the best of times".split()) onehot_encode("John likes to watch movies. Mary likes movies too.".split()) ``` ## document term matrix ``` onehot ``` ## similarities ``` import numpy as np np.dot(onehot, np.transpose(onehot)) ``` # scikit learn one-hot vectorization ``` from sklearn.preprocessing import MultiLabelBinarizer lb = MultiLabelBinarizer() lb.fit([vocabulary]) lb.transform(tokenized_sentences) ``` # CountVectorizer ``` from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer() more_sentences = sentences + ["John likes to watch movies. 
Mary likes movies too.", "Mary also likes to watch football games."] pd.DataFrame(more_sentences) cv.fit(more_sentences) print(cv.get_feature_names()) dt = cv.transform(more_sentences) dt pd.DataFrame(dt.toarray(), columns=cv.get_feature_names()) from sklearn.metrics.pairwise import cosine_similarity cosine_similarity(dt[0], dt[1]) len(more_sentences) pd.DataFrame(cosine_similarity(dt, dt)) ``` # TF/IDF ``` from sklearn.feature_extraction.text import TfidfTransformer tfidf = TfidfTransformer() tfidf_dt = tfidf.fit_transform(dt) pd.DataFrame(tfidf_dt.toarray(), columns=cv.get_feature_names()) pd.DataFrame(cosine_similarity(tfidf_dt, tfidf_dt)) headlines = pd.read_csv(ABCNEWS_FILE, parse_dates=["publish_date"]) headlines.head() from sklearn.feature_extraction.text import TfidfVectorizer tfidf = TfidfVectorizer() dt = tfidf.fit_transform(headlines["headline_text"]) dt dt.data.nbytes %%time cosine_similarity(dt[0:10000], dt[0:10000]) ``` ## Stopwords ``` from spacy.lang.en.stop_words import STOP_WORDS as stopwords print(len(stopwords)) tfidf = TfidfVectorizer(stop_words=stopwords) dt = tfidf.fit_transform(headlines["headline_text"]) dt ``` ## min_df ``` tfidf = TfidfVectorizer(stop_words=stopwords, min_df=2) dt = tfidf.fit_transform(headlines["headline_text"]) dt tfidf = TfidfVectorizer(stop_words=stopwords, min_df=.0001) dt = tfidf.fit_transform(headlines["headline_text"]) dt ``` ## max_df ``` tfidf = TfidfVectorizer(stop_words=stopwords, max_df=0.1) dt = tfidf.fit_transform(headlines["headline_text"]) dt tfidf = TfidfVectorizer(max_df=0.1) dt = tfidf.fit_transform(headlines["headline_text"]) dt ``` ## n-grams ``` tfidf = TfidfVectorizer(stop_words=stopwords, ngram_range=(1,2), min_df=2) dt = tfidf.fit_transform(headlines["headline_text"]) print(dt.shape) print(dt.data.nbytes) tfidf = TfidfVectorizer(stop_words=stopwords, ngram_range=(1,3), min_df=2) dt = tfidf.fit_transform(headlines["headline_text"]) print(dt.shape) print(dt.data.nbytes) ``` ## Lemmas ``` from tqdm.auto import tqdm import spacy nlp = spacy.load("en") nouns_adjectives_verbs = ["NOUN", "PROPN", "ADJ", "ADV", "VERB"] for i, row in tqdm(headlines.iterrows(), total=len(headlines)): doc = nlp(str(row["headline_text"])) headlines.at[i, "lemmas"] = " ".join([token.lemma_ for token in doc]) headlines.at[i, "nav"] = " ".join([token.lemma_ for token in doc if token.pos_ in nouns_adjectives_verbs]) headlines.head() tfidf = TfidfVectorizer(stop_words=stopwords) dt = tfidf.fit_transform(headlines["lemmas"].map(str)) dt tfidf = TfidfVectorizer(stop_words=stopwords) dt = tfidf.fit_transform(headlines["nav"].map(str)) dt ``` ## remove top 10,000 ``` top_10000 = pd.read_csv("https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english.txt", header=None) tfidf = TfidfVectorizer(stop_words=set(top_10000.iloc[:,0].values)) dt = tfidf.fit_transform(headlines["nav"].map(str)) dt tfidf = TfidfVectorizer(ngram_range=(1,2), stop_words=set(top_10000.iloc[:,0].values), min_df=2) dt = tfidf.fit_transform(headlines["nav"].map(str)) dt ``` ## Finding document most similar to made-up document ``` tfidf = TfidfVectorizer(stop_words=stopwords, min_df=2) dt = tfidf.fit_transform(headlines["lemmas"].map(str)) dt made_up = tfidf.transform(["australia and new zealand discuss optimal apple size"]) sim = cosine_similarity(made_up, dt) sim[0] headlines.iloc[np.argsort(sim[0])[::-1][0:5]][["publish_date", "lemmas"]] ``` # Finding the most similar documents ``` # there are "test" headlines in the corpus stopwords.add("test") tfidf 
= TfidfVectorizer(stop_words=stopwords, ngram_range=(1,2), min_df=2, norm='l2') dt = tfidf.fit_transform(headlines["headline_text"]) ``` ### Timing Cosine Similarity ``` %%time cosine_similarity(dt[0:10000], dt[0:10000], dense_output=False) %%time r = cosine_similarity(dt[0:10000], dt[0:10000]) r[r > 0.9999] = 0 print(np.argmax(r)) %%time r = cosine_similarity(dt[0:10000], dt[0:10000], dense_output=False) r[r > 0.9999] = 0 print(np.argmax(r)) ``` ### Timing Dot-Product ``` %%time r = np.dot(dt[0:10000], np.transpose(dt[0:10000])) r[r > 0.9999] = 0 print(np.argmax(r)) ``` ## Batch ``` %%time batch = 10000 max_sim = 0.0 max_a = None max_b = None for a in range(0, dt.shape[0], batch): for b in range(0, a+batch, batch): print(a, b) #r = np.dot(dt[a:a+batch], np.transpose(dt[b:b+batch])) r = cosine_similarity(dt[a:a+batch], dt[b:b+batch], dense_output=False) # eliminate identical vectors # by setting their similarity to np.nan which gets sorted out r[r > 0.9999] = 0 sim = r.max() if sim > max_sim: # argmax returns a single value which we have to # map to the two dimensions (max_a, max_b) = np.unravel_index(np.argmax(r), r.shape) # adjust offsets in corpus (this is a submatrix) max_a += a max_b += b max_sim = sim print(max_a, max_b) print(max_sim) pd.set_option('max_colwidth', -1) headlines.iloc[[max_a, max_b]][["publish_date", "headline_text"]] ``` # Finding most related words ``` tfidf_word = TfidfVectorizer(stop_words=stopwords, min_df=1000) dt_word = tfidf_word.fit_transform(headlines["headline_text"]) r = cosine_similarity(dt_word.T, dt_word.T) np.fill_diagonal(r, 0) voc = tfidf_word.get_feature_names() size = r.shape[0] # quadratic for index in np.argsort(r.flatten())[::-1][0:40]: a = int(index/size) b = index%size if a > b: # avoid repetitions print('"%s" related to "%s"' % (voc[a], voc[b])) ```
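A related convenience is looking up the terms most similar to a single word instead of scanning the whole word-word matrix. A small sketch reusing `r` and `voc` from the cell above; the query `"police"` is just a guess at a frequent headline term (with `min_df=1000` it is very likely in the vocabulary, and the function returns an empty list if not):

```
def most_related(word, topn=5):
    """Return the topn vocabulary terms with the highest cosine similarity to `word`."""
    if word not in voc:
        return []
    idx = voc.index(word)
    order = np.argsort(r[idx])[::-1][:topn]
    return [(voc[j], round(float(r[idx][j]), 3)) for j in order]

print(most_related("police"))
```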
# PixelCNN **Author:** [ADMoreau](https://github.com/ADMoreau)<br> **Date created:** 2020/05/17<br> **Last modified:** 2020/05/23<br> **Description:** PixelCNN implemented in Keras. ## Introduction PixelCNN is a generative model proposed in 2016 by van den Oord et al. (reference: [Conditional Image Generation with PixelCNN Decoders](https://arxiv.org/abs/1606.05328)). It is designed to generate images (or other data types) iteratively, from an input vector where the probability distribution of prior elements dictates the probability distribution of later elements. In the following example, images are generated in this fashion, pixel-by-pixel, via a masked convolution kernel that only looks at data from previously generated pixels (origin at the top left) to generate later pixels. During inference, the output of the network is used as a probability ditribution from which new pixel values are sampled to generate a new image (here, with MNIST, the pixels values are either black or white). ``` import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tqdm import tqdm ``` ## Getting the Data ``` # Model / data parameters num_classes = 10 input_shape = (28, 28, 1) n_residual_blocks = 5 # The data, split between train and test sets (x, _), (y, _) = keras.datasets.mnist.load_data() # Concatenate all of the images together data = np.concatenate((x, y), axis=0) # Round all pixel values less than 33% of the max 256 value to 0 # anything above this value gets rounded up to 1 so that all values are either # 0 or 1 data = np.where(data < (0.33 * 256), 0, 1) data = data.astype(np.float32) ``` ## Create two classes for the requisite Layers for the model ``` # The first layer is the PixelCNN layer. This layer simply # builds on the 2D convolutional layer, but includes masking. class PixelConvLayer(layers.Layer): def __init__(self, mask_type, **kwargs): super(PixelConvLayer, self).__init__() self.mask_type = mask_type self.conv = layers.Conv2D(**kwargs) def build(self, input_shape): # Build the conv2d layer to initialize kernel variables self.conv.build(input_shape) # Use the initialized kernel to create the mask kernel_shape = self.conv.kernel.get_shape() self.mask = np.zeros(shape=kernel_shape) self.mask[: kernel_shape[0] // 2, ...] = 1.0 self.mask[kernel_shape[0] // 2, : kernel_shape[1] // 2, ...] = 1.0 if self.mask_type == "B": self.mask[kernel_shape[0] // 2, kernel_shape[1] // 2, ...] = 1.0 def call(self, inputs): self.conv.kernel.assign(self.conv.kernel * self.mask) return self.conv(inputs) # Next, we build our residual block layer. # This is just a normal residual block, but based on the PixelConvLayer. 
class ResidualBlock(keras.layers.Layer): def __init__(self, filters, **kwargs): super(ResidualBlock, self).__init__(**kwargs) self.conv1 = keras.layers.Conv2D( filters=filters, kernel_size=1, activation="relu" ) self.pixel_conv = PixelConvLayer( mask_type="B", filters=filters // 2, kernel_size=3, activation="relu", padding="same", ) self.conv2 = keras.layers.Conv2D( filters=filters, kernel_size=1, activation="relu" ) def call(self, inputs): x = self.conv1(inputs) x = self.pixel_conv(x) x = self.conv2(x) return keras.layers.add([inputs, x]) ``` ## Build the model based on the original paper ``` inputs = keras.Input(shape=input_shape) x = PixelConvLayer( mask_type="A", filters=128, kernel_size=7, activation="relu", padding="same" )(inputs) for _ in range(n_residual_blocks): x = ResidualBlock(filters=128)(x) for _ in range(2): x = PixelConvLayer( mask_type="B", filters=128, kernel_size=1, strides=1, activation="relu", padding="valid", )(x) out = keras.layers.Conv2D( filters=1, kernel_size=1, strides=1, activation="sigmoid", padding="valid" )(x) pixel_cnn = keras.Model(inputs, out) adam = keras.optimizers.Adam(learning_rate=0.0005) pixel_cnn.compile(optimizer=adam, loss="binary_crossentropy") pixel_cnn.summary() pixel_cnn.fit( x=data, y=data, batch_size=128, epochs=50, validation_split=0.1, verbose=2 ) ``` ## Demonstration The PixelCNN cannot generate the full image at once, and must instead generate each pixel in order, append the last generated pixel to the current image, and feed the image back into the model to repeat the process. ``` from IPython.display import Image, display # Create an empty array of pixels. batch = 4 pixels = np.zeros(shape=(batch,) + (pixel_cnn.input_shape)[1:]) batch, rows, cols, channels = pixels.shape # Iterate the pixels because generation has to be done sequentially pixel by pixel. for row in tqdm(range(rows)): for col in range(cols): for channel in range(channels): # Feed the whole array and retrieving the pixel value probabilities for the next # pixel. probs = pixel_cnn.predict(pixels)[:, row, col, channel] # Use the probabilities to pick pixel values and append the values to the image # frame. pixels[:, row, col, channel] = tf.math.ceil( probs - tf.random.uniform(probs.shape) ) def deprocess_image(x): # Stack the single channeled black and white image to rgb values. x = np.stack((x, x, x), 2) # Undo preprocessing x *= 255.0 # Convert to uint8 and clip to the valid range [0, 255] x = np.clip(x, 0, 255).astype("uint8") return x # Iterate the generated images and plot them with matplotlib. for i, pic in enumerate(pixels): keras.preprocessing.image.save_img( "generated_image_{}.png".format(i), deprocess_image(np.squeeze(pic, -1)) ) display(Image("generated_image_0.png")) display(Image("generated_image_1.png")) display(Image("generated_image_2.png")) display(Image("generated_image_3.png")) ```
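To convince yourself that the masking does what the introduction describes (type "A" hides the centre pixel, type "B" lets it through), you can instantiate a small `PixelConvLayer` and inspect its mask directly. A hedged sketch — the 5x5 kernel and the manual `build()` call are only for poking at the layer, not part of the model above:

```
# Build a tiny masked layer just to materialise and print its mask.
probe = PixelConvLayer(mask_type="A", filters=1, kernel_size=5, padding="same")
probe.build((None, 28, 28, 1))

# Drop the input/output channel axes; 1 = kernel weight kept, 0 = masked out.
print(probe.mask[:, :, 0, 0])
# Expected: rows above the centre are all 1, the centre row is 1 up to (and, for
# type "B", including) the centre position, and everything after is 0.
```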
# "Poleval 2021 through wav2vec2" > "Trying for pronunciation recovery" - toc: false - branch: master - comments: true - hidden: true - categories: [wav2vec2, poleval, colab] ``` %%capture !pip install gdown !gdown https://drive.google.com/uc?id=1b6MyyqgA9D1U7DX3Vtgda7f9ppkxjCXJ %%capture !tar zxvf poleval_wav.train.tar.gz && rm poleval_wav.train.tar.gz %%capture !pip install librosa webrtcvad #collapse-hide # VAD wrapper is taken from PyTorch Speaker Verification: # https://github.com/HarryVolek/PyTorch_Speaker_Verification # Copyright (c) 2019, HarryVolek # License: BSD-3-Clause # based on https://github.com/wiseman/py-webrtcvad/blob/master/example.py # Copyright (c) 2016 John Wiseman # License: MIT import collections import contextlib import numpy as np import sys import librosa import wave import webrtcvad #from hparam import hparam as hp sr = 16000 def read_wave(path, sr): """Reads a .wav file. Takes the path, and returns (PCM audio data, sample rate). Assumes sample width == 2 """ with contextlib.closing(wave.open(path, 'rb')) as wf: num_channels = wf.getnchannels() assert num_channels == 1 sample_width = wf.getsampwidth() assert sample_width == 2 sample_rate = wf.getframerate() assert sample_rate in (8000, 16000, 32000, 48000) pcm_data = wf.readframes(wf.getnframes()) data, _ = librosa.load(path, sr) assert len(data.shape) == 1 assert sr in (8000, 16000, 32000, 48000) return data, pcm_data class Frame(object): """Represents a "frame" of audio data.""" def __init__(self, bytes, timestamp, duration): self.bytes = bytes self.timestamp = timestamp self.duration = duration def frame_generator(frame_duration_ms, audio, sample_rate): """Generates audio frames from PCM audio data. Takes the desired frame duration in milliseconds, the PCM data, and the sample rate. Yields Frames of the requested duration. """ n = int(sample_rate * (frame_duration_ms / 1000.0) * 2) offset = 0 timestamp = 0.0 duration = (float(n) / sample_rate) / 2.0 while offset + n < len(audio): yield Frame(audio[offset:offset + n], timestamp, duration) timestamp += duration offset += n def vad_collector(sample_rate, frame_duration_ms, padding_duration_ms, vad, frames): """Filters out non-voiced audio frames. Given a webrtcvad.Vad and a source of audio frames, yields only the voiced audio. Uses a padded, sliding window algorithm over the audio frames. When more than 90% of the frames in the window are voiced (as reported by the VAD), the collector triggers and begins yielding audio frames. Then the collector waits until 90% of the frames in the window are unvoiced to detrigger. The window is padded at the front and back to provide a small amount of silence or the beginnings/endings of speech around the voiced frames. Arguments: sample_rate - The audio sample rate, in Hz. frame_duration_ms - The frame duration in milliseconds. padding_duration_ms - The amount to pad the window, in milliseconds. vad - An instance of webrtcvad.Vad. frames - a source of audio frames (sequence or generator). Returns: A generator that yields PCM audio data. """ num_padding_frames = int(padding_duration_ms / frame_duration_ms) # We use a deque for our sliding window/ring buffer. ring_buffer = collections.deque(maxlen=num_padding_frames) # We have two states: TRIGGERED and NOTTRIGGERED. We start in the # NOTTRIGGERED state. 
triggered = False voiced_frames = [] for frame in frames: is_speech = vad.is_speech(frame.bytes, sample_rate) if not triggered: ring_buffer.append((frame, is_speech)) num_voiced = len([f for f, speech in ring_buffer if speech]) # If we're NOTTRIGGERED and more than 90% of the frames in # the ring buffer are voiced frames, then enter the # TRIGGERED state. if num_voiced > 0.9 * ring_buffer.maxlen: triggered = True start = ring_buffer[0][0].timestamp # We want to yield all the audio we see from now until # we are NOTTRIGGERED, but we have to start with the # audio that's already in the ring buffer. for f, s in ring_buffer: voiced_frames.append(f) ring_buffer.clear() else: # We're in the TRIGGERED state, so collect the audio data # and add it to the ring buffer. voiced_frames.append(frame) ring_buffer.append((frame, is_speech)) num_unvoiced = len([f for f, speech in ring_buffer if not speech]) # If more than 90% of the frames in the ring buffer are # unvoiced, then enter NOTTRIGGERED and yield whatever # audio we've collected. if num_unvoiced > 0.9 * ring_buffer.maxlen: triggered = False yield (start, frame.timestamp + frame.duration) ring_buffer.clear() voiced_frames = [] # If we have any leftover voiced audio when we run out of input, # yield it. if voiced_frames: yield (start, frame.timestamp + frame.duration) def VAD_chunk(aggressiveness, path): audio, byte_audio = read_wave(path, sr) vad = webrtcvad.Vad(int(aggressiveness)) frames = frame_generator(20, byte_audio, sr) frames = list(frames) times = vad_collector(sr, 20, 200, vad, frames) speech_times = [] speech_segs = [] for i, time in enumerate(times): start = np.round(time[0],decimals=2) end = np.round(time[1],decimals=2) j = start while j + .4 < end: end_j = np.round(j+.4,decimals=2) speech_times.append((j, end_j)) speech_segs.append(audio[int(j*sr):int(end_j*sr)]) j = end_j else: speech_times.append((j, end)) speech_segs.append(audio[int(j*sr):int(end*sr)]) return speech_times, speech_segs #collapse-hide # Based on code from PyTorch Speaker Verification: # https://github.com/HarryVolek/PyTorch_Speaker_Verification # Copyright (c) 2019, HarryVolek # Additions Copyright (c) 2021, Jim O'Regan # License: MIT import numpy as np # wav2vec2's max duration is 40 seconds, using 39 by default # to be a little safer def vad_concat(times, segs, max_duration=39.0): """ Concatenate continuous times and their segments, where the end time of a segment is the same as the start time of the next Parameters: times: list of tuple (start, end) segs: list of segments (audio frames) max_duration: maximum duration of the resulting concatenated segments; the kernel size of wav2vec2 is 40 seconds, so the default max_duration is 39, to ensure the resulting list of segments will fit Returns: concat_times: list of tuple (start, end) concat_segs: list of segments (audio frames) """ absolute_maximum=40.0 if max_duration > absolute_maximum: raise Exception('`max_duration` {:.2f} larger than kernel size (40 seconds)'.format(max_duration)) # we take 0.0 to mean "don't concatenate" do_concat = (max_duration != 0.0) concat_seg = [] concat_times = [] seg_concat = segs[0] time_concat = times[0] for i in range(0, len(times)-1): can_concat = (times[i+1][1] - time_concat[0]) < max_duration if time_concat[1] == times[i+1][0] and do_concat and can_concat: seg_concat = np.concatenate((seg_concat, segs[i+1])) time_concat = (time_concat[0], times[i+1][1]) else: concat_seg.append(seg_concat) seg_concat = segs[i+1] concat_times.append(time_concat) time_concat = times[i+1] else: 
concat_seg.append(seg_concat) concat_times.append(time_concat) return concat_times, concat_seg def make_dataset(concat_times, concat_segs): starts = [s[0] for s in concat_times] ends = [s[1] for s in concat_times] return {'start': starts, 'end': ends, 'speech': concat_segs} %%capture !pip install datasets from datasets import Dataset def vad_to_dataset(path, max_duration): t,s = VAD_chunk(3, path) if max_duration > 0.0: ct, cs = vad_concat(t, s, max_duration) dset = make_dataset(ct, cs) else: dset = make_dataset(t, s) return Dataset.from_dict(dset) %%capture !pip install -q transformers %%capture from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC # load model and tokenizer processor = Wav2Vec2Processor.from_pretrained("mbien/wav2vec2-large-xlsr-polish") model = Wav2Vec2ForCTC.from_pretrained("mbien/wav2vec2-large-xlsr-polish") model.to("cuda") def speech_file_to_array_fn(batch): import torchaudio speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = speech_array[0].numpy() batch["sampling_rate"] = sampling_rate batch["target_text"] = batch["sentence"] return batch def evaluate(batch): import torch inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch import json def process_wave(filename, duration): import json dataset = vad_to_dataset(filename, duration) result = dataset.map(evaluate, batched=True, batch_size=16) speechless = result.remove_columns(['speech']) d=speechless.to_dict() tlog = list() for i in range(0, len(d['end']) - 1): out = dict() out['start'] = d['start'][i] out['end'] = d['end'][i] out['transcript'] = d['pred_strings'][i] tlog.append(out) with open('{}.tlog'.format(filename), 'w') as outfile: json.dump(tlog, outfile) import glob for f in glob.glob('/content/poleval_final_dataset_wav/train/*.wav'): print(f) process_wave(f, 10.0) !find . -name '*tlog'|zip poleval-train.zip -@ ```
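As a quick sanity check on the output format, each `.tlog` can be read back before zipping. A minimal sketch, assuming at least one file has already been written by `process_wave` into the train directory used above:

```
import glob
import json

tlogs = sorted(glob.glob('/content/poleval_final_dataset_wav/train/*.tlog'))
print(len(tlogs), "transcription logs found")

if tlogs:
    with open(tlogs[0]) as f:
        segments = json.load(f)
    # Each entry holds the VAD segment boundaries (in seconds) and its transcript.
    for seg in segments[:3]:
        print("{:7.2f}s - {:7.2f}s  {}".format(seg['start'], seg['end'], seg['transcript']))
```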
true
code
0.734365
null
null
null
null
# Tune a CNN on MNIST This tutorial walks through using Ax to tune two hyperparameters (learning rate and momentum) for a PyTorch CNN on the MNIST dataset trained using SGD with momentum. ``` import torch import numpy as np from ax.plot.contour import plot_contour from ax.plot.trace import optimization_trace_single_method from ax.service.managed_loop import optimize from ax.utils.notebook.plotting import render, init_notebook_plotting from ax.utils.tutorials.cnn_utils import load_mnist, train, evaluate, CNN init_notebook_plotting() torch.manual_seed(12345) dtype = torch.float device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ``` ## 1. Load MNIST data First, we need to load the MNIST data and partition it into training, validation, and test sets. Note: this will download the dataset if necessary. ``` BATCH_SIZE = 512 train_loader, valid_loader, test_loader = load_mnist(batch_size=BATCH_SIZE) ``` ## 2. Define function to optimize In this tutorial, we want to optimize classification accuracy on the validation set as a function of the learning rate and momentum. The function takes in a parameterization (set of parameter values), computes the classification accuracy, and returns a dictionary of metric name ('accuracy') to a tuple with the mean and standard error. ``` def train_evaluate(parameterization): net = CNN() net = train(net=net, train_loader=train_loader, parameters=parameterization, dtype=dtype, device=device) return evaluate( net=net, data_loader=valid_loader, dtype=dtype, device=device, ) ``` ## 3. Run the optimization loop Here, we set the bounds on the learning rate and momentum and set the parameter space for the learning rate to be on a log scale. ``` best_parameters, values, experiment, model = optimize( parameters=[ {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True}, {"name": "momentum", "type": "range", "bounds": [0.0, 1.0]}, ], evaluation_function=train_evaluate, objective_name='accuracy', ) ``` We can introspect the optimal parameters and their outcomes: ``` best_parameters means, covariances = values means, covariances ``` ## 4. Plot response surface Contour plot showing classification accuracy as a function of the two hyperparameters. The black squares show points that we have actually run, notice how they are clustered in the optimal region. ``` render(plot_contour(model=model, param_x='lr', param_y='momentum', metric_name='accuracy')) ``` ## 5. Plot best objective as function of the iteration Show the model accuracy improving as we identify better hyperparameters. ``` # `plot_single_method` expects a 2-d array of means, because it expects to average means from multiple # optimization runs, so we wrap out best objectives array in another array. best_objectives = np.array([[trial.objective_mean*100 for trial in experiment.trials.values()]]) best_objective_plot = optimization_trace_single_method( y=np.maximum.accumulate(best_objectives, axis=1), title="Model performance vs. # of iterations", ylabel="Classification Accuracy, %", ) render(best_objective_plot) ``` ## 6. Train CNN with best hyperparameters and evaluate on test set Note that the resulting accuracy on the test set might not be exactly the same as the maximum accuracy achieved on the evaluation set throughout optimization. 
``` data = experiment.fetch_data() df = data.df best_arm_name = df.arm_name[df['mean'] == df['mean'].max()].values[0] best_arm = experiment.arms_by_name[best_arm_name] best_arm combined_train_valid_set = torch.utils.data.ConcatDataset([ train_loader.dataset.dataset, valid_loader.dataset.dataset, ]) combined_train_valid_loader = torch.utils.data.DataLoader( combined_train_valid_set, batch_size=BATCH_SIZE, shuffle=True, ) net = train( net=CNN(), train_loader=combined_train_valid_loader, parameters=best_arm.parameters, dtype=dtype, device=device, ) test_accuracy = evaluate( net=net, data_loader=test_loader, dtype=dtype, device=device, ) print(f"Classification Accuracy (test set): {round(test_accuracy*100, 2)}%") ```
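Since the optimization loop above can take a while, it may be worth persisting the tuned hyperparameters together with the final test accuracy. The snippet below is only a sketch; the output file name is arbitrary and not part of the tutorial:

```
import json

# Cast to plain floats so the values are JSON-serializable even if they
# come back as numpy scalars.
results = {
    "best_parameters": {k: float(v) for k, v in best_parameters.items()},
    "test_accuracy": float(test_accuracy),
}
with open("best_hyperparameters.json", "w") as f:
    json.dump(results, f, indent=2)
print(json.dumps(results, indent=2))
```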
true
code
0.795181
null
null
null
null
``` #import sys #!{sys.executable} -m pip install --user alerce ``` # light_transient_matching ## Matches DESI observations to ALERCE and DECAM ledger objects This code predominately takes in data from the ALERCE and DECAM ledger brokers and identifies DESI observations within 2 arcseconds of those objects, suspected to be transients. It then prepares those matches to be fed into our [CNN code](https://github.com/MatthewPortman/timedomain/blob/master/cronjobs/transient_matching/modified_cnn_classify_data_gradCAM.ipynb) which attempts to identify the class of these transients. The main matching algorithm uses astropy's **match_coordinate_sky** to match 1-to-1 targets with the objects from the two ledgers. Wrapping functions handle data retrieval from both the ledgers as well as from DESI and prepare this data to be fed into **match_coordinate_sky**. Since ALERCE returns a small enough (pandas) dataframe, we do not need to precondition the input much. However, DECAM has many more objects to match so we use a two-stage process: an initial 2 degree match to tile RA's/DEC's and a second closer 1 arcsecond match to individual targets. As the code is a work in progress, please forgive any redundancies. We are attempting to merge all of the above (neatly) into the same two or three matching/handling functions! ``` from astropy.io import fits from astropy.table import Table from astropy import units as u from astropy.time import Time from astropy.coordinates import SkyCoord, match_coordinates_sky, Angle import matplotlib.pyplot as plt import numpy as np import pandas as pd from glob import glob import sys import sqlite3 import os from desispec.io import read_spectra, write_spectra from desispec.spectra import Spectra # Some handy global variables global db_filename db_filename = '/global/cfs/cdirs/desi/science/td/daily-search/transients_search.db' global exposure_path exposure_path = os.environ["DESI_SPECTRO_REDUX"] global color_band color_band = "r" global minDist minDist = {} global today today = Time.now() ``` ## Necessary functions ``` # Grabbing the file names def all_candidate_filenames(transient_dir: str): # This function grabs the names of all input files in the transient directory and does some python string manipulation # to grab the names of the input files with full path and the filenames themselves. try: filenames_read = glob(transient_dir + "/*.fits") # Hardcoding is hopefully a temporary measure. except: print("Could not grab/find any fits in the transient spectra directory:") print(transient_dir) filenames_read = [] # Just in case #filenames_out = [] # Just in case raise SystemExit("Exiting.") #else: #filenames_out = [s.split(".")[0] for s in filenames_read] #filenames_out = [s.split("/")[-1] for s in filenames_read] #filenames_out = [s.replace("in", "out") for s in filenames_out] return filenames_read #, filenames_out #path_to_transient = "/global/cfs/cdirs/desi/science/td/daily-search/desitrip/out" #print(all_candidate_filenames(path_to_transient)[1]) # From ALeRCE_ledgermaker https://github.com/alercebroker/alerce_client # I have had trouble importing this before so I copy, paste it, and modify it here. # I also leave these imports here because why not? 
import requests from alerce.core import Alerce from alerce.exceptions import APIError alerce_client = Alerce() # Choose cone_radius of diameter of tile so that, whatever coord I choose for ra_in, dec_in, we cover the whole tile def access_alerts(lastmjd_in=[], ra_in = None, dec_in = None, cone_radius = 3600*4.01, classifier='stamp_classifier', class_names=['SN', 'AGN']): if type(class_names) is not list: raise TypeError('Argument `class_names` must be a list.') dataframes = [] if not lastmjd_in: date_range = 60 lastmjd_in = [Time.now().mjd - 60, Time.now().mjd] print('Defaulting to a lastmjd range of', str(date_range), 'days before today.') #print("lastmjd:", lastmjd_in) for class_name in class_names: data = alerce_client.query_objects(classifier=classifier, class_name=class_name, lastmjd=lastmjd_in, ra = ra_in, dec = dec_in, radius = cone_radius, # in arcseconds page_size = 5000, order_by='oid', order_mode='DESC', format='pandas') #if lastmjd is not None: # select = data['lastmjd'] >= lastmjd # data = data[select] dataframes.append(data) #print(pd.concat(dataframes).columns) return pd.concat(dataframes).sort_values(by = 'lastmjd') # From https://github.com/desihub/timedomain/blob/master/too_ledgers/decam_TAMU_ledgermaker.ipynb # Function to grab decam data from bs4 import BeautifulSoup import json import requests def access_decam_data(url, overwrite=False): """Download reduced DECam transient data from Texas A&M. Cache the data to avoid lengthy and expensive downloads. Parameters ---------- url : str URL for accessing the data. overwrite : bool Download new data and overwrite the cached data. Returns ------- decam_transients : pandas.DataFrame Table of transient data. """ folders = url.split('/') thedate = folders[-1] if len(folders[-1]) > 0 else folders[-2] outfile = '{}.csv'.format(thedate) if os.path.exists(outfile) and not overwrite: # Access cached data. decam_transients = pd.read_csv(outfile) else: # Download the DECam data index. # A try/except is needed because the datahub SSL certificate isn't playing well with URL requests. try: decam_dets = requests.get(url, auth=('decam','tamudecam')).text except: requests.packages.urllib3.disable_warnings(requests.packages.urllib3.exceptions.InsecureRequestWarning) decam_dets = requests.get(url, verify=False, auth=('decam','tamudecam')).text # Convert transient index page into scrapable data using BeautifulSoup. soup = BeautifulSoup(decam_dets) # Loop through transient object summary JSON files indexed in the main transient page. # Download the JSONs and dump the info into a Pandas table. decam_transients = None j = 0 for a in soup.find_all('a', href=True): if 'object-summary.json' in a: link = a['href'].replace('./', '') summary_url = url + link summary_text = requests.get(summary_url, verify=False, auth=('decam','tamudecam')).text summary_data = json.loads(summary_text) j += 1 #print('Accessing {:3d} {}'.format(j, summary_url)) # Modified by Matt if decam_transients is None: decam_transients = pd.DataFrame(summary_data, index=[0]) else: decam_transients = pd.concat([decam_transients, pd.DataFrame(summary_data, index=[0])]) # Cache the data for future access. 
print('Saving output to {}'.format(outfile)) decam_transients.to_csv(outfile, index=False) return decam_transients # Function to read in fits table info, RA, DEC, MJD and targetid if so desired # Uses control parameter tile to determine if opening tile exposure file or not since headers are different import logging def read_fits_info(filepath: str, transient_candidate = True): ''' if transient_candidate: hdu_num = 1 else: hdu_num = 5 ''' # Disabling INFO logging temporarily to suppress INFO level output/print from read_spectra logging.disable(logging.INFO) try: spec_info = read_spectra(filepath).fibermap except: filename = filepath.split("/")[-1] print("Could not open or use:", filename) #print("In path:", filepath) #print("Trying the next file...") return np.array([]), np.array([]), 0, 0 headers = ['TARGETID', 'TARGET_RA', 'TARGET_DEC', 'LAST_MJD'] targ_info = {} for head in headers: try: targ_info[head] = spec_info[head].data except: if not head == 'LAST_MJD': print("Failed to read in", head, "data. Continuing...") targ_info[head] = False # targ_id = spec_info['TARGETID'].data # targ_ra = spec_info['TARGET_RA'].data # Now it's a numpy array # targ_dec = spec_info['TARGET_DEC'].data # targ_mjd = spec_info['LAST_MJD'] #.data if np.any(targ_info['LAST_MJD']): targ_mjd = Time(targ_info['LAST_MJD'][0], format = 'mjd') elif transient_candidate: targ_mjd = filepath.split("/")[-1].split("_")[-2] #to grab the date targ_mjd = Time(targ_mjd, format = 'mjd') #.mjd else: print("Unable to determine observation mjd for", filename) print("This target will not be considered.") return np.array([]), np.array([]), 0, 0 ''' with fits.open(filepath) as hdu1: data_table = Table(hdu1[hdu_num].data) #columns targ_id = data_table['TARGETID'] targ_ra = data_table['TARGET_RA'].data # Now it's a numpy array targ_dec = data_table['TARGET_DEC'].data #targ_mjd = data_table['MJD'][0] some have different versions of this so this is a *bad* idea... at least now I know the try except works! if tile: targ_mjd = hdu1[hdu_num].header['MJD-OBS'] ''' # if tile and not np.all(targ_mjd): # print("Unable to grab mjd from spectra, taking it from the filename...") # targ_mjd = filepath.split("/")[-1].split("_")[-2] #to grab the date # #targ_mjd = targ_mjd[:4]+"-"+targ_mjd[4:6]+"-"+targ_mjd[6:] # Adding dashes for Time # targ_mjd = Time(targ_mjd, format = 'mjd') #.mjd # Re-enabling logging for future calls if necessary logging.disable(logging.NOTSET) return targ_info["TARGET_RA"], targ_info["TARGET_DEC"], targ_mjd, targ_info["TARGETID"] #targ_ra, targ_dec, targ_mjd, targ_id ``` ## Matching function More or less the prototype to the later rendition used for DECAM. Will not be around in later versions of this notebook as I will be able to repurpose the DECAM code to do both. Planned obsolescence? It may not be even worth it at this point... ah well! ``` # Prototype for the later, heftier matching function # Will be deprecated, please reference commentary in inner_matching later for operation notes def matching(path_in: str, max_sep: float, tile = False, date_dict = {}): max_sep *= u.arcsec #max_sep = Angle(max_sep*u.arcsec) #if not target_ra_dec_date: # target_ras, target_decs, obs_mjds = read_fits_ra_dec(path_in, tile) #else: # target_ras, target_decs, obs_mjds = target_ra_dec_date #Look back 60 days from the DESI observations days_back = 60 if not date_dict: print("No RA's/DEC's fed in. 
Quitting.") return np.array([]), np.array([]) all_trans_matches = [] all_alerts_matches = [] targetid_matches = [] for obs_mjd, ra_dec in date_dict.items(): # Grab RAs and DECs from input. target_ras = ra_dec[:, 0] target_decs = ra_dec[:, 1] target_ids = np.int64(ra_dec[:, 2]) # Check for NaN's and remove which don't play nice with match_coordinates_sky nan_ra = np.isnan(target_ras) nan_dec = np.isnan(target_decs) if np.any(nan_ra) or np.any(nan_dec): print("NaNs found, removing them from array (not FITS) before match.") #print("Original length (ra, dec): ", len(target_ras), len(target_decs)) nans = np.logical_not(np.logical_and(nan_ra, nan_dec)) target_ras = target_ras[nans] # Logic masking, probably more efficient target_decs = target_decs[nans] #print("Reduced length (ra, dec):", len(target_ras), len(target_decs)) # Some code used to test -- please ignore ****************** # Feed average to access alerts, perhaps that will speed things up/find better results #avg_ra = np.average(target_ras) #avg_dec = np.average(target_decs) # coo_trans_search = SkyCoord(target_ras*u.deg, target_decs*u.deg) # #print(coo_trans_search) # idxs, d2d, _ = match_coordinates_sky(coo_trans_search, coo_trans_search, nthneighbor = 2) # # for conesearch in alerce # max_sep = np.max(d2d).arcsec + 2.1 # to expand a bit further than the furthest neighbor # ra_in = coo_trans_search[0].ra # dec_in = coo_trans_search[0].dec # Some code used to test -- please ignore ****************** #print([obs_mjd - days_back, obs_mjd]) try: alerts = access_alerts(lastmjd_in = [obs_mjd - days_back, obs_mjd], ra_in = target_ras[0], dec_in = target_decs[0], #cone_radius = max_sep, class_names = ['SN'] ) # Modified Julian Day .mjd except: #print("No SN matches ("+str(days_back)+" day range) for", obs_mjd) #break continue # For each fits file, look at one month before the observation from Alerce # Not sure kdtrees matter # tree_name = "kdtree_" + str(obs_mjd - days_back) alerts_ra = alerts['meanra'].to_numpy() #print("Length of alerts: ", len(alerts_ra)) alerts_dec = alerts['meandec'].to_numpy() # Converting to SkyCoord type arrays (really quite handy) coo_trans_search = SkyCoord(target_ras*u.deg, target_decs*u.deg) coo_alerts = SkyCoord(alerts_ra*u.deg, alerts_dec*u.deg) # Some code used to test -- please ignore ****************** #ra_range = list(zip(*[(i, j) for i,j in zip(alerts_ra,alerts_dec) if (np.min(target_ras) < i and i < np.max(target_ras) and np.min(target_decs) < j and j < np.max(target_decs))])) #try: # ra_range = SkyCoord(ra_range[0]*u.deg, ra_range[1]*u.deg) #except: # continue #print(ra_range) #print(coo_trans_search) #idx_alerts, d2d_trans, d3d_trans = match_coordinates_sky(coo_trans_search, ra_range) #for i in coo_trans_search: #print(i.separation(ra_range[3])) #print(idx_alerts) #print(np.min(d2d_trans)) #break # Some code used to test -- please ignore ****************** idx_alerts, d2d_trans, d3d_trans = match_coordinates_sky(coo_trans_search, coo_alerts) # Filtering by maximum separation and closest match sep_constraint = d2d_trans < max_sep trans_matches = coo_trans_search[sep_constraint] alerts_matches = coo_alerts[idx_alerts[sep_constraint]] targetid_matches = target_ids[sep_constraint] #print(d2d_trans < max_sep) minDist[obs_mjd] = np.min(d2d_trans) # Adding everything to lists and outputting if trans_matches.size: all_trans_matches.append(trans_matches) all_alerts_matches.append(alerts_matches) sort_dist = np.sort(d2d_trans) #print("Minimum distance found: ", sort_dist[0]) #print() #break #else: #print("No 
matches found...\n") #break return all_trans_matches, all_alerts_matches, targetid_matches ``` ## Matching to ALERCE Runs a 5 arcsecond match of DESI to Alerce objects. Since everything is handled in functions, this part is quite clean. From back when I was going to use *if __name__ == "__main__":*... those were the days ``` # Transient dir path_to_transient = "/global/cfs/cdirs/desi/science/td/daily-search/desitrip/out" # Grab paths paths_to_fits = all_candidate_filenames(path_to_transient) #print(len(paths_to_fits)) desi_info_dict = {} target_ras, target_decs, obs_mjd, targ_ids = read_fits_info(paths_to_fits[0], transient_candidate = True) desi_info_dict[obs_mjd] = np.column_stack((target_ras, target_decs, targ_ids)) ''' To be used when functions are properly combined. initial_check(ledger_df = None, ledger_type = '') closer_check(matches_dict = {}, ledger_df = None, ledger_type = '', exclusion_list = []) ''' fail_count = 0 # Iterate through every fits file and grab all necessary info and plop it all together for path in paths_to_fits[1:]: target_ras, target_decs, obs_mjd, targ_ids = read_fits_info(path, transient_candidate = True) if not obs_mjd: fail_count += 1 continue #try: if obs_mjd in desi_info_dict.keys(): np.append(desi_info_dict[obs_mjd], np.array([target_ras, target_decs, targ_ids]).T, axis = 0) else: desi_info_dict[obs_mjd] = np.column_stack((target_ras, target_decs, targ_ids)) #desi_info_dict[obs_mjd].extend((target_ras, target_decs, targ_ids)) #except: # continue #desi_info_dict[obs_mjd] = np.column_stack((target_ras, target_decs, targ_ids)) #desi_info_dict[obs_mjd].append((target_ras, target_decs, targ_ids)) #trans_matches, _ = matching(path, 5.0, (all_desi_ras, all_desi_decs, all_obs_mjd)) # if trans_matches.size: # all_trans_matches.append(trans_matches) # all_alerts_matches.append(alerts_matches) #print([i.mjd for i in sorted(desi_info_dict.keys())]) print(len(paths_to_fits)) print(len(desi_info_dict)) #print(fail_count) ``` ``` # I was going to prepare everything by removing duplicate target ids but it's more trouble than it's worth and match_coordinates_sky can handle it # Takes quite a bit of time... not much more I can do to speed things up though since querying Alerce for every individual date is the hang-up. #print(len(paths_to_fits) - ledesi_info_dictfo_dict)) #print(fail_count) #trans_matches, _, target_id_matches = matching("", 2.0, date_dict = temp_dict) trans_matches, _, target_id_matches = matching("", 2.0, date_dict = desi_info_dict) print(trans_matches) print(target_id_matches) print(sorted(minDist.values())[:5]) #for i in minDist.values(): # print(i) ``` ## Matching to DECAM functions Overwrite *read_fits_info* with older version to accommodate *read_spectra* error ``` # Read useful data from fits file, RA, DEC, target ID, and mjd as a leftover from previous use def read_fits_info(filepath: str, transient_candidate = False): if transient_candidate: hdu_num = 1 else: hdu_num = 5 try: with fits.open(filepath) as hdu1: data_table = Table(hdu1[hdu_num].data) #columns targ_ID = data_table['TARGETID'] targ_ra = data_table['TARGET_RA'].data # Now it's a numpy array targ_dec = data_table['TARGET_DEC'].data #targ_mjd = data_table['MJD'][0] some have different versions of this so this is a *bad* idea... at least now I know the try except works! 
# if transient_candidate: # targ_mjd = hdu1[hdu_num].header['MJD-OBS'] # This is a string # else: # targ_mjd = data_table['MJD'].data # targ_mjd = Time(targ_mjd[0], format = 'mjd') except: filename = filepath.split("/")[-1] print("Could not open or use:", filename) #print("In path:", filepath) #print("Trying the next file...") return np.array([]), np.array([]), np.array([]) return targ_ra, targ_dec, targ_ID #targ_mjd, targ_ID # Grabbing the frame fits files def glob_frames(exp_d: str): # This function grabs the names of all input files in the transient directory and does some python string manipulation # to grab the names of the input files with full path and the filenames themselves. try: filenames_read = glob(exp_d + "/cframe-" + color_band + "*.fits") # Only need one of b, r, z # sframes not flux calibrated # May want to use tiles... coadd (will need later, but not now) except: try: filenames_read = glob(exp_d + "/frame-" + color_band + "*.fits") # Only need one of b, r, z except: print("Could not grab/find any fits in the exposure directory:") print(exp_d) filenames_read = [] # Just in case #filenames_out = [] # Just in case raise SystemExit("Exitting.") #else: #filenames_out = [s.split(".")[0] for s in filenames_read] #filenames_out = [s.split("/")[-1] for s in filenames_read] #filenames_out = [s.replace("in", "out") for s in filenames_out] return filenames_read #, filenames_out #path_to_transient = "/global/cfs/cdirs/desi/science/td/daily-search/desitrip/out" #print(all_candidate_filenames(path_to_transient)[1]) ``` ## Match handling routines The two functions below perform data handling/calling for the final match step. The first, **initial_check** grabs all the tile RAs and DECS from the exposures and tiles SQL table, does some filtering, and sends the necessary information to the matching function. Currently designed to handle ALERCE as well but work has to be done to make sure it operates correctly. ``` def initial_check(ledger_df = None, ledger_type = ''): query_date_start = "20210301" #today = Time.now() smushed_YMD = today.iso.split(" ")[0].replace("-","") query_date_end = smushed_YMD # Handy queries for debugging/useful info query2 = "PRAGMA table_info(exposures)" query3 = "PRAGMA table_info(tiles)" # Crossmatch across tiles and exposures to grab obsdate via tileid query_match = "SELECT distinct tilera, tiledec, obsdate, obsmjd, expid, exposures.tileid from exposures INNER JOIN tiles ON exposures.tileid = tiles.tileid where obsdate BETWEEN " + \ query_date_start + " AND " + query_date_end + ";" ''' Some handy code for debugging #cur.execute(query2) #row2 = cur.fetchall() #for i in row2: # print(i[:]) ''' # Querying sql and returning a data type called sqlite3 row, it's kind of like a namedtuple/dictionary conn = sqlite3.connect(db_filename) conn.row_factory = sqlite3.Row # https://docs.python.org/3/library/sqlite3.html#sqlite3.Row cur = conn.cursor() cur.execute(query_match) matches_list = cur.fetchall() cur.close() # I knew there was a way! THANK YOU! 
# https://stackoverflow.com/questions/11276473/append-to-a-dict-of-lists-with-a-dict-comprehension # Grabbing everything by obsdate from matches_list date_dict = {k['obsdate'] : list(filter(lambda x:x['obsdate'] == k['obsdate'], matches_list)) for k in matches_list} alert_matches_dict = {} all_trans_matches = [] all_alerts_matches = [] # Grabbing DECAM ledger if not already fed in if ledger_type.upper() == 'DECAM_TAMU': if ledger_df.empty: ledger_df = access_decam_data('https://datahub.geos.tamu.edu:8000/decam/LCData_Legacy/') # Iterating through the dates and checking each tile observed on each date # It is done in this way to cut down on calls to ALERCE since we go day by day # It's also a convenient way to organize things for date, row in date_dict.items(): date_str = str(date) date_str = date_str[:4]+"-"+date_str[4:6]+"-"+date_str[6:] # Adding dashes for Time obs_mjd = Time(date_str).mjd # This method is *technically* safer than doing a double list comprehension with set albeit slower # The lists are small enough that speed shouldn't matter here unique_tileid = {i['tileid']: (i['tilera'], i['tiledec']) for i in row} exposure_ras, exposure_decs = zip(*unique_tileid.values()) # Grabbing alerce ledger if not done already if ledger_type.upper() == 'ALERCE': if ledger_df.empty: ledger_df = access_alerts(lastmjd = obs_mjd - 28) # Modified Julian Day #.mjd elif ledger_type.upper() == 'DECAM_TAMU': pass else: print("Cannot use alerts broker/ledger provided. Stopping before match.") return {} #Reatin tileid tileid_arr = np.array(list(unique_tileid.keys())) # Where the magic/matching happens trans_matches, alert_matches, trans_ids, alerts_ids, _ = \ inner_matching(target_ids_in = tileid_arr, target_ras_in = exposure_ras, target_decs_in = exposure_decs, obs_mjd_in = obs_mjd, path_in = '', max_sep = 1.8, sep_units = 'deg', ledger_df_in = ledger_df, ledger_type_in = ledger_type) # Add everything into one giant list for both if trans_matches.size: #print(date, "-", len(trans_matches), "matches") all_trans_matches.append(trans_matches) all_alerts_matches.append(alert_matches) else: #print("No matches on", date) continue # Prepping output # Populating the dictionary by date (a common theme) # Each element in the dictionary thus contains the entire sqlite3 row (all info from sql tables with said headers) alert_matches_dict[date] = [] for tup in trans_matches: ra = tup.ra.deg dec = tup.dec.deg match_rows = [i for i in row if (i['tilera'], i['tiledec']) == (ra, dec)] # Just rebuilding for populating, this shouldn't change/exclude anything alert_matches_dict[date].extend(match_rows) return alert_matches_dict ``` ## closer_check **closer_check** is also a handling function but operates differently in that now it is checking individual targets. This *must* be run after **initial_check** because it takes as input the dictionary **initial_check** spits out. It then grabs all the targets from the DESI files and pipes that into the matching function but this time with a much more strict matching radius (in this case 2 arcseconds). It then preps the data for output and writing. ``` def closer_check(matches_dict = {}, ledger_df = None, ledger_type = '', exclusion_list = []): all_exp_matches = {} if not matches_dict: print("No far matches fed in for nearby matching. 
Returning none.") return {} # Again just in case the dataframe isn't fed in if ledger_type.upper() == 'DECAM_TAMU': id_head = 'ObjectID' ra_head = 'RA-OBJECT' dec_head = 'DEC-OBJECT' if ledger_df.empty: ledger_df = access_decam_data('https://datahub.geos.tamu.edu:8000/decam/LCData_Legacy/') count_flag=0 # Iterating through date and all tile information for that date for date, row in matches_dict.items(): print("\n", date) if date in exclusion_list: continue # Declaring some things all_exp_matches[date] = [] alert_exp_matches = [] file_indices = {} all_targ_ras = np.array([]) all_targ_decs = np.array([]) all_targ_ids = np.array([]) all_tileids = np.array([]) all_petals = np.array([]) # Iterating through each initial match tile for every date for i in row: # Grabbing the paths and iterating through them to grab the RA's/DEC's exp_paths = '/'.join((exposure_path, "daily/exposures", str(i['obsdate']), "000"+str(i['expid']))) #print(exp_paths) for path in glob_frames(exp_paths): #print(path) targ_ras, targ_decs, targ_ids = read_fits_info(path, transient_candidate = False) h=fits.open(path) tileid = h[0].header['TILEID'] tileids = np.full(len(targ_ras),tileid).tolist() petal = path.split("/")[-1].split("-")[1][-1] petals = np.full(len(targ_ras),petal).tolist() # This is to retain the row to debug/check the original FITS file # And to pull the info by row direct if you feel so inclined all_len = len(all_targ_ras) new_len = len(targ_ras) if all_len: all_len -= 1 file_indices[path] = (all_len, all_len + new_len) # The start and end index, modulo number else: file_indices[path] = (0, new_len) # The start and end index, modulo number if len(targ_ras) != len(targ_decs): print("Length of all ras vs. all decs do not match.") print("Something went wrong!") print("Continuing but not adding those to match...") continue # All the ras/decs together! all_targ_ras = np.append(all_targ_ras, targ_ras) all_targ_decs = np.append(all_targ_decs, targ_decs) all_targ_ids = np.append(all_targ_ids, targ_ids) all_tileids = np.append(all_tileids, tileids) all_petals = np.append(all_petals, petals) date_mjd = str(date)[:4]+"-"+str(date)[4:6] + "-" + str(date)[6:] # Adding dashes for Time date_mjd = Time(date_mjd).mjd # Grabbing ALERCE just in case # Slow if ledger_type.upper() == 'ALERCE': id_head = 'oid' ra_head = 'meanra' dec_head = 'meandec' if ledger_df.empty: ledger_df = access_alerts(lastmjd_in = obs_mjd - 45) # Modified Julian Day #.mjd # Checking for NaNs, again doesn't play nice with match_coordinates_sky nan_ra = np.isnan(all_targ_ras) nan_dec = np.isnan(all_targ_decs) if np.any(nan_ra) or np.any(nan_dec): print("NaNs found, removing them from array before match.") #print("Original length (ra, dec): ", len(target_ras), len(target_decs)) nans = np.logical_not(np.logical_and(nan_ra, nan_dec)) all_targ_ras = all_targ_ras[nans] # Logic masking, probably more efficient all_targ_decs = all_targ_decs[nans] all_targ_ids = all_targ_ids[nans] all_tileids = all_tileids[nans] all_petals = all_petals[nans] # Where the magic matching happens. This time with separation 2 arcseconds. 
# Will be cleaned up (eventually) alert_exp_matches, alerts_matches, targetid_exp_matches, id_alerts_matches, exp_idx = inner_matching(target_ids_in =all_targ_ids, \ target_ras_in = all_targ_ras, target_decs_in = all_targ_decs, obs_mjd_in = date_mjd, path_in = '', max_sep = 2, sep_units = 'arcsec', ledger_df_in = ledger_df, ledger_type_in = ledger_type) date_arr=np.full(alerts_matches.shape[0],date) #print(date_arr.shape,targetid_exp_matches.shape,alert_exp_matches.shape, id_alerts_matches.shape,alerts_matches.shape ) info_arr_date=np.column_stack((date_arr,all_tileids[exp_idx],all_petals[exp_idx], targetid_exp_matches,alert_exp_matches.ra.deg,alert_exp_matches.dec.deg, \ id_alerts_matches,alerts_matches.ra.deg,alerts_matches.dec.deg )) all_exp_matches[date].append(info_arr_date) if count_flag==0: all_exp_matches_arr=info_arr_date count_flag=1 else: #print(all_exp_matches_arr,info_arr_date) all_exp_matches_arr=np.concatenate((all_exp_matches_arr,info_arr_date)) # Does not easily output to a csv since we have multiple results for each date # so uh... custom file output for me return all_exp_matches_arr ``` ## inner_matching #### aka the bread & butter **inner_matching** is what ultimately does the final match and calls **match_coordinates_sky** with everything fed in. So really it doesn't do much other than take in all the goodies and make everyone happy. It may still be difficult to co-opt for alerce matching but that may be a project for another time. ``` def inner_matching(target_ids_in = np.array([]), target_ras_in = np.array([]), target_decs_in = np.array([]), obs_mjd_in = '', path_in = '', max_sep = 2, sep_units = 'arcsec', ledger_df_in = None, ledger_type_in = ''): # to be combined with the other matching thing in due time # Figuring out the units if sep_units == 'arcsec': max_sep *= u.arcsec elif sep_units == 'arcmin': max_sep *= u.arcmin elif sep_units == 'deg': max_sep *= u.deg else: print("Separation unit specified is invalid for matching. Defaulting to arcsecond.") max_sep *= u.arcsec if not np.array(target_ras_in).size: return np.array([]), np.array([]) # Checking for NaNs, again doesn't play nice with match_coordinates_sky nan_ra = np.isnan(target_ras_in) nan_dec = np.isnan(target_decs_in) if np.any(nan_ra) or np.any(nan_dec): print("NaNs found, removing them from array before match.") #print("Original length (ra, dec): ", len(target_ras), len(target_decs)) nans = np.logical_not(np.logical_and(nan_ra, nan_dec)) target_ras_in = target_ras_in[nans] # Logic masking, probably more efficient target_decs_in = target_decs_in[nans] target_ids_in = target_ids_in[nans] #print("Reduced length (ra, dec):", len(target_ras), len(target_decs)) # For quick matching if said kdtree actually does anything # Supposed to speed things up on subsequent runs *shrugs* tree_name = "_".join(("kdtree", ledger_type_in, str(obs_mjd_in))) # Selecting header string to use with the different alert brokers/ledgers if ledger_type_in.upper() == 'DECAM_TAMU': id_head = 'ObjectID' ra_head = 'RA-OBJECT' dec_head = 'DEC-OBJECT' elif ledger_type_in.upper() == 'ALERCE': id_head = 'oid' #Check this is how id is called! ra_head = 'meanra' dec_head = 'meandec' else: print("No ledger type specified. 
Quitting.") # lofty goals # Will try to figure it out assuming it's a pandas dataframe.") #print("Returning empty-handed for now until that is complete - Matthew P.") return np.array([]), np.array([]) # Convert df RA/DEC to numpy arrays alerts_id = ledger_df_in[id_head].to_numpy() alerts_ra = ledger_df_in[ra_head].to_numpy() alerts_dec = ledger_df_in[dec_head].to_numpy() # Convert everything to SkyCoord coo_trans_search = SkyCoord(target_ras_in*u.deg, target_decs_in*u.deg) coo_alerts = SkyCoord(alerts_ra*u.deg, alerts_dec*u.deg) # Do the matching! idx_alerts, d2d_trans, d3d_trans = match_coordinates_sky(coo_trans_search, coo_alerts, storekdtree = tree_name) # store tree to speed up subsequent results # Filter out the good stuff sep_constraint = d2d_trans < max_sep trans_matches = coo_trans_search[sep_constraint] trans_matches_ids = target_ids_in[sep_constraint] alerts_matches = coo_alerts[idx_alerts[sep_constraint]] alerts_matches_ids = alerts_id[idx_alerts[sep_constraint]] if trans_matches.size: print(len(trans_matches), "matches with separation -", max_sep) #sort_dist = np.sort(d2d_trans) #print("Minimum distance found: ", sort_dist[0]) return trans_matches, alerts_matches, trans_matches_ids, alerts_matches_ids, sep_constraint ``` ## Grab DECAM ledger as pandas dataframe ``` decam_transients = access_decam_data('https://datahub.geos.tamu.edu:8000/decam/LCData_Legacy/', overwrite = True) # If True, grabs a fresh batch decam_transients_agn = access_decam_data('https://datahub.geos.tamu.edu:8000/decam/LCData_Legacy_AGN/', overwrite = True) # If True, grabs a fresh batch decam_transients ``` ## Run initial check (on tiles) and closer check (on targets) ``` init_matches_by_date = initial_check(ledger_df = decam_transients, ledger_type = 'DECAM_TAMU') close_matches = closer_check(init_matches_by_date, ledger_df = decam_transients, ledger_type = 'DECAM_TAMU', exclusion_list = []) np.save('matches_DECam',close_matches, allow_pickle=True) init_matches_agn_by_date = initial_check(ledger_df = decam_transients_agn, ledger_type = 'DECAM_TAMU') close_matches_agn = closer_check(init_matches_agn_by_date, ledger_df = decam_transients_agn, ledger_type = 'DECAM_TAMU', exclusion_list = []) np.save('matches_DECam_agn',close_matches_agn, allow_pickle=True) np.save('matches_DECam_agn',close_matches_agn, allow_pickle=True) ``` ## A quick plot to see the distribution of target matches ``` plt.scatter(close_matches[:,4], close_matches[:,5],label='SN') plt.scatter(close_matches_agn[:,4], close_matches_agn[:,5],label='AGN') plt.legend() ``` ## End notes: Double matches are to be expected, could be worthwhile to compare the spectra of both
true
code
0.427337
null
null
null
null
# Contrasts Overview ``` from __future__ import print_function import numpy as np import statsmodels.api as sm ``` This document is based heavily on this excellent resource from UCLA http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm A categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means. That is, each test statistic for these variables amounts to testing whether the mean for that level is statistically significantly different from the mean of the base category. This dummy coding is called Treatment coding in R parlance, and we will follow this convention. There are, however, different coding methods that amount to different sets of linear hypotheses. In fact, the dummy coding is not technically a contrast coding. This is because the dummy variables add to one and are not functionally independent of the model's intercept. On the other hand, a set of *contrasts* for a categorical variable with `k` levels is a set of `k-1` functionally independent linear combinations of the factor level means that are also independent of the sum of the dummy variables. The dummy coding isn't wrong *per se*. It captures all of the coefficients, but it complicates matters when the model assumes independence of the coefficients such as in ANOVA. Linear regression models do not assume independence of the coefficients and thus dummy coding is often the only coding that is taught in this context. To have a look at the contrast matrices in Patsy, we will use data from UCLA ATS. First let's load the data. #### Example Data ``` import pandas as pd url = 'https://stats.idre.ucla.edu/stat/data/hsb2.csv' hsb2 = pd.read_table(url, delimiter=",") hsb2.head(10) ``` It will be instructive to look at the mean of the dependent variable, write, for each level of race ((1 = Hispanic, 2 = Asian, 3 = African American and 4 = Caucasian)). ``` hsb2.groupby('race')['write'].mean() ``` #### Treatment (Dummy) Coding Dummy coding is likely the most well known coding scheme. It compares each level of the categorical variable to a base reference level. The base reference level is the value of the intercept. It is the default contrast in Patsy for unordered categorical factors. The Treatment contrast matrix for race would be ``` from patsy.contrasts import Treatment levels = [1,2,3,4] contrast = Treatment(reference=0).code_without_intercept(levels) print(contrast.matrix) ``` Here we used `reference=0`, which implies that the first level, Hispanic, is the reference category against which the other level effects are measured. As mentioned above, the columns do not sum to zero and are thus not independent of the intercept. To be explicit, let's look at how this would encode the `race` variable. ``` hsb2.race.head(10) print(contrast.matrix[hsb2.race-1, :][:20]) sm.categorical(hsb2.race.values) ``` This is a bit of a trick, as the `race` category conveniently maps to zero-based indices. If it does not, this conversion happens under the hood, so this won't work in general but nonetheless is a useful exercise to fix ideas. The below illustrates the output using the three contrasts above ``` from statsmodels.formula.api import ols mod = ols("write ~ C(race, Treatment)", data=hsb2) res = mod.fit() print(res.summary()) ``` We explicitly gave the contrast for race; however, since Treatment is the default, we could have omitted this. 
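Because the intercept under Treatment coding is the mean of the reference level, each remaining coefficient should simply be that level's mean of `write` minus the Hispanic mean. A quick sanity check against the group means computed earlier (not part of the original UCLA write-up):

```
# The entry for race == 1 is zero by construction; the other entries should
# match the C(race, Treatment)[T.k] coefficients in the summary above.
means = hsb2.groupby('race')['write'].mean()
print(means - means[1])
print(res.params)
```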
### Simple Coding Like Treatment Coding, Simple Coding compares each level to a fixed reference level. However, with simple coding, the intercept is the grand mean of all the levels of the factors. Patsy doesn't have the Simple contrast included, but you can easily define your own contrasts. To do so, write a class that contains a code_with_intercept and a code_without_intercept method that returns a patsy.contrast.ContrastMatrix instance ``` from patsy.contrasts import ContrastMatrix def _name_levels(prefix, levels): return ["[%s%s]" % (prefix, level) for level in levels] class Simple(object): def _simple_contrast(self, levels): nlevels = len(levels) contr = -1./nlevels * np.ones((nlevels, nlevels-1)) contr[1:][np.diag_indices(nlevels-1)] = (nlevels-1.)/nlevels return contr def code_with_intercept(self, levels): contrast = np.column_stack((np.ones(len(levels)), self._simple_contrast(levels))) return ContrastMatrix(contrast, _name_levels("Simp.", levels)) def code_without_intercept(self, levels): contrast = self._simple_contrast(levels) return ContrastMatrix(contrast, _name_levels("Simp.", levels[:-1])) hsb2.groupby('race')['write'].mean().mean() contrast = Simple().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(race, Simple)", data=hsb2) res = mod.fit() print(res.summary()) ``` ### Sum (Deviation) Coding Sum coding compares the mean of the dependent variable for a given level to the overall mean of the dependent variable over all the levels. That is, it uses contrasts between each of the first k-1 levels and level k In this example, level 1 is compared to all the others, level 2 to all the others, and level 3 to all the others. ``` from patsy.contrasts import Sum contrast = Sum().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(race, Sum)", data=hsb2) res = mod.fit() print(res.summary()) ``` This corresponds to a parameterization that forces all the coefficients to sum to zero. Notice that the intercept here is the grand mean where the grand mean is the mean of means of the dependent variable by each level. ``` hsb2.groupby('race')['write'].mean().mean() ``` ### Backward Difference Coding In backward difference coding, the mean of the dependent variable for a level is compared with the mean of the dependent variable for the prior level. This type of coding may be useful for a nominal or an ordinal variable. ``` from patsy.contrasts import Diff contrast = Diff().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(race, Diff)", data=hsb2) res = mod.fit() print(res.summary()) ``` For example, here the coefficient on level 1 is the mean of `write` at level 2 compared with the mean at level 1. Ie., ``` res.params["C(race, Diff)[D.1]"] hsb2.groupby('race').mean()["write"][2] - \ hsb2.groupby('race').mean()["write"][1] ``` ### Helmert Coding Our version of Helmert coding is sometimes referred to as Reverse Helmert Coding. The mean of the dependent variable for a level is compared to the mean of the dependent variable over all previous levels. Hence, the name 'reverse' being sometimes applied to differentiate from forward Helmert coding. 
This comparison does not make much sense for a nominal variable such as race, but we would use the Helmert contrast like so: ``` from patsy.contrasts import Helmert contrast = Helmert().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(race, Helmert)", data=hsb2) res = mod.fit() print(res.summary()) ``` To illustrate, the comparison on level 4 is the mean of the dependent variable at the previous three levels taken from the mean at level 4 ``` grouped = hsb2.groupby('race') grouped.mean()["write"][4] - grouped.mean()["write"][:3].mean() ``` As you can see, these are only equal up to a constant. Other versions of the Helmert contrast give the actual difference in means. Regardless, the hypothesis tests are the same. ``` k = 4 1./k * (grouped.mean()["write"][k] - grouped.mean()["write"][:k-1].mean()) k = 3 1./k * (grouped.mean()["write"][k] - grouped.mean()["write"][:k-1].mean()) ``` ### Orthogonal Polynomial Coding The coefficients taken on by polynomial coding for `k=4` levels are the linear, quadratic, and cubic trends in the categorical variable. The categorical variable here is assumed to be represented by an underlying, equally spaced numeric variable. Therefore, this type of encoding is used only for ordered categorical variables with equal spacing. In general, the polynomial contrast produces polynomials of order `k-1`. Since `race` is not an ordered factor variable let's use `read` as an example. First we need to create an ordered categorical from `read`. ``` hsb2['readcat'] = np.asarray(pd.cut(hsb2.read, bins=3)) hsb2.groupby('readcat').mean()['write'] from patsy.contrasts import Poly levels = hsb2.readcat.unique().tolist() contrast = Poly().code_without_intercept(levels) print(contrast.matrix) mod = ols("write ~ C(readcat, Poly)", data=hsb2) res = mod.fit() print(res.summary()) ``` As you can see, readcat has a significant linear effect on the dependent variable `write` but not a significant quadratic or cubic effect.
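One property worth verifying (not shown in the original notes) is that the polynomial contrast columns are mutually orthogonal; if they are also normalized, as in R's `contr.poly`, the cross-product below comes out as the identity matrix up to floating-point error:

```
# Cross-product of the polynomial contrast matrix from the last cell.
print(np.round(np.dot(contrast.matrix.T, contrast.matrix), 10))
```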
true
code
0.561215
null
null
null
null
<div align="right"><i>COM418 - Computers and Music</i></div> <div align="right"><a href="https://people.epfl.ch/paolo.prandoni">Lucie Perrotta</a>, <a href="https://www.epfl.ch/labs/lcav/">LCAV, EPFL</a></div> <p style="font-size: 30pt; font-weight: bold; color: #B51F1F;">Channel Vocoder</p> ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from IPython.display import Audio from IPython.display import IFrame from scipy import signal import import_ipynb from Helpers import * figsize=(10,5) import matplotlib matplotlib.rcParams.update({'font.size': 16}); fs=44100 ``` In this notebook, we will implement and test an easy **channel vocoder**. A channel vocoder is a musical device that allows to sing while playing notes on a keyboard at the same time. The vocoder blends the voice (called the modulator) with the played notes on the keyboard (called the carrier) so that the resulting voice sings the note played on the keyboard. The resulting voice has a robotic, artificial sound that is rather popular in electronic music, with notable uses by bands such as Daft Punk, or Kraftwerk. <img src="https://www.bhphotovideo.com/images/images2000x2000/waldorf_stvc_string_synthesizer_1382081.jpg" alt="Drawing" style="width: 35%;"/> The implementation of a Channel vocoder is in fact quite simple. It takes 2 inputs, the carrier and the modulator signals, that must be of the same length. It divides each signal into frequency bands called **channels** (hence the name) using many parallel bandpass filters. The width of each channel can be equal, or logarithmically sized to match the human ear perception of frequency. For each channel, the envelope of the modulator signal is then computed, for instance using a rectifier and a moving average. It is simply multiplied to the carrier signal for each channel, before all channels are added back together. <img src="https://i.imgur.com/aIePutp.png" alt="Drawing" style="width: 65%;"/> To improve the intelligibility of the speech, it is also possible to add AWGN to each to the carrier of each band, helping to produce non-voiced sounds, such as the sound s, or f. As an example signal to test our vocoder with, we are going to use dry voice samples from the song "Nightcall" by french artist Kavinsky. ![Nightcall](https://upload.wikimedia.org/wikipedia/en/5/5b/Kavinsky_Nightcall_2010.png) First, let's listen to the original song: ``` IFrame(src="https://www.youtube.com/embed/46qo_V1zcOM?start=30", width="560", height="315", frameborder="0", allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture") ``` ## 1. The modulator and the carrier signals We are now going to recreate the lead vocoder using 2 signals: we need a modulator signal, a voice pronouning the lyrics, and a carrier signal, a synthesizer, containing the notes for the pitch. ### 1.1. The modulator Let's first import the modulator signal. It is simply the lyrics spoken at the right rhythm. No need to sing or pay attention to the pitch, only the prononciation and the rhythm of the text are going to matter. Note that the voice sample is available for free on **Splice**, an online resource for audio production. ``` nightcall_modulator = open_audio('snd/nightcall_modulator.wav') Audio('snd/nightcall_modulator.wav', autoplay=False) ``` ### 1.2. The carrier Second, we import a carrier signal, which is simply a synthesizer playing the chords that are gonna be used for the vocoder. 
Note that the carrier signal does not need to feature silent parts, since the modulator's silences will automatically mute the final vocoded track. The carrier and the modulator simply need to be in sync with each other. ``` nightcall_carrier = open_audio('snd/nightcall_carrier.wav') Audio("snd/nightcall_carrier.wav", autoplay=False) ``` ## 2. The channel vocoder ### 2.1. The channeler Let's now start implementing the channel vocoder. The first tool we need is an efficient filter to decompose both the carrier and the modulator signals into channels (or bands). Let's call this function the **channeler**, since it decomposes the input signals into frequency channels. It takes as input a signal to be filtered, an integer representing the number of bands, and a boolean setting whether we want white noise to be added to each band (used for the carrier). ``` def channeler(x, n_bands, add_noise=False): """ Separate a signal into log-sized frequency channels. x: the input signal n_bands: the number of frequency channels add_noise: whether or not to add white noise to each channel """ band_freqs = np.logspace(2, 14, n_bands+1, base=2) # get all the limits between the bands, in log space x_bands = np.zeros((n_bands, x.size)) # Placeholder for all bands for i in range(n_bands): noise = 0.7*np.random.random(x.size) if add_noise else 0 # Create white noise or not x_bands[i] = butter_pass_filter(x + noise, np.array((band_freqs[i], band_freqs[i+1])), fs, btype="band", order=5).astype(np.float32) # Carrier + uniform noise return x_bands # Example plot plt.figure(figsize=figsize) plt.magnitude_spectrum(nightcall_carrier) plt.title("Carrier signal before channeling") plt.xscale("log") plt.xlim(1e-4) plt.show() carrier_bands = channeler(nightcall_carrier, 8, add_noise=True) plt.figure(figsize=figsize) for i in range(8): plt.magnitude_spectrum(carrier_bands[i], alpha=.7) plt.title("Carrier channels after channeling and noise addition") plt.xscale("log") plt.xlim(1e-4) plt.show() ``` ### 2.2. The envelope computer Next, we can implement a simple envelope computer. Given a signal, this function computes its temporal envelope. ``` def envelope_computer(x): """ Envelope computation of one channel of the modulator x: the input signal """ x = np.abs(x) # Rectify the signal to positive x = moving_average(x, 1000) # Smooth the signal return 3*x # Normalize # Normalize plt.figure(figsize=figsize) plt.plot(np.abs(nightcall_modulator)[:150000] , label="Modulator") plt.plot(envelope_computer(nightcall_modulator)[:150000], label="Modulator envelope") plt.legend(loc="best") plt.title("Modulator signal and its envelope") plt.show() ``` ### 2.3. The channel vocoder (itself) We can now implement the channel vocoder itself! It takes as input both signals presented above, as well as an integer controlling the number of channels (bands) of the vocoder. A larger number of channels results in a finer-grained vocoded sound, but also takes more time to compute. Some artists may voluntarily use a lower number of bands to increase the artificial effect of the vocoder. Try playing with it!
``` def channel_vocoder(modulator, carrier, n_bands=32): """ Channel vocoder modulator: the modulator signal carrier: the carrier signal n_bands: the number of bands of the vocoder (better to be a power of 2) """ # Decompose both modulation and carrier signals into frequency channels modul_bands = channeler(modulator, n_bands, add_noise=False) carrier_bands = channeler(carrier, n_bands, add_noise=True) # Compute envelope of the modulator modul_bands = np.array([envelope_computer(modul_bands[i]) for i in range(n_bands)]) # Multiply carrier and modulator result_bands = np.prod([modul_bands, carrier_bands], axis=0) # Merge back all channels together and normalize result = np.sum(result_bands, axis=0) return normalize(result) # Normalize nightcall_vocoder = channel_vocoder(nightcall_modulator, nightcall_carrier, n_bands=32) Audio(nightcall_vocoder, rate=fs) ``` The vocoded voice is still perfectly intelligible, and it's easy to understand the lyrics. However, the pitch of the voice is now the synthesizer playing chords! One can try to deactivate the AWGN and compare the results. We finally plot the STFT of all 3 signals. One can notice that the vocoded signal has kept the general shape of the voice (modulator) signal, but is using the frequency information from the carrier! ``` # Plot f, t, Zxx = signal.stft(nightcall_modulator[:7*fs], fs, nperseg=1000) plt.figure(figsize=figsize) plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud') plt.title("Original voice (modulator)") plt.ylabel('Frequency [Hz]') plt.xlabel('Time [sec]') plt.show() f, t, Zxx = signal.stft(nightcall_vocoder[:7*fs], fs, nperseg=1000) plt.figure(figsize=figsize) plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud') plt.title("Vocoded voice") plt.ylabel('Frequency [Hz]') plt.xlabel('Time [sec]') plt.show() f, t, Zxx = signal.stft(nightcall_carrier[:7*fs], fs, nperseg=1000) plt.figure(figsize=figsize) plt.pcolormesh(t, f[:100], np.abs(Zxx[:100,:]), cmap='nipy_spectral', shading='gouraud') plt.title("Carrier") plt.ylabel('Frequency [Hz]') plt.xlabel('Time [sec]') plt.show() ``` ## 3. Playing it together with the music Finally, let's try to play it with the background music to see if it sounds like the original! ``` nightcall_instru = open_audio('snd/nightcall_instrumental.wav') nightcall_final = nightcall_vocoder + 0.6*nightcall_instru nightcall_final = normalize(nightcall_final) # Normalize Audio(nightcall_final, rate=fs) ```
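The notebook imports `open_audio`, `butter_pass_filter`, `moving_average` and `normalize` from the course's Helpers notebook, which is not shown here. For readers without access to it, here is a minimal sketch of plausible stand-ins; the real implementations may differ (in particular, `soundfile` is just one possible audio loader, not a stated dependency of the course material):

```
import numpy as np
from scipy import signal
import soundfile as sf   # assumption: any loader returning float samples would do

def open_audio(path):
    """Load an audio file as a mono float32 numpy array."""
    x, _sr = sf.read(path)
    if x.ndim > 1:               # mix stereo files down to mono
        x = x.mean(axis=1)
    return x.astype(np.float32)

def butter_pass_filter(x, cutoff, fs, btype="band", order=5):
    """Butterworth filter; `cutoff` is a scalar or a (low, high) pair in Hz."""
    sos = signal.butter(order, cutoff, btype=btype, fs=fs, output="sos")
    return signal.sosfilt(sos, x)

def moving_average(x, n):
    """Length-n moving average (rectangular smoothing window)."""
    return np.convolve(x, np.ones(n) / n, mode="same")

def normalize(x):
    """Scale the signal to a peak amplitude of 1."""
    return x / np.max(np.abs(x))
```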
true
code
0.756183
null
null
null
null
Authored by: Avani Gupta <br> Roll: 2019121004 **Note: the dataset shape is version dependent, hence the final answer will also depend on the sklearn version installed on the machine** # Exercise: Eigen Face Here, we will look into the ability of PCA to perform dimensionality reduction on a set of faces from the Labeled Faces in the Wild dataset made available through scikit-learn. Our images will be of shape (62, 47). This problem is also famously known as the eigenface problem. Mathematically, we would like to find the principal components (or eigenvectors) of the covariance matrix of the set of face images. These eigenvectors are essentially a set of orthonormal features that capture the variation between face images. When plotted, these eigenvectors are called eigenfaces. #### Imports ``` import numpy as np import matplotlib.pyplot as plt from numpy import pi from sklearn.datasets import fetch_lfw_people import seaborn as sns; sns.set() import sklearn print(sklearn.__version__) ``` #### Setup data ``` faces = fetch_lfw_people(min_faces_per_person=8) X = faces.data y = faces.target print(faces.target_names) print(faces.images.shape) ``` Note: **the number of images is version dependent** <br> I get (4822, 62, 47) in my version of sklearn, which is 0.22.2. <br> Since each image is of shape (62, 47), we unroll it into a single row vector of shape (1, 2914). This means that we have 2914 features defining each image, so the PCA projection space has at most 2914 principal components. Each image location therefore contributes more or less to each principal component. #### Implement Eigen Faces ``` print(faces.images.shape) img_shape = faces.images.shape[1:] print(img_shape) def FindEigen(X_mat): X_mat -= np.mean(X_mat, axis=0, keepdims=True) temp = np.matmul(X_mat.T, X_mat) cov_mat = 1/X_mat.shape[0]* temp eigvals, eigvecs = np.linalg.eig(cov_mat) ind = eigvals.argsort()[::-1] return np.real(eigvals[ind]), np.real(eigvecs[:, ind]) def plotFace(faces, h=10, v=1): fig, axes = plt.subplots(v, h, figsize=(10, 2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(faces[i].reshape(*img_shape), cmap='gray') def plotgraph(eigenvals): plt.plot(range(1, eigenvals.shape[0]+1), np.cumsum(eigenvals / np.sum(eigenvals))) plt.show() def PrincipalComponentsNum(X, eigenvals, threshold=0.95): num = np.argmax(np.cumsum(eigenvals / np.sum(eigenvals)) >= threshold) + 1 print(f"No. of principal components required to preserve {threshold*100} % variance is: {num}.") ``` ### Q1 How many principal components are required such that 95% of the variance in the data is preserved? ``` eigenvals, eigenvecs = FindEigen(X) plotgraph(eigenvals) PrincipalComponentsNum(X, eigenvals) ``` ### Q2 Show the reconstruction of the first 10 face images using only 100 principal components. ``` def reconstructMat(X, eigvecs, num_c): return (np.matmul(X,np.matmul(eigvecs[:, :num_c], eigvecs[:, :num_c].T))) faceNum = 10 print('original faces') plotFace(X[:faceNum, :], faceNum) recFace = reconstructMat(X[:faceNum, :], eigenvecs, 100) print('reconstructed faces using only 100 principal components') plotFace(recFace, faceNum) ``` # Adding noise to images We now add Gaussian noise to the images. Will PCA still be able to effectively perform dimensionality reduction?
``` def plot_noisy_faces(noisy_faces): fig, axes = plt.subplots(2, 10, figsize=(10, 2.5), subplot_kw={'xticks':[], 'yticks':[]}, gridspec_kw=dict(hspace=0.1, wspace=0.1)) for i, ax in enumerate(axes.flat): ax.imshow(noisy_faces[i].reshape(62, 47), cmap='binary_r') ``` Below we plot first twenty noisy input face images. ``` np.random.seed(42) noisy_faces = np.random.normal(X, 15) plot_noisy_faces(noisy_faces) noisy_faces.shape noisy_eigenvals, noisy_eigenvecs = FindEigen(noisy_faces) ``` ### Q3.1 Show the above two results for a noisy face dataset. How many principal components are required such that 95% of the vari- ance in the data is preserved? ``` plotgraph(noisy_eigenvals) PrincipalComponentsNum(noisy_faces, noisy_eigenvals, 0.95) ``` ### Q3.2 Show the reconstruction of the first 10 face images using only 100 principal components. ``` faces = 10 noisy_recons = reconstructMat(noisy_faces[:faces, :], noisy_eigenvecs, 100) print('reconstructed faces for nosiy images only 100 principal components') plotFace(noisy_recons, faces) ```
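As a sanity check that is not part of the original exercise, the hand-rolled eigendecomposition can be compared against scikit-learn's built-in PCA; the cumulative explained-variance curve and the component count at the 95% threshold should agree closely.

```
from sklearn.decomposition import PCA
import numpy as np

pca = PCA().fit(X)  # the same unrolled face matrix used above
cumulative_variance = np.cumsum(pca.explained_variance_ratio_)
n_components_95 = np.argmax(cumulative_variance >= 0.95) + 1
print(f"sklearn PCA: {n_components_95} components preserve 95% of the variance")
```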
<div align="center"> <h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png">&nbsp;<a href="https://madewithml.com/">Made With ML</a></h1> Applied ML · MLOps · Production <br> Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML. <br> </div> <br> <div align="center"> <a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>&nbsp; <a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>&nbsp; <a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>&nbsp; <a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a> <br> 🔥&nbsp; Among the <a href="https://github.com/topics/mlops" target="_blank">top MLOps</a> repositories on GitHub </div> <br> <hr> # Optimize (GPU) Use this notebooks to run hyperparameter optimization on Google Colab and utilize it's free GPUs. ## Clone repository ``` # Load repository !git clone https://github.com/GokuMohandas/MLOps.git mlops # Files % cd mlops !ls ``` ## Setup ``` %%bash !pip install --upgrade pip !python -m pip install -e ".[dev]" --no-cache-dir ``` # Download data We're going to download data directly from GitHub since our blob stores are local. But you can easily load the correct data versions from your cloud blob store using the *.json.dvc pointer files in the [data directory](https://github.com/GokuMohandas/MLOps/tree/main/data). ``` from app import cli # Download data cli.download_data() # Check if data downloaded !ls data ``` # Compute features ``` # Download data cli.compute_features() # Computed features !ls data ``` ## Optimize Now we're going to perform hyperparameter optimization using the objective and parameter distributions defined in the [main script](https://github.com/GokuMohandas/MLOps/blob/main/tagifai/main.py). The best parameters will be written to [config/params.json](https://raw.githubusercontent.com/GokuMohandas/MLOps/main/config/params.json) which will be used to train the best model below. ``` # Optimize cli.optimize(num_trials=100) ``` # Train Once we're identified the best hyperparameters, we're ready to train our best model and save the corresponding artifacts (label encoder, tokenizer, etc.) ``` # Train best model cli.train_model() ``` # Change metadata In order to transfer our trained model and it's artifacts to our local model registry, we should change the metadata to match. ``` from pathlib import Path from config import config import yaml def change_artifact_metadata(fp): with open(fp) as f: metadata = yaml.load(f) for key in ["artifact_location", "artifact_uri"]: if key in metadata: metadata[key] = metadata[key].replace( str(config.MODEL_REGISTRY), model_registry) with open(fp, "w") as f: yaml.dump(metadata, f) # Change this as necessary model_registry = "/Users/goku/Documents/madewithml/applied-ml/stores/model" # Change metadata in all meta.yaml files experiment_dir = Path(config.MODEL_REGISTRY, "1") for fp in list(Path(experiment_dir).glob("**/meta.yaml")): change_artifact_metadata(fp=fp) ``` ## Download Download and transfer the trained model's files to your local model registry. If you existing runs, just transfer that run's directory. 
```
from google.colab import files

# Download
!zip -r model.zip model
!zip -r run.zip stores/model/1
files.download("run.zip")
```
# Expressions and Arithmetic **CS1302 Introduction to Computer Programming** ___ ## Operators The followings are common operators you can use to form an expression in Python: | Operator | Operation | Example | | --------: | :------------- | :-----: | | unary `-` | Negation | `-y` | | `+` | Addition | `x + y` | | `-` | Subtraction | `x - y` | | `*` | Multiplication | `x*y` | | `/` | Division | `x/y` | - `x` and `y` in the examples are called the *left and right operands* respectively. - The first operator is a *unary operator*, which operates on just one operand. (`+` can also be used as a unary operator, but that is not useful.) - All other operators are *binary operators*, which operate on two operands. Python also supports some more operators such as the followings: | Operator | Operation | Example | | -------: | :--------------- | :-----: | | `//` | Integer division | `x//y` | | `%` | Modulo | `x%y` | | `**` | Exponentiation | `x**y` | ``` # ipywidgets to demonstrate the operations of binary operators from ipywidgets import interact binary_operators = {'+':' + ','-':' - ','*':'*','/':'/','//':'//','%':'%','**':'**'} @interact(operand1=r'10', operator=binary_operators, operand2=r'3') def binary_operation(operand1,operator,operand2): expression = f"{operand1}{operator}{operand2}" value = eval(expression) print(f"""{'Expression:':>11} {expression}\n{'Value:':>11} {value}\n{'Type:':>11} {type(value)}""") ``` **Exercise** What is the difference between `/` and `//`? - `/` is the usual division, and so `10/3` returns the floating-point number $3.\dot{3}$. - `//` is integer division, and so `10//3` gives the integer quotient 3. **What does the modulo operator `%` do?** You can think of it as computing the remainder, but the [truth](https://docs.python.org/3/reference/expressions.html#binary-arithmetic-operations) is more complicated than required for the course. **Exercise** What does `'abc' * 3` mean? What about `10 * 'a'`? - The first expression means concatenating `'abc'` three times. - The second means concatenating `'a'` ten times. **Exercise** How can you change the default operands (`10` and `3`) for different operators so that the overall expression has type `float`. Do you need to change all the operands to `float`? - `/` already returns a `float`. - For all other operators, changing at least one of the operands to `float` will return a `float`. ## Operator Precedence and Associativity An expression can consist of a sequence of operations performed in a row such as `x + y*z`. **How to determine which operation should be performed first?** Like arithmetics, the order of operations is decided based on the following rules applied sequentially: 1. *grouping* by parentheses: inner grouping first 1. operator *precedence/priority*: higher precedence first 1. operator *associativity*: - left associativity: left operand first - right associativity: right operand first **What are the operator precedence and associativity?** The following table gives a concise summary: | Operators | Associativity | | :--------------- | :-----------: | | `**` | right | | `-` (unary) | right | | `*`,`/`,`//`,`%` | left | | `+`,`-` | left | **Exercise** Play with the following widget to understand the precedence and associativity of different operators. In particular, explain whether the expression `-10 ** 2*3` gives $(-10)^{2\times 3}= 10^6 = 1000000$. 
``` from ipywidgets import fixed @interact(operator1={'None':'','unary -':'-'}, operand1=fixed(r'10'), operator2=binary_operators, operand2=fixed(r'2'), operator3=binary_operators, operand3=fixed(r'3') ) def three_operators(operator1,operand1,operator2,operand2,operator3,operand3): expression = f"{operator1}{operand1}{operator2}{operand2}{operator3}{operand3}" value = eval(expression) print(f"""{'Expression:':>11} {expression}\n{'Value:':>11} {value}\n{'Type:':>11} {type(value)}""") ``` The expression evaluates to $(-(10^2))\times 3=-300$ instead because the exponentiation operator `**` has higher precedence than both the multiplication `*` and the negation operators `-`. **Exercise** To avoid confusion in the order of operations, we should follow the [style guide](https://www.python.org/dev/peps/pep-0008/#other-recommendations) when writing expression. What is the proper way to write `-10 ** 2*3`? ``` print(-10**2 * 3) # can use use code-prettify extension to fix incorrect styles print((-10)**2 * 3) ``` ## Augmented Assignment Operators - For convenience, Python defines the [augmented assignment operators](https://docs.python.org/3/reference/simple_stmts.html#grammar-token-augmented-assignment-stmt) such as `+=`, where - `x += 1` means `x = x + 1`. The following widgets demonstrate other augmented assignment operators. ``` from ipywidgets import interact, fixed @interact(initial_value=fixed(r'10'), operator=['+=','-=','*=','/=','//=','%=','**='], operand=fixed(r'2')) def binary_operation(initial_value,operator,operand): assignment = f"x = {initial_value}\nx {operator} {operand}" _locals = {} exec(assignment,None,_locals) print(f"""Assignments:\n{assignment:>10}\nx: {_locals['x']} ({type(_locals['x'])})""") ``` **Exercise** Can we create an expression using (augmented) assignment operators? Try running the code to see the effect. ``` 3*(x = 15) ``` Assignment operators are used in assignment statements, which are not expressions because they cannot be evaluated.
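One short aside that the notebook does not cover: since Python 3.8 there is an assignment *expression*, the walrus operator `:=`, which can legally appear inside a larger expression. The ordinary `=` used above remains a statement and still raises a `SyntaxError` in that position.

```
# Requires Python 3.8 or later; 3*(x = 15) with a plain `=` is still a SyntaxError.
print(3*(x := 15))  # binds 15 to x and evaluates to 45
print(x)            # 15
```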
``` import os import numpy as np np.random.seed(0) import pandas as pd import matplotlib.pyplot as plt from sklearn import set_config set_config(display="diagram") DATA_PATH = os.path.abspath( r"C:\Users\jan\Dropbox\_Coding\UdemyML\Chapter13_CaseStudies\CaseStudyIncome\adult.xlsx" ) ``` ### Dataset ``` df = pd.read_excel(DATA_PATH) idx = np.where(df["native-country"] == "Holand-Netherlands")[0] data = df.to_numpy() x = data[:, :-1] x = np.delete(x, idx, axis=0) y = data[:, -1] y = np.delete(y, idx, axis=0) categorical_features = [1, 2, 3, 4, 5, 6, 7, 9] numerical_features = [0, 8] print(f"x shape: {x.shape}") print(f"y shape: {y.shape}") ``` ### y-Data ``` def one_hot(y): return np.array([0 if val == "<=50K" else 1 for val in y], dtype=np.int32) y = one_hot(y) ``` ### Helper ``` def print_grid_cv_results(grid_result): print( f"Best model score: {grid_result.best_score_} " f"Best model params: {grid_result.best_params_} " ) means = grid_result.cv_results_["mean_test_score"] stds = grid_result.cv_results_["std_test_score"] params = grid_result.cv_results_["params"] for mean, std, param in zip(means, stds, params): mean = round(mean, 4) std = round(std, 4) print(f"{mean} (+/- {2 * std}) with: {param}") ``` ### Sklearn Imports ``` from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OrdinalEncoder from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3) ``` ### Classifier and Params ``` params = { "classifier__n_estimators": [50, 100, 200], "classifier__max_depth": [None, 100, 200] } clf = RandomForestClassifier() ``` ### Ordinal Features ``` numeric_transformer = Pipeline( steps=[ ('scaler', StandardScaler()) ] ) categorical_transformer = Pipeline( steps=[ ('ordinal', OrdinalEncoder()) ] ) preprocessor_odinal = ColumnTransformer( transformers=[ ('numeric', numeric_transformer, numerical_features), ('categorical', categorical_transformer, categorical_features) ] ) preprocessor_odinal preprocessor_odinal.fit(x_train) x_train_ordinal = preprocessor_odinal.transform(x_train) x_test_ordinal = preprocessor_odinal.transform(x_test) print(f"Shape of odinal data: {x_train_ordinal.shape}") print(f"Shape of odinal data: {x_test_ordinal.shape}") pipe_ordinal = Pipeline( steps=[ ('preprocessor_odinal', preprocessor_odinal), ('classifier', clf) ] ) pipe_ordinal grid_ordinal = GridSearchCV(pipe_ordinal, params, cv=3) grid_results_ordinal = grid_ordinal.fit(x_train, y_train) print_grid_cv_results(grid_results_ordinal) ``` ### OneHot Features ``` numeric_transformer = Pipeline( steps=[ ('scaler', StandardScaler()) ] ) categorical_transformer = Pipeline( steps=[ ('onehot', OneHotEncoder(handle_unknown="ignore", sparse=False)) ] ) preprocessor_onehot = ColumnTransformer( transformers=[ ('numeric', numeric_transformer, numerical_features), ('categorical', categorical_transformer, categorical_features) ] ) preprocessor_onehot preprocessor_onehot.fit(x_train) x_train_onehot = preprocessor_onehot.transform(x_train) x_test_onehot = preprocessor_onehot.transform(x_test) print(f"Shape of onehot data: {x_train_onehot.shape}") print(f"Shape of onehot data: {x_test_onehot.shape}") pipe_onehot = Pipeline( steps=[ ('preprocessor_onehot', preprocessor_odinal), ('classifier', clf) ] ) pipe_onehot 
grid_onehot = GridSearchCV(pipe_onehot, params, cv=3) grid_results_onehot = grid_onehot.fit(x_train, y_train) print_grid_cv_results(grid_results_onehot) ``` ### TensorFlow Model ``` from tensorflow.keras.layers import Activation from tensorflow.keras.layers import Dense from tensorflow.keras.models import Sequential from tensorflow.keras.optimizers import SGD y_train = y_train.reshape(-1, 1) y_test = y_test.reshape(-1, 1) def build_model(input_dim, output_dim): model = Sequential() model.add(Dense(units=128, input_dim=input_dim)) model.add(Activation("relu")) model.add(Dense(units=64)) model.add(Activation("relu")) model.add(Dense(units=output_dim)) model.add(Activation("sigmoid")) return model ``` ### Neural Network with Ordinal Features ``` model = build_model( input_dim=x_test_ordinal.shape[1], output_dim=y_train.shape[1] ) model.compile( loss="binary_crossentropy", optimizer=SGD(learning_rate=0.001), metrics=["binary_accuracy"] ) history_ordinal = model.fit( x=x_train_ordinal, y=y_train, epochs=20, validation_data=(x_test_ordinal, y_test) ) val_binary_accuracy = history_ordinal.history["val_binary_accuracy"] plt.plot(range(len(val_binary_accuracy)), val_binary_accuracy) plt.show() ``` ### Neural Network with OneHot Features ``` model = build_model( input_dim=x_train_onehot.shape[1], output_dim=y_train.shape[1] ) model.compile( loss="binary_crossentropy", optimizer=SGD(learning_rate=0.001), metrics=["binary_accuracy"] ) history_onehot = model.fit( x=x_train_onehot, y=y_train, epochs=20, validation_data=(x_test_onehot, y_test) ) val_binary_accuracy = history_onehot.history["val_binary_accuracy"] plt.plot(range(len(val_binary_accuracy)), val_binary_accuracy) plt.show() ``` ### Pass in user-data ``` pipe_ordinal.fit(x_train, y_train) score = pipe_ordinal.score(x_test, y_test) print(f"Score: {score}") x_sample = [ 25, "Private", "11th", "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 40, "United-States" ] y_sample = 0 y_pred_sample = pipe_ordinal.predict([x_sample]) print(f"Pred: {y_pred_sample}") ```
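A possible follow-up, not part of the original notebook: once a pipeline such as `pipe_ordinal` has been fitted, it can be persisted with joblib and reloaded later to score raw user samples without refitting. The file name below is an arbitrary choice.

```
import joblib

# Persist the fitted preprocessing + classifier pipeline
joblib.dump(pipe_ordinal, "income_pipeline.joblib")

# Later, or in another process: reload and predict on a raw sample
restored_pipe = joblib.load("income_pipeline.joblib")
print(restored_pipe.predict([x_sample]))
```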
# Prudential Life Insurance Assessment An example of the structured data lessons from Lesson 4 on another dataset. ``` %reload_ext autoreload %autoreload 2 %matplotlib inline import os from pathlib import Path import pandas as pd import numpy as np import torch from torch import nn import torch.nn.functional as F from fastai import structured from fastai.column_data import ColumnarModelData from fastai.dataset import get_cv_idxs from sklearn.metrics import cohen_kappa_score from ml_metrics import quadratic_weighted_kappa from torch.nn.init import kaiming_uniform, kaiming_normal PATH = Path('./data/prudential') PATH.mkdir(exist_ok=True) ``` ## Download dataset ``` !kaggle competitions download -c prudential-life-insurance-assessment --path={PATH} for file in os.listdir(PATH): if not file.endswith('zip'): continue !unzip -q -d {PATH} {PATH}/{file} train_df = pd.read_csv(PATH/'train.csv') train_df.head() ``` Extra feature engineering taken from the forum ``` train_df['Product_Info_2_char'] = train_df.Product_Info_2.str[0] train_df['Product_Info_2_num'] = train_df.Product_Info_2.str[1] train_df['BMI_Age'] = train_df['BMI'] * train_df['Ins_Age'] med_keyword_columns = train_df.columns[train_df.columns.str.startswith('Medical_Keyword_')] train_df['Med_Keywords_Count'] = train_df[med_keyword_columns].sum(axis=1) train_df['num_na'] = train_df.apply(lambda x: sum(x.isnull()), 1) categorical_columns = 'Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41'.split(', ') categorical_columns += ['Product_Info_2_char', 'Product_Info_2_num'] cont_columns = 'Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5, Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32'.split(', ') cont_columns += [c for c in train_df.columns if c.startswith('Medical_Keyword_')] + ['BMI_Age', 'Med_Keywords_Count', 'num_na'] train_df[categorical_columns].head() train_df[cont_columns].head() train_df = train_df[categorical_columns + cont_columns + ['Response']] len(train_df.columns) ``` ### Convert to categorical ``` for col in categorical_columns: train_df[col] = train_df[col].astype('category').cat.as_ordered() train_df['Product_Info_1'].dtype train_df.shape ``` ### Numericalise and process DataFrame ``` df, y, nas, mapper = structured.proc_df(train_df, 'Response', do_scale=True) y = y.astype('float') 
num_targets = len(set(y)) ``` ### Create ColumnData object (instead of ImageClassifierData) ``` cv_idx = get_cv_idxs(len(df)) cv_idx model_data = ColumnarModelData.from_data_frame( PATH, cv_idx, df, y, cat_flds=categorical_columns, is_reg=True) model_data.trn_ds[0][0].shape[0] + model_data.trn_ds[0][1].shape[0] model_data.trn_ds[0][1].shape ``` ### Get embedding sizes The formula Jeremy uses for getting embedding sizes is: cardinality / 2 (maxed out at 50). We reproduce that below: ``` categorical_column_sizes = [ (c, len(train_df[c].cat.categories) + 1) for c in categorical_columns] categorical_column_sizes[:5] embedding_sizes = [(c, min(50, (c+1)//2)) for _, c in categorical_column_sizes] embedding_sizes[:5] def emb_init(x): x = x.weight.data sc = 2/(x.size(1)+1) x.uniform_(-sc,sc) class MixedInputModel(nn.Module): def __init__(self, emb_sizes, num_cont): super().__init__() embedding_layers = [] for size, dim in emb_sizes: embedding_layers.append( nn.Embedding( num_embeddings=size, embedding_dim=dim)) self.embeddings = nn.ModuleList(embedding_layers) for emb in self.embeddings: emb_init(emb) self.embedding_dropout = nn.Dropout(0.04) self.batch_norm_cont = nn.BatchNorm1d(num_cont) num_emb = sum(e.embedding_dim for e in self.embeddings) self.fc1 = nn.Linear( in_features=num_emb + num_cont, out_features=1000) kaiming_normal(self.fc1.weight.data) self.dropout_fc1 = nn.Dropout(p=0.01) self.batch_norm_fc1 = nn.BatchNorm1d(1000) self.fc2 = nn.Linear( in_features=1000, out_features=500) kaiming_normal(self.fc2.weight.data) self.dropout_fc2 = nn.Dropout(p=0.01) self.batch_norm_fc2 = nn.BatchNorm1d(500) self.output_fc = nn.Linear( in_features=500, out_features=1 ) kaiming_normal(self.output_fc.weight.data) self.sigmoid = nn.Sigmoid() def forward(self, categorical_input, continuous_input): # Add categorical embeddings together categorical_embeddings = [e(categorical_input[:,i]) for i, e in enumerate(self.embeddings)] categorical_embeddings = torch.cat(categorical_embeddings, 1) categorical_embeddings_dropout = self.embedding_dropout(categorical_embeddings) # Batch normalise continuos vars continuous_input_batch_norm = self.batch_norm_cont(continuous_input) # Create a single vector x = torch.cat([ categorical_embeddings_dropout, continuous_input_batch_norm ], dim=1) # Fully-connected layer 1 fc1_output = self.fc1(x) fc1_relu_output = F.relu(fc1_output) fc1_dropout_output = self.dropout_fc1(fc1_relu_output) fc1_batch_norm = self.batch_norm_fc1(fc1_dropout_output) # Fully-connected layer 2 fc2_output = self.fc2(fc1_batch_norm) fc2_relu_output = F.relu(fc2_output) fc2_batch_norm = self.batch_norm_fc2(fc2_relu_output) fc2_dropout_output = self.dropout_fc2(fc2_batch_norm) output = self.output_fc(fc2_dropout_output) output = self.sigmoid(output) output = output * 7 output = output + 1 return output num_cont = len(df.columns) - len(categorical_columns) model = MixedInputModel( embedding_sizes, num_cont ) model from fastai.column_data import StructuredLearner def weighted_kappa_metric(probs, y): return quadratic_weighted_kappa(probs[:,0], y[:,0]) learner = StructuredLearner.from_model_data(model, model_data, metrics=[weighted_kappa_metric]) learner.lr_find() learner.sched.plot() learner.fit(0.0001, 3, use_wd_sched=True) learner.fit(0.0001, 5, cycle_len=1, cycle_mult=2, use_wd_sched=True) learner.fit(0.00001, 3, cycle_len=1, cycle_mult=2, use_wd_sched=True) ``` There's either a bug in my implementation, or a NN doesn't do that well at this problem.
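One hedged guess about the disappointing score, offered as an assumption rather than a verified diagnosis: `quadratic_weighted_kappa` compares integer ratings, while the network emits continuous values in [1, 8], so clipping and rounding the predictions before scoring might already move the metric in the right direction.

```
import numpy as np
from ml_metrics import quadratic_weighted_kappa

def rounded_kappa_metric(probs, y):
    # Clip to the valid 1-8 response range and round to integer ratings before scoring
    preds = np.clip(np.round(np.asarray(probs)[:, 0]), 1, 8).astype(int)
    actuals = np.asarray(y)[:, 0].astype(int)
    return quadratic_weighted_kappa(preds, actuals)
```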
Carbon Insight: Carbon Emissions Visualization ============================================== This tutorial aims to showcase how to visualize anthropogenic CO2 emissions with a near-global coverage and track correlations between global carbon emissions and socioeconomic factors such as COVID-19 and GDP. ``` # Requirements %pip install numpy %pip install pandas %pip install matplotlib ``` # A. Process Carbon Emission Data This notebook helps you to process and visualize carbon emission data provided by [Carbon-Monitor](https://carbonmonitor.org/), which records human-caused carbon emissions from different countries, sources, and timeframes that are of interest to you. Overview: - [Process carbon emission data](#a1) - [Download data from Carbon Monitor](#a11) - [Calculate the rate of change](#a12) - [Expand country regions](#a13) - [Visualize carbon emission data](#a2) - [Observe carbon emission data from the perspective of time](#a21) - [Compare carbon emission data of different sectors](#a22) - [Examples](#a3) - [World carbon emission data](#a31) - [US carbon emission data](#a32) ``` import io from urllib.request import urlopen import numpy as np import pandas as pd import matplotlib.pyplot as plt from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() import os # Optional Function: Export Data def export_data(file_name: str, df: pd.DataFrame): # df = country_region_name_to_code(df) export_path = os.path.join('export_data', file_name) print(f'Export Data to {export_path}') if not os.path.exists('export_data'): os.mkdir('export_data') with open(export_path, 'w', encoding='utf-8') as f: f.write(df.to_csv(index=None, line_terminator='\n', encoding='utf-8')) ``` ## <a id="a1"></a> 1. Process Data ### <a id="a11"></a> 1.1. Download data from Carbon Monitor We are going to download tabular carbon emission data and convert to Pandas Dataframe. Supported data types: - ```carbon_global``` includes carbon emission data of 11 countries and regions worldwide. - ```carbon_us``` includes carbon emission data of 51 states of the United States. - ```carbon_eu``` includes carbon emission data of 27 countries of the European Union. - ```carbon_china``` includes carbon emission data of 31 cities and provinces of China. ``` def get_data_from_carbon_monitor(data_type='carbon_global'): assert data_type in ['carbon_global', 'carbon_us', 'carbon_eu', 'carbon_china'] data_url = f'https://datas.carbonmonitor.org/API/downloadFullDataset.php?source={data_type}' data = urlopen(data_url).read().decode('utf-8-sig') df = pd.read_csv(io.StringIO(data)) df = df.drop(columns=['timestamp']) df = df.loc[pd.notna(df['date'])] df = df.rename(columns={'country': 'country_region'}) df['date'] = pd.to_datetime(df['date'], format='%d/%m/%Y') if data_type == 'carbon_us': df = df.loc[df['state'] != 'United States'] return df ``` ### <a id="a12"></a> 1.2. Calculate the rate of change The rate of change $\Delta(s, r, t)$ is defined as the ratio of current value and moving average of a certain window size: $$\begin{aligned} \Delta(s, r, t) = \left\{\begin{aligned} &\frac{TX(s, r, t)}{\sum_{\tau=t-T}^{t-1}X(\tau)}, &t\geq T\\ &1, &0<t<T \end{aligned}\right. \end{aligned}$$ Where $X(s, r, t)$ is the carbon emission value of sector $s$, region $r$ and date $t$; $T$ is the window size with default value $T=14$. 
``` def calculate_rate_of_change(df, window_size=14): region_scope = 'state' if 'state' in df.columns else 'country_region' new_df = pd.DataFrame() for sector in set(df['sector']): sector_mask = df['sector'] == sector for region, values in df.loc[sector_mask].pivot(index='date', columns=region_scope, values='value').items(): values.fillna(0) rates = values / values.rolling(window_size).mean() rates.fillna(value=1, inplace=True) tmp_df = pd.DataFrame( index=values.index, columns=['value', 'rate_of_change'], data=np.array([values.to_numpy(), rates.to_numpy()]).T ) tmp_df['sector'] = sector tmp_df[region_scope] = region new_df = new_df.append(tmp_df.reset_index()) return new_df ``` ### <a id="a13"></a> 1.3. Expand country regions *Note: This step applies only to the ```carbon_global``` dataset.* The dataset ```carbon_global``` does not list all the countries/regions in the world. Instead, there are two groups which contains multiple countries/regions: ```ROW``` (i.e. the rest of the world) and ```EU27 & UK```. In order to obtain the carbon emission data of countries/regions in these two groups, we can refer to [the EDGAR dataset](https://edgar.jrc.ec.europa.eu/dataset_ghg60) and use [the table of CO2 emissions of all world countries in 2019](./static_data/Baseline.csv) as the baseline. Assume the the carbon emission of each non-listed country/region is linearly related to the carbon emission of the group it belongs to, we have: $$\begin{aligned} X(s, r, t) &= \frac{\sum_{r_i\in R(r)}X(s, r_i, t)}{\sum_{r_i\in R(r)}X(s, r_i, t_0)}X(s, r, t_0)\\ &= \frac{X_{Raw}(s, R(r), t)}{\sum_{r_i\in R(r)}X_{Baseline}(s, r_i)}X_{Baseline}(s, r) \end{aligned}$$ Where - $X(s, r, t)$ is the carbon emission value of sector $s$, country/region $r$ and date $t$. - $t_0$ is the date of the baseline table. - $R(r)$ is the group that contains country/region $r$. - $X_{Raw}(s, R, t)$ is the carbon emission value of sector $s$, country/region group $R$ and date $t$ in the ```carbon_global``` dataset. - $X_{Baseline}(s, r)$ is the carbon emission value of sector $s$ and country/region $r$ in the baseline table. Note that the baseline table does not contain the ```International Aviation``` sector. Therefore, the data for ```International Aviation``` is only available to countries listed in the ```carbon_global``` dataset. When we expand the ```ROW``` and the ```EU27 & UK``` groups to other countries/regions of the world, only the other five sectors are considered. 
``` def expand_country_regions(df): sectors = set(df['sector']) assert 'country_region' in df.columns df = df.replace('US', 'United States') df = df.replace('UK', 'United Kingdom') original_country_regions = set(df['country_region']) country_region_df = pd.read_csv('static_data/CountryRegion.csv') base = {} name_to_code = {} for _, (name, code, source) in country_region_df.loc[:, ['Name', 'Code', 'DataSource']].iterrows(): if source.startswith('Simulated') and name not in original_country_regions: name_to_code[name] = code base[code] = 'ROW' if source.endswith('ROW') else 'EU27 & UK' baseline = pd.read_csv('static_data/Baseline.csv') baseline = baseline.set_index('CountryRegionCode') baseline = baseline.loc[:, [sector for sector in baseline.columns if sector in sectors]] group_baseline = {} for group in original_country_regions & set(['ROW', 'EU27 & UK']): group_baseline[group] = baseline.loc[[code for code, base_group in base.items() if base_group == group], :].sum() new_df = pd.DataFrame() sector_masks = {sector: df['sector'] == sector for sector in sectors} for country_region in set(country_region_df['Name']): if country_region in name_to_code: code = name_to_code[country_region] group = base[code] group_mask = df['country_region'] == group for sector, sum_value in group_baseline[group].items(): tmp_df = df.loc[sector_masks[sector] & group_mask, :].copy() tmp_df['value'] = tmp_df['value'] / sum_value * baseline.loc[code, sector] tmp_df['country_region'] = country_region new_df = new_df.append(tmp_df) elif country_region in original_country_regions: new_df = new_df.append(df.loc[df['country_region'] == country_region]) return new_df ``` ## 2. <a id="a2"></a> Visualize Data This is a auxiliary module for displaying data, which can be modified arbitrarily. ### <a id="a21"></a> 2.1. Plot by date In this part we are going to create a line chart, where the emission value and rate of change for given counties during the given time can be browsed. 
``` def plot_by_date(df, start_date=None, end_date=None, sector=None, regions=None, title='Carbon Emission by Date'): if start_date is None: start_date = df['date'].min() if end_date is None: end_date = df['date'].max() tmp_df = df.loc[(df['date'] >= start_date) & (df['date'] <= end_date)] region_scope = 'state' if 'state' in tmp_df.columns else 'country_region' if regions is None or type(regions) == int: region_list = list(set(tmp_df[region_scope])) sector_mask = True if sector is None else tmp_df['sector'] == sector region_list.sort(key=lambda region: -tmp_df.loc[(tmp_df[region_scope] == region) & sector_mask, 'value'].sum()) regions = region_list[:3 if regions is None else regions] tmp_df = pd.concat([tmp_df.loc[tmp_df[region_scope] == region] for region in regions]) if sector not in set(tmp_df['sector']): tmp_df['rate_of_change'] = tmp_df['value'] / tmp_df['rate_of_change'] tmp_df = tmp_df.groupby(['date', region_scope]).sum().reset_index() value_df = tmp_df.pivot(index='date', columns=region_scope, values='value') rate_df = tmp_df.pivot(index='date', columns=region_scope, values='rate_of_change') rate_df = value_df / rate_df else: tmp_df = tmp_df.loc[tmp_df['sector'] == sector, [region_scope, 'date', 'value', 'rate_of_change']] value_df = tmp_df.pivot(index='date', columns=region_scope, values='value') rate_df = tmp_df.pivot(index='date', columns=region_scope, values='rate_of_change') value_df = value_df.loc[:, regions] rate_df = rate_df.loc[:, regions] fig = plt.figure(figsize=(10, 8)) fig.suptitle(title) plt.subplot(2, 1, 1) plt.plot(value_df) plt.ylabel('Carbon Emission Value / Mt CO2') plt.xticks(rotation=60) plt.legend(regions, loc='upper right') plt.subplot(2, 1, 2) plt.plot(rate_df) plt.ylabel('Rate of Change') plt.xticks(rotation=60) plt.legend(regions, loc='upper right') plt.subplots_adjust(hspace=0.3) ``` ### <a id="a22"></a> 2.2. Plot by sector Generally, sources of emissions can be divided into five or six categories: - Domestic Aviation - Ground Transport - Industry - Power - Residential - International Aviation Where the data of ```International Aviation``` are only available to ```carbon_global``` and ```carbon_us``` datasets. For ```carbon_global``` dataset, we can not expand the data for International Aviation of non-listed countries. Let's create a pie chart and a stacked column chart, where you can focus on details of specific countiy/regions’ emission data, including quantity and percentage breakdown by above sectors. 
``` def plot_by_sector(df, start_date=None, end_date=None, sectors=None, region=None, title='Carbon Emission Data by Sector'): if start_date is None: start_date = df['date'].min() if end_date is None: end_date = df['date'].max() tmp_df = df.loc[(df['date'] >= start_date) & (df['date'] <= end_date)] region_scope = 'state' if 'state' in df.columns else 'country_region' if region in set(tmp_df[region_scope]): tmp_df = tmp_df.loc[tmp_df[region_scope] == region] if sectors is None: sectors = list(set(tmp_df['sector'])) sectors.sort(key=lambda sector: -tmp_df.loc[tmp_df['sector'] == sector, 'value'].sum()) tmp_df = tmp_df.loc[[sector in sectors for sector in tmp_df['sector']]] fig = plt.figure(figsize=(10, 8)) fig.suptitle(title) plt.subplot(2, 1, 1) data = np.array([tmp_df.loc[tmp_df['sector'] == sector, 'value'].sum() for sector in sectors]) total = tmp_df['value'].sum() bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72) kw = dict(arrowprops=dict(arrowstyle="-"), bbox=bbox_props, zorder=0, va="center") wedges, texts = plt.pie(data, wedgeprops=dict(width=0.5), startangle=90) for i, p in enumerate(wedges): factor = data[i] / total * 100 if factor > 5: ang = (p.theta2 - p.theta1)/2. + p.theta1 y = np.sin(np.deg2rad(ang)) x = np.cos(np.deg2rad(ang)) horizontalalignment = {-1: "right", 1: "left"}[int(np.sign(x))] connectionstyle = "angle,angleA=0,angleB={}".format(ang) kw["arrowprops"].update({"connectionstyle": connectionstyle}) text = '{}\n{:.1f} Mt CO2 ({:.1f}%)'.format(sectors[i], data[i], factor) plt.annotate( text, xy=(x, y), xytext=(1.35 * np.sign(x), 1.4 * y), horizontalalignment=horizontalalignment, **kw ) plt.axis('equal') plt.subplot(2, 1, 2) labels = [] data = [[] for _ in sectors] date = pd.to_datetime(start_date) delta = pd.DateOffset(months=1) while date <= pd.to_datetime(end_date): sub_df = tmp_df.loc[(tmp_df['date'] >= date) & (tmp_df['date'] < date + delta)] for i, sector in enumerate(sectors): data[i].append(sub_df.loc[sub_df['sector'] == sector, 'value'].sum()) labels.append(date.strftime('%Y-%m')) date += delta data = np.array(data) for i, sector in enumerate(sectors): plt.bar(labels, data[i], bottom=data[:i].sum(axis=0), label=sector) plt.xticks(rotation=60) plt.legend() ``` ## <a id="a3"></a> 3. Examples ### <a id="a31"></a> 3.1. World carbon emission data ``` data_type = 'carbon_global' print(f'Download {data_type} data') global_df = get_data_from_carbon_monitor(data_type) print('Calculate rate of change') global_df = calculate_rate_of_change(global_df) print('Expand country / regions') global_df = expand_country_regions(global_df) export_data('global_carbon_emission_data.csv', global_df) global_df plot_by_date( global_df, start_date='2019-01-01', end_date='2020-12-31', sector='Residential', regions=['China', 'United States'], title='Residential Carbon Emission, China v.s. United States, 2019-2020' ) plot_by_sector( global_df, start_date='2019-01-01', end_date='2020-12-31', sectors=None, region=None, title='World Carbon Emission by Sectors, 2019-2020', ) ``` ### <a id="a32"></a> 3.2. 
US carbon emission data ``` data_type = 'carbon_us' print(f'Download {data_type} data') us_df = get_data_from_carbon_monitor(data_type) print('Calculate rate of change') us_df = calculate_rate_of_change(us_df) export_data('us_carbon_emission_data.csv', us_df) us_df plot_by_date( us_df, start_date='2019-01-01', end_date='2020-12-31', sector=None, regions=3, title='US Carbon Emission, Top 3 States, 2019-2020' ) plot_by_sector( us_df, start_date='2019-01-01', end_date='2020-12-31', sectors = None, region='California', title='California Carbon Emission by Sectors, 2019-2020', ) ``` # B. Co-Analysis of Carbon Emission Data v.s. COVID-19 Data This section will help you to visualize the relativity between carbon emissions in different countries and the trends of the COVID-19 pandemic since January 2020 provided by [Oxford COVID-19 Government Response Tracker](https://covidtracker.bsg.ox.ac.uk/). The severity of the epidemic can be shown in three aspects: the number of new diagnoses, the number of deaths and the stringency and policy indices of governments. Overview: - [Download data from Oxford COVID-19 Government Response Tracker](#b1) - [Visualize COVID-19 data and carbon emission data](#b2) - [Example: COVID-19 cases and stringency index v.s. carbon emission in US](#b3) ``` import json import datetime from urllib.request import urlopen ``` ## 1. <a id="b1"></a> Download COVID-19 Data We are going to download JSON-formatted COVID-19 data and convert to Pandas Dataframe. The Oxford COVID-19 Government Response Tracker dataset provides confirmed cases, deaths and stringency index data for all countries/regions since January 2020. - The ```confirmed``` measurement records the total number of confirmed COVID-19 cases since January 2020. We will convert it into incremental data. - The ```deaths``` measurement records the total number of patients who died due to infection with COVID-19 since January 2020. We will convert it into incremental data. - The ```stringency``` measurement means the Stringency Index, which is a float number from 0 to 100 that reflects how strict a country’s measures were, including lockdown, school closures, travel bans, etc. A higher score indicates a stricter response (i.e. 100 = strictest response). ``` def get_covid_data_from_oxford_covid_tracker(data_type='carbon_global'): data = json.loads(urlopen("https://covidtrackerapi.bsg.ox.ac.uk/api/v2/stringency/date-range/{}/{}".format( "2020-01-22", datetime.datetime.now().strftime("%Y-%m-%d") )).read().decode('utf-8-sig')) country_region_df = pd.read_csv('static_data/CountryRegion.csv') code_to_name = {code: name for _, (name, code) in country_region_df.loc[:, ['Name', 'Code']].iterrows()} last_df = 0 df = pd.DataFrame() for date in sorted(data['data'].keys()): sum_df = pd.DataFrame({name: data['data'][date][code] for code, name in code_to_name.items() if code in data['data'][date]}) sum_df = sum_df.T[['confirmed', 'deaths', 'stringency']].fillna(last_df).astype(np.float32) tmp_df = sum_df - last_df last_df = sum_df[['confirmed', 'deaths']] last_df['stringency'] = 0 tmp_df = tmp_df.reset_index().rename(columns={'index': 'country_region'}) tmp_df['date'] = pd.to_datetime(date) df = df.append(tmp_df) return df ``` ## <a id="b2"></a> 2. Visualize COVID-19 Data and Carbon Emission Data This part will guide you to create a line-column chart, where you can view the specified COVID-19 measurement (```confirmed```, ```deaths``` or ```stringency```) and carbon emissions in the specified country/region throughout time. 
``` def plot_covid_data_vs_carbon_emission_data( covid_df, carbon_df, start_date=None, end_date=None, country_region=None, sector=None, covid_measurement='confirmed', title='Carbon Emission v.s. COVID-19 Confirmed Cases' ): if start_date is None: start_date = max(covid_df['date'].min(), carbon_df['date'].min()) if end_date is None: end_date = min(covid_df['date'].max(), carbon_df['date'].max()) x = pd.to_datetime(start_date) dates = [x] while x <= pd.to_datetime(end_date): x = x.replace(year=x.year+1, month=1) if x.month == 12 else x.replace(month=x.month+1) dates.append(x) dates = [f'{x.year}-{x.month}' for x in dates] plt.figure(figsize=(10, 6)) plt.title(title) plt.xticks(rotation=60) if sector in set(carbon_df['sector']): carbon_df = carbon_df[carbon_df['sector'] == sector] else: sector = 'All Sectors' if 'country_region' not in carbon_df.columns: raise ValueError('The carbon emission data need to be disaggregated by countries/regions.') if country_region in set(carbon_df['country_region']): carbon_df = carbon_df.loc[carbon_df['country_region'] == country_region] else: country_region = 'World' carbon_df = carbon_df[['date', 'value']] carbon_df = carbon_df.loc[(carbon_df['date'] >= f'{dates[0]}-01') & (carbon_df['date'] < f'{dates[-1]}-01')].set_index('date') carbon_df = carbon_df.groupby(carbon_df.index.year * 12 + carbon_df.index.month).sum() plt.bar(dates[:-1], carbon_df['value'], color='C1') plt.ylim(0) plt.legend([f'{country_region} {sector}\nCarbon Emission / Mt CO2'], loc='upper left') plt.twinx() if country_region in set(covid_df['country_region']): covid_df = covid_df.loc[covid_df['country_region'] == country_region] covid_df = covid_df[['date', covid_measurement]] covid_df = covid_df.loc[(covid_df['date'] >= f'{dates[0]}-01') & (covid_df['date'] < f'{dates[-1]}-01')].set_index('date') covid_df = covid_df.groupby(covid_df.index.year * 12 + covid_df.index.month) covid_df = covid_df.mean() if covid_measurement == 'stringency' else covid_df.sum() plt.plot(dates[:-1], covid_df[covid_measurement]) plt.ylim(0, 100 if covid_measurement == 'stringency' else None) plt.legend([f'COVID-19\n{covid_measurement}'], loc='upper right') ``` ## <a id="b3"></a> 3. Examples ``` print(f'Download COVID-19 data') covid_df = get_covid_data_from_oxford_covid_tracker(data_type) export_data('covid_data.csv', covid_df) covid_df plot_covid_data_vs_carbon_emission_data( covid_df, global_df, start_date=None, end_date=None, country_region='United States', sector=None, covid_measurement='confirmed', title = 'US Carbon Emission v.s. COVID-19 Confirmed Cases' ) plot_covid_data_vs_carbon_emission_data( covid_df, global_df, start_date=None, end_date=None, country_region='United States', sector=None, covid_measurement='stringency', title = 'US Carbon Emission v.s. COVID-19 Stringency Index' ) ``` # C. Co-Analysis of Historical Carbon Emission Data v.s. Population & GDP Data This section illustrates how to compare carbon intensity and per capita emissions of different countries/regions. Refer to [the EDGAR dataset](https://edgar.jrc.ec.europa.eu/dataset_ghg60) and [World Bank Open Data](https://data.worldbank.org/), carbon emissions, population and GDP data of countries/regions in the world from 1970 to 2018 are available. 
Overview: - [Process carbon emission & social economy data](#c1) - [Download data from EDGAR](#c11) - [Download data from World Bank](#c12) - [Merge datasets](#c13) - [Visualize carbon emission & social economy data](#c2) - [See how per capita emissions change over time in different countries/regions](#c21) - [Observe how *carbon intensity* reduced over time](#c22) - [Example: relationships of carbon emission and social economy in huge countries](#c3) *Carbon intensity* is the measure of CO2 produced per US dollar GDP. In other words, it’s a measure of how much CO2 we emit when we generate one dollar of domestic economy. A rapidly decreasing carbon intensity is beneficial for the environment and economy. ``` import zipfile ``` ## <a id="c1"></a> 1. Process Carbon Emission & Social Economy Data ### <a id="c11"></a> 1.1. Download 1970-2018 yearly carbon emission data from the EDGAR dataset ``` def get_historical_carbon_emission_data_from_edgar(): if not os.path.exists('download_data'): os.mkdir('download_data') site = 'https://cidportal.jrc.ec.europa.eu/ftp/jrc-opendata/EDGAR/datasets' dataset = 'v60_GHG/CO2_excl_short-cycle_org_C/v60_GHG_CO2_excl_short-cycle_org_C_1970_2018.zip' with open('download_data/historical_carbon_emission.zip', 'wb') as f: f.write(urlopen(f'{site}/{dataset}').read()) with zipfile.ZipFile('download_data/historical_carbon_emission.zip', 'r') as zip_ref: zip_ref.extractall('download_data/historical_carbon_emission') hist_carbon_df = pd.read_excel( 'download_data/historical_carbon_emission/v60_CO2_excl_short-cycle_org_C_1970_2018.xls', sheet_name='TOTALS BY COUNTRY', index_col=2, header=9, ).iloc[:, 4:] hist_carbon_df.columns = hist_carbon_df.columns.map(lambda x: pd.to_datetime(f'{x[-4:]}-01-01')) hist_carbon_df.index = hist_carbon_df.index.rename('country_region') hist_carbon_df *= 1000 return hist_carbon_df ``` ### <a id="c12"></a> 1.2. Download 1960-pressent yearly population and GDP data from World Bank ``` def read_worldbank_data(data_id): tmp_df = pd.read_excel( f'https://api.worldbank.org/v2/en/indicator/{data_id}?downloadformat=excel', sheet_name='Data', index_col=1, header=3, ).iloc[:, 3:] tmp_df.columns = tmp_df.columns.map(lambda x: pd.to_datetime(x, format='%Y')) tmp_df.index = tmp_df.index.rename('country_region') return tmp_df def get_population_and_gdp_data_from_worldbank(): return read_worldbank_data('SP.POP.TOTL'), read_worldbank_data('NY.GDP.MKTP.CD') ``` ### <a id="c13"></a> 1.3. 
Merge the three datasets ``` def melt_table_by_years(df, value_name, country_region_codes, code_to_name, years): return df.loc[country_region_codes, years].rename(index=code_to_name).reset_index().melt( id_vars=['country_region'], value_vars=years, var_name='date', value_name=value_name ) def merge_historical_data(hist_carbon_df, pop_df, gdp_df): country_region_df = pd.read_csv('static_data/CountryRegion.csv') code_to_name = {code: name for _, (name, code) in country_region_df.loc[:, ['Name', 'Code']].iterrows()} country_region_codes = sorted(set(pop_df.index) & set(gdp_df.index) & set(hist_carbon_df.index) & set(code_to_name.keys())) years = sorted(set(pop_df.columns) & set(gdp_df.columns) & set(hist_carbon_df.columns)) pop_df = melt_table_by_years(pop_df, 'population', country_region_codes, code_to_name, years) gdp_df = melt_table_by_years(gdp_df, 'gdp', country_region_codes, code_to_name, years) hist_carbon_df = melt_table_by_years(hist_carbon_df, 'carbon_emission', country_region_codes, code_to_name, years) hist_carbon_df['population'] = pop_df['population'] hist_carbon_df['gdp'] = gdp_df['gdp'] return hist_carbon_df.fillna(0) ``` ## <a id="c2"></a> 2. Visualize Carbon Emission & Social Economy Data ## <a id="c21"></a> 2.1. Plot changes in per capita emissions We now will walk you through how to plot a bubble chart of per capita GDP and per capita emissions of different countries/regions for a given year. ``` def plot_carbon_emission_data_vs_gdp(df, year=None, countries_regions=None, title='Carbon Emission per Capita v.s. GDP per Capita'): if year is None: date = df['date'].max() else: date = min(max(pd.to_datetime(year, format='%Y'), df['date'].min()), df['date'].max()) df = df[df['date'] == date] if countries_regions is None or type(countries_regions) == int: country_region_list = list(set(df['country_region'])) country_region_list.sort(key=lambda country_region: -df.loc[df['country_region'] == country_region, 'population'].to_numpy()) countries_regions = country_region_list[:10 if countries_regions is None else countries_regions] plt.figure(figsize=(10, 6)) plt.title(title) max_pop = df['population'].max() for country_region in countries_regions: row = df.loc[df['country_region'] == country_region] plt.scatter( x=row['gdp'] / row['population'], y=row['carbon_emission'] / row['population'], s=row['population'] / max_pop * 1000, ) for lgnd in plt.legend(countries_regions).legendHandles: lgnd._sizes = [50] plt.xlabel('GDP per Capita (USD)') plt.ylabel('Carbon Emission per Capita (tCO2)') ``` ## <a id="c22"></a> 2.2. Plot changes in carbon intensity To see changes in Carbon Intensity of different countries overtime, let’s plot a line chart. 
``` def plot_carbon_indensity_data(df, start_year=None, end_year=None, countries_regions=None, title='Carbon Indensity'): start_date = df['date'].min() if start_year is None else pd.to_datetime(start_year, format='%Y') end_date = df['date'].max() if end_year is None else pd.to_datetime(end_year, format='%Y') df = df[(df['date'] >= start_date) & (df['date'] <= end_date)] if countries_regions is None or type(countries_regions) == int: country_region_list = list(set(df['country_region'])) country_region_list.sort(key=lambda country_region: -df.loc[df['country_region'] == country_region, 'population'].sum()) countries_regions = country_region_list[:3 if countries_regions is None else countries_regions] df = pd.concat([df[df['country_region'] == country_region] for country_region in countries_regions]) df['carbon_indensity'] = df['carbon_emission'] / df['gdp'] indensity_df = df.pivot(index='date', columns='country_region', values='carbon_indensity')[countries_regions] emission_df = df.pivot(index='date', columns='country_region', values='carbon_emission')[countries_regions] plt.figure(figsize=(10, 8)) plt.subplot(211) plt.title(title) plt.plot(indensity_df) plt.legend(countries_regions) plt.ylabel('Carbon Emission (tCO2) per Dollar GDP') plt.subplot(212) plt.plot(emission_df) plt.legend(countries_regions) plt.ylabel('Carbon Emission (tCO2)') ``` ## <a id="c3"></a> 3. Examples ``` print('Download historical carbon emission data') hist_carbon_df = get_historical_carbon_emission_data_from_edgar() print('Download population & GDP data') pop_df, gdp_df = get_population_and_gdp_data_from_worldbank() print('Merge data') hist_carbon_df = merge_historical_data(hist_carbon_df, pop_df, gdp_df) export_data('historical_carbon_emission_data.csv', hist_carbon_df) hist_carbon_df plot_carbon_emission_data_vs_gdp( hist_carbon_df, year=2018, countries_regions=10, title = 'Carbon Emission per Capita v.s. GDP per Capita, Top 10 Populous Countries/Regions, 2018' ) plot_carbon_indensity_data( hist_carbon_df, start_year=None, end_year=None, countries_regions=['United States', 'China'], title='Carbon Indensity & Carbon Emission, US v.s. China, 1970-2018' ) ```
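As a small non-plotting complement (a sketch that reuses the merged `hist_carbon_df` built above), the same per-capita and intensity quantities can also be tabulated directly, which is convenient for quick comparisons without drawing a chart.

```
import pandas as pd

snapshot = hist_carbon_df[hist_carbon_df['date'] == pd.Timestamp('2018-01-01')].copy()
snapshot = snapshot[(snapshot['population'] > 0) & (snapshot['gdp'] > 0)]
snapshot['emission_per_capita'] = snapshot['carbon_emission'] / snapshot['population']
snapshot['carbon_intensity'] = snapshot['carbon_emission'] / snapshot['gdp']
print(snapshot.sort_values('emission_per_capita', ascending=False)
              [['country_region', 'emission_per_capita', 'carbon_intensity']]
              .head(10))
```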
# Homework 2 - Deep Learning
## Liberatori Benedetta

```
import torch
import numpy as np

# A class defining the model for the Multi Layer Perceptron
class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(in_features=6, out_features=2, bias=True)
        self.layer2 = torch.nn.Linear(in_features=2, out_features=1, bias=True)

    def forward(self, X):
        out = self.layer1(X)
        out = self.layer2(out)
        out = torch.sigmoid(out)
        return out

# Initialization of weights: uniformly distributed between -0.3 and 0.3
W = (0.3 + 0.3) * torch.rand(6, 1) - 0.3

# Initialization of data: 50% symmetric randomly generated tensors,
# 50% not necessarily symmetric
firsthalf = torch.rand([32, 3])
secondhalf = torch.zeros([32, 3])
secondhalf[:, 2:3] = firsthalf[:, 0:1]
secondhalf[:, 1:2] = firsthalf[:, 1:2]
secondhalf[:, 0:1] = firsthalf[:, 2:3]

y1 = torch.ones([32, 1])
y0 = torch.zeros([32, 1])

simmetric = torch.cat((firsthalf, secondhalf, y1), dim=1)
notsimmetric = torch.rand([32, 6])
notsimmetric = torch.cat((notsimmetric, y0), dim=1)
data = torch.cat((notsimmetric, simmetric), dim=0)

# Permutation of the concatenated dataset
data = data[torch.randperm(data.size()[0])]

def train_epoch(model, data, loss_fn, optimizer):
    X = data[:, 0:6]
    y = data[:, 6]

    # 1. reset the gradients previously accumulated by the optimizer
    #    this will avoid re-using gradients from previous loops
    optimizer.zero_grad()
    # 2. get the predictions from the current state of the model
    #    this is the forward pass
    y_hat = model(X)
    # 3. calculate the loss on the current mini-batch
    loss = loss_fn(y_hat, y.unsqueeze(1))
    # 4. execute the backward pass given the current loss
    loss.backward()
    # 5. update the value of the params
    optimizer.step()
    return model

def train_model(model, data, loss_fn, optimizer, num_epochs):
    model.train()
    for epoch in range(num_epochs):
        model = train_epoch(model, data, loss_fn, optimizer)
    for i in model.state_dict():
        print(model.state_dict()[i])

# Parameters set as defined in the paper
learn_rate = 0.1
num_epochs = 1425
beta = 0.9

model = MLP()

# I have judged the loss function (3) reported in the paper a general one for the discussion.
# Since the problem of interest is a binary classification and that loss is mostly suited for
# regression problems, I have used a Binary Cross Entropy loss instead.
loss_fn = torch.nn.BCELoss()

# Gradient descent optimizer with momentum
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate, momentum=beta)

train_model(model, data, loss_fn, optimizer, num_epochs)
```

## Some conclusions:

Even though the original protocol has been followed as closely as possible, the results obtained in the same number of epochs are far from the ones stated in the paper. Not only do the numbers differ, the learned weights are not even close to being symmetric. I assume this could depend on the initialization of the data, which was not reported in the paper and was therefore a completely autonomous choice.
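A small follow-up sketch that was not part of the original submission: after training, the classification accuracy on the 64 generated patterns can be checked directly, which makes it easier to judge whether the network has solved the task even when the learned weights do not look symmetric.

```
with torch.no_grad():
    X_all = data[:, 0:6]
    y_all = data[:, 6].unsqueeze(1)
    predictions = (model(X_all) > 0.5).float()
    accuracy = (predictions == y_all).float().mean().item()
print(f"Training accuracy on the 64 patterns: {accuracy:.2%}")
```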
# GLM: Negative Binomial Regression ``` %matplotlib inline import numpy as np import pandas as pd import pymc3 as pm from scipy import stats import matplotlib.pyplot as plt plt.style.use('seaborn-darkgrid') import seaborn as sns import re print('Running on PyMC3 v{}'.format(pm.__version__)) ``` This notebook demos negative binomial regression using the `glm` submodule. It closely follows the GLM Poisson regression example by [Jonathan Sedar](https://github.com/jonsedar) (which is in turn inspired by [a project by Ian Osvald](http://ianozsvald.com/2016/05/07/statistically-solving-sneezes-and-sniffles-a-work-in-progress-report-at-pydatalondon-2016/)) except the data here is negative binomially distributed instead of Poisson distributed. Negative binomial regression is used to model count data for which the variance is higher than the mean. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) can be thought of as a Poisson distribution whose rate parameter is gamma distributed, so that rate parameter can be adjusted to account for the increased variance. ### Convenience Functions Taken from the Poisson regression example. ``` def plot_traces(trcs, varnames=None): '''Plot traces with overlaid means and values''' nrows = len(trcs.varnames) if varnames is not None: nrows = len(varnames) ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4), lines={k: v['mean'] for k, v in pm.summary(trcs,varnames=varnames).iterrows()}) for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data', xytext=(5,10), textcoords='offset points', rotation=90, va='bottom', fontsize='large', color='#AA0022') def strip_derived_rvs(rvs): '''Remove PyMC3-generated RVs from a list''' ret_rvs = [] for rv in rvs: if not (re.search('_log',rv.name) or re.search('_interval',rv.name)): ret_rvs.append(rv) return ret_rvs ``` ### Generate Data As in the Poisson regression example, we assume that sneezing occurs at some baseline rate, and that consuming alcohol, not taking antihistamines, or doing both, increase its frequency. #### Poisson Data First, let's look at some Poisson distributed data from the Poisson regression example. ``` np.random.seed(123) # Mean Poisson values theta_noalcohol_meds = 1 # no alcohol, took an antihist theta_alcohol_meds = 3 # alcohol, took an antihist theta_noalcohol_nomeds = 6 # no alcohol, no antihist theta_alcohol_nomeds = 36 # alcohol, no antihist # Create samples q = 1000 df_pois = pd.DataFrame({ 'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q), np.random.poisson(theta_alcohol_meds, q), np.random.poisson(theta_noalcohol_nomeds, q), np.random.poisson(theta_alcohol_nomeds, q))), 'alcohol': np.concatenate((np.repeat(False, q), np.repeat(True, q), np.repeat(False, q), np.repeat(True, q))), 'nomeds': np.concatenate((np.repeat(False, q), np.repeat(False, q), np.repeat(True, q), np.repeat(True, q)))}) df_pois.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var']) ``` Since the mean and variance of a Poisson distributed random variable are equal, the sample means and variances are very close. #### Negative Binomial Data Now, suppose every subject in the dataset had the flu, increasing the variance of their sneezing (and causing an unfortunate few to sneeze over 70 times a day). If the mean number of sneezes stays the same but variance increases, the data might follow a negative binomial distribution. 
```
# Gamma shape parameter
alpha = 10

def get_nb_vals(mu, alpha, size):
    """Generate negative binomially distributed samples by
    drawing a sample from a gamma distribution with mean `mu` and
    shape parameter `alpha`, then drawing from a Poisson
    distribution whose rate parameter is given by the sampled
    gamma variable.
    """
    g = stats.gamma.rvs(alpha, scale=mu / alpha, size=size)
    return stats.poisson.rvs(g)

# Create samples
n = 1000
df = pd.DataFrame({
    'nsneeze': np.concatenate((get_nb_vals(theta_noalcohol_meds, alpha, n),
                               get_nb_vals(theta_alcohol_meds, alpha, n),
                               get_nb_vals(theta_noalcohol_nomeds, alpha, n),
                               get_nb_vals(theta_alcohol_nomeds, alpha, n))),
    'alcohol': np.concatenate((np.repeat(False, n),
                               np.repeat(True, n),
                               np.repeat(False, n),
                               np.repeat(True, n))),
    'nomeds': np.concatenate((np.repeat(False, n),
                              np.repeat(False, n),
                              np.repeat(True, n),
                              np.repeat(True, n)))})

df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
```

As in the Poisson regression example, we see that drinking alcohol and/or not taking antihistamines increase the sneezing rate to varying degrees. Unlike in that example, for each combination of `alcohol` and `nomeds`, the variance of `nsneeze` is higher than the mean. This suggests that a Poisson distribution would be a poor fit for the data since the mean and variance of a Poisson distribution are equal.

### Visualize the Data

```
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df,
                   kind='count', aspect=1.5)

# Make x-axis ticklabels less crowded
ax = g.axes[1, 0]
labels = range(len(ax.get_xticklabels(which='both')))
ax.set_xticks(labels[::5])
ax.set_xticklabels(labels[::5]);
```

## Negative Binomial Regression

### Create GLM Model

```
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds'

with pm.Model() as model:
    pm.glm.GLM.from_formula(formula=fml, data=df,
                            family=pm.glm.families.NegativeBinomial())

    # Old initialization
    # start = pm.find_MAP(fmin=optimize.fmin_powell)
    # C = pm.approx_hessian(start)
    # trace = pm.sample(4000, step=pm.NUTS(scaling=C))

    trace = pm.sample(1000, tune=2000, cores=2)
```

### View Results

```
rvs = [rv.name for rv in strip_derived_rvs(model.unobserved_RVs)]
plot_traces(trace, varnames=rvs);

# Transform coefficients to recover parameter values
np.exp(pm.summary(trace, varnames=rvs)[['mean','hpd_2.5','hpd_97.5']])
```

The mean values are close to the values we specified when generating the data:

- The base rate is a constant 1.
- Drinking alcohol triples the base rate.
- Not taking antihistamines increases the base rate by 6 times.
- Drinking alcohol and not taking antihistamines doubles the rate that would be expected if their rates were independent. If they were independent, then doing both would increase the base rate by 3\*6=18 times, but instead the base rate is increased by 3\*6\*2=36 times.

Finally, even though the sample for `mu` is highly skewed, its median value is close to the sample mean, and the mean of `alpha` is also quite close to its actual value of 10.

```
np.percentile(trace['mu'], [25,50,75])

df.nsneeze.mean()

trace['alpha'].mean()
```
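As a quick sanity check on the simulated data (a sketch using only the `df` and `alpha` defined above), the sample variances should sit near the negative binomial prediction $\mu + \mu^2/\alpha$ for each group:

```
# Compare observed group variances with the NB prediction mu + mu^2 / alpha
check = df.groupby(['nomeds', 'alcohol'])['nsneeze'].agg(['mean', 'var'])
check['nb_var_pred'] = check['mean'] + check['mean']**2 / alpha
print(check)
```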
# Multi-qubit quantum circuit

In this exercise we create a two-qubit circuit, with both qubits in superposition, and then measure the individual qubits, resulting in two coin toss results with the following possible outcomes with equal probability: $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$. This is like tossing two coins.

Import the required libraries, including the IBM Q library for working with IBM Q hardware.

```
import numpy as np
from qiskit import QuantumCircuit, execute, Aer
from qiskit.tools.monitor import job_monitor

# Import visualization
from qiskit.visualization import plot_histogram, plot_bloch_multivector, iplot_bloch_multivector, plot_state_qsphere, iplot_state_qsphere

# Add the state vector calculation function
def get_psi(circuit, vis):
    global psi
    backend = Aer.get_backend('statevector_simulator')
    psi = execute(circuit, backend).result().get_statevector(circuit)
    if vis=="IQ":
        display(iplot_state_qsphere(psi))
    elif vis=="Q":
        display(plot_state_qsphere(psi))
    elif vis=="M":
        print(psi)
    elif vis=="B":
        display(plot_bloch_multivector(psi))
    else: # vis="IB"
        display(iplot_bloch_multivector(psi))
    vis=""
```

How many qubits do we want to use? The notebook lets you set up multi-qubit circuits of various sizes. Keep in mind that the biggest publicly available IBM quantum computer is 14 qubits in size.

```
#n_qubits=int(input("Enter number of qubits:"))
n_qubits=2
```

Create a quantum circuit that includes the quantum register and the classical register. Then add a Hadamard (superposition) gate to all the qubits. Add measurement gates.

```
qc1 = QuantumCircuit(n_qubits,n_qubits)
qc_measure = QuantumCircuit(n_qubits,n_qubits)

for qubit in range (0,n_qubits):
    qc1.h(qubit) #A Hadamard gate that creates a superposition

for qubit in range (0,n_qubits):
    qc_measure.measure(qubit,qubit)

display(qc1.draw(output="mpl"))
```

Now that we have more than one qubit it is starting to become a bit difficult to visualize the outcomes when running the circuit. To alleviate this we can instead have get_psi return the statevector itself by calling it with the vis parameter set to `"M"`. We can also have it display a Qiskit-unique visualization called a Q sphere by passing the parameter `"Q"` (a static Q sphere) or `"IQ"` (an interactive one).

```
get_psi(qc1,"M")
print (abs(np.square(psi)))
get_psi(qc1,"B")
```

Now we see the statevector for multiple qubits, and can calculate the probabilities for the different outcomes by taking the squared magnitude of each complex amplitude in the vector. The Q sphere visualization provides the same information in a visual form, with |0..0> at the north pole, |1..1> at the bottom, and other combinations on latitude circles. In the interactive version, you can hover over the tips of the vectors to see the state, probability, and phase data. In the static version, the size of the vector tip represents the relative probability of getting that specific result, and the color represents the phase angle for that specific output. More on that later!

Now combine your circuit with the measurement circuit and run 1,000 shots to get statistics on the possible outcomes.

```
backend = Aer.get_backend('qasm_simulator')

qc_final=qc1+qc_measure

job = execute(qc_final, backend, shots=1000)

counts1 = job.result().get_counts(qc_final)
print(counts1)

plot_histogram(counts1)
```

As you might expect, with two independent qubits each in a superposition, the resulting outcomes should be spread evenly across the possible outcomes, all the combinations of 0 and 1.
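To make the printed statevector easier to read, a small helper can map each basis state to its probability. This is only a sketch using `numpy`; the helper name is mine, and the bit-string labels follow the index order of the statevector:

```
# Turn the global statevector `psi` into a {bitstring: probability} dictionary
def probabilities(psi, n_qubits):
    probs = np.abs(np.asarray(psi))**2
    return {format(i, '0{}b'.format(n_qubits)): float(p)
            for i, p in enumerate(probs)}

get_psi(qc1, "M")
print(probabilities(psi, n_qubits))  # roughly 0.25 for each of 00, 01, 10, 11
```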
**Time for you to do some work!** To get an understanding of the probable outcomes and how these are displayed on the interactive (or static) Q Sphere, change the `n_qubits=2` value in the cell above, and run the cells again for a different number of qubits. When you are done, set the value back to 2, and continue on. ``` n_qubits=2 ``` # Entangled-qubit quantum circuit - The Bell state Now we are going to do something different. We will entangle the qubits. Create quantum circuit that includes the quantum register and the classic register. Then add a Hadamard (super position) gate to the first qubit. Then add a controlled-NOT gate (cx) between the first and second qubit, entangling them. Add measurement gates. We then take a look at using the CX (Controlled-NOT) gate to entangle the two qubits in a so called Bell state. This surprisingly results in the following possible outcomes with equal probability: $|00\rangle$ and $|11\rangle$. Two entangled qubits do not at all behave like two tossed coins. We then run the circuit a large number of times to see what the statistical behavior of the qubits are. Finally, we run the circuit on real IBM Q hardware to see how real physical qubits behave. In this exercise we introduce the CX gate, which creates entanglement between two qubits, by flipping the controlled qubit (q_1) if the controlling qubit (q_0) is 1. ![Hadamard circuit](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAN8AAACoCAYAAABg3jtVAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjAsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+17YcXAAAMfklEQVR4nO3df0zUd57H8Se/BERQgYIegpZCFRC4U+ueq13P6y/O9NI7a2p01Y10pVKaHjYmu8Y0vaQptta022Y3DZvWq7nWTVfU6qW6vdptqVvWXVx/VKjbUbECSrGgVBBBfsz94UpvSi1YZ+YN8309kknkO8OXdyLP+XznO8A3yO12uxERvwu2HkDEqRSfiBHFJ2JE8YkYUXwiRhSfiBHFJ2JE8YkYUXwiRhSfiBHFJ2JE8YkYUXwiRhSfiBHFJ2JE8YkYUXwiRhSfiBHFJ2JE8YkYUXwiRhSfiBHFJ2JE8YkYUXwiRhSfiBHFJ2Ik1HoAGZziN22+7i9+bPN1nUArn4gRxSdiRPGJGFF8IkYUn4gRxSdiRPGJGFF8Ikb0JrsMKfXnoaoernRDfDRMmwQRYdZT+YbjVr7e3l42btxIeno6ERER5ObmUl5ezuTJkykoKLAez2tefezvqPrwNY9tbrebV34aw4nKHUZTXV9bB/zyPdi4B353FH5/DH77Z3hyG3z0mfV0vuG4lS8/P58dO3bw5JNPMn36dCoqKli8eDFffvklTzzxhPV4XtF2/gyXWhq4JSXXY/tX52q40tFKYuoMo8m+3ZVu+NX78EVL//u6emD7AQgJhtnp/p/Nlxy18m3ZsoXNmzeza9cu1qxZw7x581i3bh2zZs2iu7ub6dOnA9DY2Mi9997LyJEjyc3N5dChQ8aT35jGmkqCgkOIm5Dlsb2p9ggjRycSHZdsNNm3+8vn0NAC7u94zDuHobvHXxP5h6PiW79+PXl5ecydO9dje1paGmFhYWRnZwNQWFjIlClTaG5upqioiIULF9LTM3z+5xtrKhk77nZCR0R6bP+y9ggJtw6tVQ/gjycgaIDHtF+BqjN+GcdvHHPYWV9fT1VVFatXr+53X21tLVlZWYSHh9Pa2so777zDmTNniIyMpKCggJKSEvbv38/s2bN9MltQ0EDfevAfb3zXuuCpsaaSlsYTlK6K99je1dnGjH9d6/XZbtZPf3mWqDHjB3xc/qonOLTnRZ/Pc7Pc7sH9XzkqPoBx48Z5bL98+TLl5eXMnz8fgOPHjxMXF0d8/NffuNnZ2Xz66ac+i8/bGk8d4AcL/pOMOcs9tr+5NpvEIbjydba3MHJ0IkFB330gdqX9Kz9N5B+OOey8FpPL5fLYvmHDBhoaGpg2bRoAly5dIiYmxuMxMTExtLW1+Ww2t9s94G2wWr44QeelC0zMuY/ouAl9t56uDjrbW0i4wZMtg5ntZm9L7ssYMLyQYNj/u9f8Ms/N3gbLMStfamoqOTk5lJSUEBsbS1JSEmVlZezevRug72RLVFQUra2tHp978eJFRo0a5feZv4/GmkpCw0f2O9PZcLyCUXHJRI1ONJrs+malw4d/hc6u6590mZUGoyL8OpbPOWblCw4OZuvWrWRlZVFYWMiKFSuIj4+nqKiI0NBQcnJyAEhPT6epqYnm5ua+z62qqiIzM9Nq9BvSWFNJ4q13EBzi+bzacOKPQ/KQE2B0JKz65/5vpl97tZmdDP82ze9j+VyQ+0bWyQC0bNkyjhw5wieffNK3bcGCBaSkpPDss8/yxhtvUFJSwvHjxwkJCTGb0wl/RqK9E/58Ct7+y9WP/2Ei/DAN0hLBD+d9/M4xK9/1HDhwoO+Q85pXXnmF6upqxo4dy8svv8y2bdtMw3OKkeHwT1O+/vgncyB9XGCGBw56zfdt2tracLlcPProox7bExMTee+994ymEqdwdHyjRo0aVm+eS2Bx/GGniBXFJ2JE8YkYUXwiRhSfiBHFJ2LE0W81DCe6YEng0conYkTxiRhRfCJGFJ+IEcUnYkTxiRhRfCJGFJ+IEcUnYkTxiRhRfCJGFJ+IEcUnYkTxiRhRfCJGFJ+IEcUnYkTxiRhRfDKkfNX+9b/rzkNXAP9BccdfpUjs1Z+Hj49DdT1c7PC8Lz
gIksbCD26DGbf2v4zYcKb4xExbB5RVwuHawT0+Mgz+fQbccWtgXLlI8YmJz5vg1Q+hrfPGP/fvU2DpDyF0mF+1TfGJ39U2w6/2Qmf3999HVhLk/+jqtdqHq2E8ugxHHV3wXx99d3i/+PHAf6e0+gy8/6l3Z/M3xSd+9T+H4EL7wI8bjHePwhdfeWdfFhwXX29vLxs3biQ9PZ2IiAhyc3MpLy9n8uTJFBQUWI8X0C5ehv0nvbe/nl74/TBe/RwXX35+Pk8//TSPPPIIe/bs4aGHHmLx4sXU1NT0uza7eNefTl4NxpsOnob273HSZihwVHxbtmxh8+bN7Nq1izVr1jBv3jzWrVvHrFmz6O7u7ovvqaeeIjMzk+DgYMrKyoynDhyuL7y/z+4eONXk/f36g6PiW79+PXl5ecydO9dje1paGmFhYWRnZwOQnp7OSy+9xMyZMy3GDEhuN9Rf8M2+65p9s19fc8xViurr66mqqmL16tX97qutrSUrK4vw8HAAli5dCsAzzzzjl9mCAuEd4wGEhUfx6GttHtsGOqN5vfuL3/T8+PmXfs2/bHrkJqbzrsG+e+eYla++vh6AcePGeWy/fPky5eXler3naz58ghmuT16OWfni4+MBcLlczJ8/v2/7hg0baGhoYNq0aVajDfqZcjjrdcPP34Ir/+8Hpb+5gl1zbcW73v3fVPzYSva+uvLmBjTgmPhSU1PJycmhpKSE2NhYkpKSKCsrY/fu3QBa+Xzs2g9I++LkSHKs9/fpD4457AwODmbr1q1kZWVRWFjIihUriI+Pp6ioiNDQUHJycqxHDHipCd7fZxAw6Rbv79cfHLPyAdx+++188MEHHtuWLVtGRkYGkZGRfdu6urro6emht7eXrq4uOjo6CA8PH7avLYaKf0zz/o+ETZ0AoyMHftxQ5JiV73oOHDjQ75Bz5cqVREZGsm/fPpYsWUJkZCSnT582mjBw3BINOcne3ee8DO/uz58cHV9bWxsul6vfyZbXX38dt9vtcZs0aZLNkAHmwRlXfy/PG2an++ZQ1l/0K0Xid0frYNNHcDPfeBNi4bG7h/dvtjt65RMb2cmwfM73/128lDhYNW94hwda+cTQ2QuwZf/Vv+EyGMFBcFcm3Jc9/H+LHRSfGOvphU/q4A8uOHnu2x8TEQYzU6++xksc7d/5fEnxyZDRfgXOnIemtqtRRoZBUiwkRENwAL5AUnwiRgLw+URkeFB8IkYUn4gRxSdiRPGJGFF8IkYUn4gRxSdiRPGJGFF8IkYUn4gRxSdiRPGJGFF8IkYUn4gRxSdiRPGJGFF8IkYUn4gRxecQdXV13HXXXWRkZDB16lTWrl1rPZLjKT6HCA0N5bnnnuPYsWMcPHiQiooKdu7caT2WoznqKkVONn78eMaPHw/AiBEjyMnJoba21ngqZ9PK50DNzc28/fbb3HPPPdajOJric5jOzk4WLlxIcXExU6ZMsR7H0fRHcx2kp6eHRYsWkZKSwgsvvGA9juMpPgd5+OGH6e3tZdOmTbrK7hCg+Bzi448/Zs6cOUydOpWQkKuX+MnPz+fxxx/H7XYrRgOKT/jNrveJGxPDvT+6w3oUR9EJl2/YuXMn999/PwkJCYSHhzNx4kSWLFnC0aNHrUfzibONTRw5dpLgQLwM0BCn9/n+pru7m6VLl/LWW28xYcIEFixYQExMDC6Xi23btrF8+XKys7Otx/S69ysOEhE+gtkzplqP4jg67PybVatWUVpaysqVK3nxxReJiorqu6+uro4xY8YQHR3tk6/98+d+7ZP9io1nf1YwqMdp5QP27dtHaWkpeXl5lJaW9jv5kJycbDSZBDKtfMCDDz7I9u3bOXz4MLm5udbj+MXZxiZefn07d8+ezt1zpluP40iKD4iJiSEuLo5Tp06ZfH0ddgaWwR52Ov4UV0tLC62trUyaNMl6FHEYx698Fy5cIDY2lszMTKqrq63H8Yv/3vG/nDx9lp+tWkxkRLj1OI7l+JVv7Nix3HbbbRw7doy9e/f2u/+zzz4zmMp3zjY2Ue36nDkzshWeMZ3tBEpKSli0aBF5eXk88MADpKWlce7cOSoqKsjMzGTHjh3WI3rN+ZZWYsdE6329IcDxh53XvPvuuzz//PNUVlbS0dFBQkICM2fOpLi4mDvvvNN6PK/q7e3VT7QMAYpPxIie/kSMKD4RI4pPxIjiEzGi+ESMKD4RI4pPxIjiEzGi+ESMKD4RI4pPxIjiEzGi+ESMKD4RI4pPxIjiEzGi+ESMKD4RI4pPxIjiEzGi+ESMKD4RI4pPxIjiEzGi+ESMKD4RI4pPxMj/AXGCTKH8QWZKAAAAAElFTkSuQmCC) ``` qc2 = QuantumCircuit(n_qubits,n_qubits) qc2_measure = QuantumCircuit(n_qubits, n_qubits) for qubit in range (0,n_qubits): qc2_measure.measure(qubit,qubit) qc2.h(0) # A Hadamard gate that puts the first qubit in superposition display(qc2.draw(output="mpl")) get_psi(qc2,"M") get_psi(qc2,"B") for qubit in range (1,n_qubits): qc2.cx(0,qubit) #A controlled NOT gate that entangles the qubits. display(qc2.draw(output="mpl")) get_psi(qc2, "B") ``` Now we notice something peculiar; after we add the CX gate, entangling the qubits the Bloch spheres display nonsense. Why is that? It turns out that once your qubits are entangled they can no longer be described individually, but only as a combined object. Let's take a look at the state vector and Q sphere. ``` get_psi(qc2,"M") print (abs(np.square(psi))) get_psi(qc2,"Q") ``` Set the backend to a local simulator. Then create a quantum job for the circuit, the selected backend, that runs just one shot to simulate a coin toss with two simultaneously tossed coins, then run the job. Display the result; either 0 for up (base) or 1 for down (excited) for each qubit. Display the result as a histogram. Either |00> or |11> with 100% probability. 
```
backend = Aer.get_backend('qasm_simulator')

qc2_final=qc2+qc2_measure

job = execute(qc2_final, backend, shots=1)

counts2 = job.result().get_counts(qc2_final)
print(counts2)

plot_histogram(counts2)
```

Note how the qubits completely agree. They are entangled.

**Do some work...** Run the cell above a few times to verify that you only get the results 00 or 11.

Now, let's run quite a few more shots, and display the statistics for the two results. This time we are no longer just talking about two qubits, but about the amassed results of thousands of runs on these qubits.

```
job = execute(qc2_final, backend, shots=1000)
result = job.result()
counts = result.get_counts()
print(counts)

plot_histogram(counts)
```

And look at that, we are back at our coin toss results, fifty-fifty. Every time one of the coins comes up heads (|0>) the other one follows suit. Tossing one coin we immediately know what the other one will come up as; the coins (qubits) are entangled.

# Run your entangled circuit on an IBM quantum computer

**Important:** With the simulator we get perfect results, only |00> or |11>. On a real NISQ (Noisy Intermediate Scale Quantum) computer we do not expect perfect results like this.

Let's run the Bell state once more, but on an actual IBM Q quantum computer.

**Time for some work!** Before you can run your program on IBM Q you must load your API key. If you are running this notebook in an IBM Qx environment, your API key is already stored in the system, but if you are running on your own machine you [must first store the key](https://qiskit.org/documentation/install.html#access-ibm-q-systems).

```
#Save and store API key locally.
from qiskit import IBMQ
#IBMQ.save_account('MY_API_TOKEN') <- Uncomment this line if you need to store your API key

#Load account information
IBMQ.load_account()
provider = IBMQ.get_provider()
```

Grab the least busy IBM Q backend.

```
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(operational=True, simulator=False))
#backend = provider.get_backend('ibmqx2')

print("Selected backend:",backend.status().backend_name)
print("Number of qubits(n_qubits):", backend.configuration().n_qubits)
print("Pending jobs:", backend.status().pending_jobs)
```

Let's run a large number of shots, and display the statistics for the two results, $|00\rangle$ and $|11\rangle$, on the real hardware. Monitor the job and display our place in the queue.

```
if n_qubits > backend.configuration().n_qubits:
    print("Your circuit contains too many qubits (",n_qubits,"). Start over!")
else:
    job = execute(qc2_final, backend, shots=1000)
    job_monitor(job)
```

Get the results, and display them in a histogram. Notice how we no longer just get the perfect entangled results, but also a few results that include non-entangled qubit results. At this stage, quantum computers are not perfect calculating machines, but pretty noisy.

```
result = job.result()
counts = result.get_counts(qc2_final)
print(counts)
plot_histogram(counts)
```

That was the simple readout. Let's take a look at the whole returned results:

```
print(result)
```
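One simple way to put a number on the hardware noise (a sketch that reuses the `counts` dictionary returned above; for the Bell state the ideal outcomes are 00 and 11):

```
# Fraction of shots that landed in the ideal Bell-state outcomes
shots = sum(counts.values())
bell_fraction = (counts.get('00', 0) + counts.get('11', 0)) / shots
print("Fraction of shots in |00> or |11>: {:.3f}".format(bell_fraction))
```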
# Simulating Power Spectra

In this notebook we will explore how to simulate the data that we will use to investigate how different spectral parameters can influence band ratios.

Simulated power spectra will be created with varying aperiodic and periodic parameters, and are created using the [FOOOF](https://github.com/fooof-tools/fooof) tool.

In the first set of simulations, each set of simulated spectra will vary across a single parameter while the remaining parameters remain constant. In a secondary set of simulated power spectra, we will simulate pairs of parameters changing together.

For this part of the project, this notebook demonstrates the simulations with some examples, but does not create the actual set of simulations used in the project. The full set of simulations for the project is created by the standalone scripts, available in the `scripts` folder.

```
%matplotlib inline

import matplotlib.pyplot as plt
import numpy as np

from fooof.sim import *
from fooof.plts import plot_spectra

# Import custom project code
import sys
sys.path.append('../bratios')
from settings import *
from paths import DATA_PATHS as dp

# Settings
FREQ_RANGE = [1, 40]
LO_BAND = [4, 8]
HI_BAND = [13, 30]

# Define default parameters
EXP_DEF = [0, 1]
CF_LO_DEF = np.mean(LO_BAND)
CF_HI_DEF = np.mean(HI_BAND)
PW_DEF = 0.4
BW_DEF = 1

# Set a range of values for the band power to take
PW_START = 0
PW_END = 1
PW_INC = .1

# Set a range of values for the aperiodic exponent to take
EXP_START = .25
EXP_END = 3
EXP_INC = .25
```

## Simulate power spectra with one parameter varying

First we will make several power spectra with varying band power. To do so, we will continue to use the example of the theta / beta ratio, and vary the power of the higher (beta) band.

```
# The Stepper object iterates through a range of values
pw_step = Stepper(PW_START, PW_END, PW_INC)
num_spectra = len(pw_step)

# `param_iter` creates a generator that can be used to step across ranges of parameters
pw_iter = param_iter([[CF_LO_DEF, PW_DEF, BW_DEF], [CF_HI_DEF, pw_step, BW_DEF]])

# Simulate power spectra
pw_fs, pw_ps, pw_syns = gen_group_power_spectra(num_spectra, FREQ_RANGE, EXP_DEF, pw_iter)

# Collect together simulated data
pw_data = [pw_fs, pw_ps, pw_syns]

# Save out data, to access from other notebooks
np.save(dp.make_file_path(dp.demo, 'PW_DEMO', 'npy'), pw_data)

# Plot our series of generated power spectra, with varying high-band power
plot_spectra(pw_fs, pw_ps, log_powers=True)
```

Above, we can see each of the spectra we generated plotted, with the same properties for all parameters, except for beta power.

The same approach can be used to simulate data that vary only in one parameter, for each isolated spectral feature.

## Simulate power spectra with two parameters varying

In this section we will explore generating data in which two parameters vary simultaneously. Specifically, we will simulate the case in which the aperiodic exponent varies while power for a higher band oscillation also varies.

The total number of trials will be: `(n_pw_changes) * (n_exp_changes)`.
```
data = []

exp_step = Stepper(EXP_START, EXP_END, EXP_INC)

for exp in exp_step:

    # High band sweeps through the power range
    pw_step = Stepper(PW_START, PW_END, PW_INC)
    pw_iter = param_iter([[CF_LO_DEF, PW_DEF, BW_DEF], [CF_HI_DEF, pw_step, BW_DEF]])

    # Generate the data
    pw_apc_fs, pw_apc_ps, pw_apc_syns = gen_group_power_spectra(
        len(pw_step), FREQ_RANGE, [0, exp], pw_iter)

    # Collect together all simulated data
    data.append(np.array([exp, pw_apc_fs, pw_apc_ps], dtype=object))

# Save out data, to access from other notebooks
np.save(dp.make_file_path(dp.demo, 'EXP_PW_DEMO', 'npy'), data)

# Extract some example power spectra, sub-sampling ones that vary in both exp & power
# Note: this is just a shortcut to step across the diagonal of the matrix of simulated spectra
plot_psds = [data[ii][2][ii, :] for ii in range(min(len(exp_step), len(pw_step)))]

# Plot a selection of power spectra in the paired parameter simulations
plot_spectra(pw_apc_fs, plot_psds, log_powers=True)
```

In the plot above, we can see a selection of the data we just simulated, selecting a group of power spectra that vary across both exponent and beta power.

In the next notebook we will calculate band ratios and see how changing these parameters affects ratio measures.

### Simulating the full set of data

Here we just simulated example data, to show how the simulations work. The full set of simulations for this project is re-created with scripts, available in the `scripts` folder.

To simulate the full set of single-parameter simulations for this project, run this script:

`python gen_single_param_sims.py`

To simulate the full set of interacting-parameter simulations for this project, run this script:

`python gen_interacting_param_sims.py`

These scripts will automatically save all the regenerated data into the `data` folder.

```
# Check all the available data files for the single parameter simulations
dp.list_files('sims_single')

# Check all the available data files for the interacting parameter simulations
dp.list_files('sims_interacting')
```
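The actual ratio calculations are deferred to the next notebook; as a quick preview (a sketch using only `numpy` and the `pw_fs` / `pw_ps` arrays from the single-parameter simulations above, with average band power as the ratio definition), a theta / beta ratio could be computed like this:

```
# Average-power band ratio for each simulated spectrum
theta_mask = (pw_fs >= LO_BAND[0]) & (pw_fs <= LO_BAND[1])
beta_mask = (pw_fs >= HI_BAND[0]) & (pw_fs <= HI_BAND[1])

ratios = pw_ps[:, theta_mask].mean(axis=1) / pw_ps[:, beta_mask].mean(axis=1)
print(ratios)  # the ratio should drop as the simulated beta power increases
```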
# Symbolic System Create a symbolic three-state system: ``` import markoviandynamics as md sym_system = md.SymbolicDiscreteSystem(3) ``` Get the symbolic equilibrium distribution: ``` sym_system.equilibrium() ``` Create a symbolic three-state system with potential energy barriers: ``` sym_system = md.SymbolicDiscreteSystemArrhenius(3) ``` It's the same object as the previous one, only with additional symbolic barriers: ``` sym_system.B_ij ``` We can assing values to the free parameters in the equilibrium distribution: ``` sym_system.equilibrium(energies=[0, 0.1, 1]) sym_system.equilibrium(energies=[0, 0.1, 1], temperature=1.5) ``` and create multiple equilibrium points by assigning temperature sequence: ``` import numpy as np temperature_range = np.linspace(0.01, 10, 300) equilibrium_line = sym_system.equilibrium([0, 0.1, 1], temperature_range) equilibrium_line.shape ``` # Symbolic rate matrix Create a symbolic rate matrix with Arrhenius process transitions: ``` sym_rate_matrix = md.SymbolicRateMatrixArrhenius(3) sym_rate_matrix ``` Energies and barriers can be substituted at once: ``` energies = [0, 0.1, 1] barriers = [[0, 0.11, 1.1], [0.11, 0, 10], [1.1, 10, 0]] sym_rate_matrix.subs_symbols(energies, barriers) sym_rate_matrix.subs_symbols(energies, barriers, temperature=2.5) ``` A symbolic rate matrix can be also lambdified (transform to lambda function): ``` rate_matrix_lambdified = sym_rate_matrix.lambdify() ``` The parameters of this function are the free symbols in the rate matrix: ``` rate_matrix_lambdified.__code__.co_varnames ``` They are positioned in ascending order. First the temperature, then the energies and the barriers. Sequence of rate matrices can be created by calling this function with a sequence for each parameter. # Dynamics We start by computing an initial probability distribution by assigning the energies and temperature: ``` p_initial = sym_system.equilibrium(energies, 0.5) p_initial ``` ## Trajectory - evolve by a fixed rate matrix Compute the rate matrix by substituting free symbols: ``` rate_matrix = md.rate_matrix_arrhenius(energies, barriers, 1.2) rate_matrix ``` Create trajectory of probability distributions in time: ``` import numpy as np # Create time sequence t_range = np.linspace(0, 5, 100) trajectory = md.evolve(p_initial, rate_matrix, t_range) trajectory.shape import matplotlib.pyplot as plt %matplotlib inline for i in [0, 1, 2]: plt.plot(t_range, trajectory[i,0,:], label='$p_{}(t)$'.format(i + 1)) plt.xlabel('$t$') plt.legend() ``` ## Trajectory - evolve by a time-dependent rate matrix Create a temperature sequence in time: ``` temperature_time = 1.4 + np.sin(4. 
* t_range) ``` Create a rate matrix as a function of the temperature sequence: ``` # Array of stacked rate matrices that corresponds to ``temperature_time`` rate_matrix_time = md.rate_matrix_arrhenius(energies, barriers, temperature_time) rate_matrix_time.shape crazy_trajectory = md.evolve(p_initial, rate_matrix_time, t_range) crazy_trajectory.shape for i in [0, 1, 2]: plt.plot(t_range, crazy_trajectory[i,0,:], label='$p_{}(t)$'.format(i + 1)) plt.xlabel('$t$') plt.legend() ``` # Diagonalize the rate matrix Calculate the eigenvalues, left and right eigenvectors: ``` U, eigenvalues, V = md.eigensystem(rate_matrix) U.shape, eigenvalues.shape, V.shape ``` The eigenvalues are in descending order (the eigenvectors are ordered accordingly): ``` eigenvalues ``` We can also compute the eigensystem for multiple rate matrices at once (or evolution of a rate matrix, i.e., `rate_matrix_time`): ``` U, eigenvalues, V = md.eigensystem(rate_matrix_time) U.shape, eigenvalues.shape, V.shape ``` # Decompose to rate matrix eigenvectors A probability distribution, in general, can be decomposed to the right eigenvectors of the rate matrix: $$\left|p\right\rangle = a_1\left|v_1\right\rangle + a_2\left|v_2\right\rangle + a_3\left|v_3\right\rangle$$ where $a_i$ is the coefficient of the i'th right eigenvector $\left|v_i\right\rangle$. A rate matrix that satisfies detailed balance has its first eigenvector as the equilibrium distribution $\left|\pi\right\rangle$. Therefore, *markovian-dynamics* normalizes $a_1$ to $1$ and decompose a probability distribution to $$\left|p\right\rangle = \left|\pi\right\rangle + a_2\left|v_2\right\rangle + a_3\left|v_3\right\rangle$$ Decompose ``p_initial``: ``` md.decompose(p_initial, rate_matrix) ``` We can decompose also multiple points and/or by multiple rate matrices. For example, decompose multiple points: ``` first_decomposition = md.decompose(equilibrium_line, rate_matrix) first_decomposition.shape for i in [0, 1, 2]: plt.plot(temperature_range, first_decomposition[i,:], label='$a_{}(T)$'.format(i + 1)) plt.xlabel('$T$') plt.legend() ``` or decompose a trajectory: ``` second_decomposition = md.decompose(trajectory, rate_matrix) second_decomposition.shape for i in [0, 1, 2]: plt.plot(t_range, second_decomposition[i,0,:], label='$a_{}(t)$'.format(i + 1)) plt.xlabel('$t$') plt.legend() ``` Decompose single point using multiple rate matrices: ``` third_decomposition = md.decompose(p_initial, rate_matrix_time) third_decomposition.shape for i in [0, 1, 2]: plt.plot(t_range, third_decomposition[i,0,:], label='$a_{}(t)$'.format(i + 1)) plt.legend() ``` Decompose, for every time $t$, the corresponding point $\left|p(t)\right\rangle$ using the temporal rate matrix $R(t)$ ``` fourth_decomposition = md.decompose(trajectory, rate_matrix_time) fourth_decomposition.shape for i in [0, 1, 2]: plt.plot(t_range, fourth_decomposition[i,0,:], label='$a_{}(t)$'.format(i + 1)) plt.legend() ``` # Plotting the 2D probability simplex for three-state system The probability space of a three-state system is a three dimensional space. However, the normalization constraint $\sum_{i}p_i=1$ together with $0 < p_i \le 1$, form a 2D triangular plane in which all of the possible probability points reside. 
We'll start by importing the plotting module: ``` import markoviandynamics.plotting.plotting2d as plt2d # Use latex rendering plt2d.latex() ``` Plot the probability plane: ``` plt2d.figure(figsize=(7, 5.5)) plt2d.equilibrium_line(equilibrium_line) plt2d.legend() ``` We can plot many objects on the probability plane, such as trajectories, points, and eigenvectors of the rate matrix: ``` # Final equilibrium point p_final = sym_system.equilibrium(energies, 1.2) plt2d.figure(focus=True, figsize=(7, 5.5)) plt2d.equilibrium_line(equilibrium_line) # Plot trajectory plt2d.plot(trajectory, c='r', label=r'$\left|p(t)\right>$') # Initial & final points plt2d.point(p_initial, c='k', label=r'$\left|p_0\right>$') plt2d.point(p_final, c='r', label=r'$\left|\pi\right>$') # Eigenvectors plt2d.eigenvectors(md.eigensystem(rate_matrix), kwargs_arrow={'zorder': 1}) plt2d.legend() ``` Plot multiple trajectories at once: ``` # Create temperature sequence temperature_range = np.logspace(np.log10(0.01), np.log10(10), 50) # Create the equilibrium line points equilibrium_line = sym_system.equilibrium(energies, temperature_range) # Create a trajectory for every point on ``equilibrium_line`` equilibrium_line_trajectory = md.evolve(equilibrium_line, rate_matrix, t_range) plt2d.figure(focus=True, figsize=(7, 5)) plt2d.equilibrium_line(equilibrium_line) plt2d.plot(equilibrium_line_trajectory, c='g', alpha=0.2) plt2d.point(p_final, c='r', label=r'$\left|\pi\right>$') plt2d.legend() # Create a trajectory for every point on ``equilibrium_line`` equilibrium_line_crazy_trajectory = md.evolve(equilibrium_line, rate_matrix_time, t_range) plt2d.figure(focus=True, figsize=(7, 5)) plt2d.equilibrium_line(equilibrium_line) plt2d.plot(equilibrium_line_crazy_trajectory, c='r', alpha=0.1) plt2d.text(p_final, r'Text $\alpha$', delta_x=0.05) plt2d.legend() ```
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.stats as sts
import seaborn as sns
sns.set()

%matplotlib inline
```

# 01. Smooth function optimization

Consider the same function as in the linear algebra assignment: $ f(x) = \sin{\frac{x}{5}} * e^{\frac{x}{10}} + 5 * e^{-\frac{x}{2}} $, but now on the interval `[1, 30]`.

In the first task we will look for the minimum of this function on the given interval using `scipy.optimize`. Of course, later on you will use optimization methods for more complex functions; here `f(x)` serves as a convenient teaching example.

Write a Python function that computes the value of `f(x)` for a given `x`. Be careful: remember that by default in Python integers are divided as integers, and that the `sin` and `exp` functions need to be imported from the `math` module.

```
from math import sin, exp, sqrt

def f(x):
    return sin(x / 5) * exp(x / 10) + 5 * exp(-x / 2)

f(10)

xs = np.arange(41, 60, 0.1)
ys = np.array([f(x) for x in xs])

plt.plot(xs, ys)
```

Study the examples of using `scipy.optimize.minimize` in the Scipy documentation (see "Materials"). Try to find the minimum using the default parameters of `scipy.optimize.minimize` (i.e. specifying only the function and the initial guess). Try changing the initial guess and check whether the result changes.

```
from scipy.optimize import minimize, rosen, rosen_der, differential_evolution

x0 = 60
minimize(f, x0)

# play around with the Rosenbrock function
x0 = [1., 10.]
minimize(rosen, x0, method='BFGS')
```

___
## Submission #1

Specify `BFGS` as the method in `scipy.optimize.minimize` (in most cases one of the most accurate gradient-based optimization methods) and run it from the initial guess $ x = 2 $. The gradient does not need to be supplied; it will be estimated numerically. The resulting value of the function at the minimum is your first answer for task 1; record it with two decimal places.

Now change the initial guess to x=30. The value of the function at the minimum is your second answer for task 1; write it after the first one, separated by a space, with two decimal places.

It is worth thinking about this result. Why does the answer depend on the initial guess? If you plot the function (for example, as was done in the video introducing Numpy, Scipy and Matplotlib), you can see exactly which minima we ended up in. Indeed, gradient methods usually do not solve the global optimization problem, so these results are expected and perfectly valid.

```
# 1. x0 = 2
x0 = 2
res1 = minimize(f, x0, method='BFGS')

# 2. x0 = 30
x0 = 30
res2 = minimize(f, x0, method='BFGS')

with open('out/06. submission1.txt', 'w') as f_out:
    output = '{0:.2f} {1:.2f}'.format(res1.fun, res2.fun)
    print(output)
    f_out.write(output)
```

# 02. Global optimization

Now let us try to apply a global optimization method, differential evolution, to the same function $ f(x) $. Study the documentation and usage examples of `scipy.optimize.differential_evolution`. Note that the bounds on the function's arguments are given as a list of tuples (a list containing objects of type tuple). Even if your function has a single argument, wrap its bounds in square brackets so that this parameter is a list with one tuple, because the implementation of `scipy.optimize.differential_evolution` uses the length of this list to determine the number of arguments of the function.
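To illustrate the point about the bounds list (a sketch reusing the `rosen` function already imported above): with two `(min, max)` tuples, `differential_evolution` treats the problem as two-dimensional.

```
# The length of the bounds list sets the dimensionality of the search:
# two (min, max) tuples -> rosen is optimized over two variables.
res_2d = differential_evolution(rosen, [(-5, 5), (-5, 5)])
print(res_2d.x, res_2d.fun)  # should be close to [1, 1] and 0
```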
Run the search for the minimum of f(x) using differential evolution on the interval [1, 30]. The resulting value of the function at the minimum is the answer for task 2. Record it with two decimal places. In this task the answer is a single number.

Note that differential evolution handled the task of finding the global minimum on the interval, since by design it tries to avoid getting stuck in local minima.

Compare the number of iterations BFGS needed to find the minimum from a good initial guess with the number of iterations needed by differential evolution. On repeated runs of differential evolution the number of iterations will vary, but in this example it will most likely remain comparable to the number of BFGS iterations. However, a single iteration of differential evolution requires far more work than one of BFGS. For example, you can look at the number of function evaluations (nfev) and see that for BFGS it is much smaller. In addition, the running time of differential evolution grows very quickly with the number of arguments of the function.

```
res = differential_evolution(f, [(1, 30)])
res
```

___
## Submission #2

```
res = differential_evolution(f, [(1, 30)])

with open('out/06. submission2.txt', 'w') as f_out:
    output = '{0:.2f}'.format(res.fun)
    print(output)
    f_out.write(output)
```

# 03. Minimizing a non-smooth function

Now consider the function $ h(x) = int(f(x)) $ on the same interval `[1, 30]`, i.e. every value of $ f(x) $ is now cast to int and the function takes only integer values. Such a function is non-smooth and even discontinuous, and its graph has a step-like shape. Convince yourself of this by plotting $ h(x) $ with `matplotlib`.

```
def h(x):
    return int(f(x))

xs = np.arange(0, 70, 1)
ys = [h(x) for x in xs]

plt.plot(xs, ys)

minimize(h, 40.3)
```

Try to find the minimum of $ h(x) $ with BFGS, taking $ x = 30 $ as the initial guess. The resulting value of the function is your first answer in this task.

```
res_bfgs = minimize(h, 30)
res_bfgs
```

Now try to find the minimum of $ h(x) $ on the interval `[1, 30]` using differential evolution. The value of $ h(x) $ at the minimum is your second answer in this task. Write it after the previous one, separated by a space.

```
res_diff_evol = differential_evolution(h, [(1, 30)])
res_diff_evol
```

Note that the two answers differ. This is the expected result: BFGS uses the gradient (in the one-dimensional case, the derivative) and is clearly not suited for minimizing the discontinuous function considered here. Try to understand why the minimum found by BFGS is exactly this one (trying different initial guesses may help).

Having completed this task, you have seen in practice how searching for a minimum of a function differs from global optimization, and when it can be useful to replace a gradient-based optimization method with one that does not use the gradient. You have also practiced using the SciPy library for solving optimization problems, and now know how simple and convenient that is.

___
## Submission #3

```
with open('out/06. submission3.txt', 'w') as f_out:
    output = '{0:.2f} {1:.2f}'.format(res_bfgs.fun, res_diff_evol.fun)
    print(output)
    f_out.write(output)
```

___
Below I play around with visualizing the Rosenbrock function.

```
lb = -10
rb = 10
step = 0.2
gen_xs = np.arange(lb, rb, step)

xs = np.meshgrid(np.arange(-1, 1, 0.1), np.arange(-10, 10, 0.1))
ys = rosen(xs)
print(xs[0].shape, xs[1].shape, ys.shape)

plt.contour(xs[0], xs[1], ys, 30)

lb = 0
rb = 4
step = 0.3
gen_xs = np.arange(lb, rb, step)

#xs = np.meshgrid(gen_xs, gen_xs)
#ys = (xs[0]**2 + xs[1]**2)**0.5
xs = np.meshgrid(np.arange(-2, 2, 0.1), np.arange(-10, 10, 0.1))
ys = rosen(xs)
print(xs[0].shape, xs[1].shape, ys.shape)

cmap = sns.cubehelix_palette(light=1, as_cmap=True)
plt.contour(xs[0], xs[1], ys, 30, cmap=cmap)
#plt.plot(xs[0], xs[1], marker='.', color='k', linestyle='none', alpha=0.1)
plt.show()

from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(xs[0], xs[1], ys, cmap=cmap, linewidth=0, antialiased=False)
plt.show()

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
res.x
```
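As a postscript to the BFGS vs. differential evolution comparison discussed in section 02 (a sketch reusing `f`, `minimize`, and `differential_evolution` from above), the `nit` and `nfev` fields of the result objects make the difference in work per iteration explicit:

```
# Work done by each approach on the smooth function f
res_bfgs_smooth = minimize(f, 30, method='BFGS')
res_de_smooth = differential_evolution(f, [(1, 30)])

print('BFGS: nit =', res_bfgs_smooth.nit, ', nfev =', res_bfgs_smooth.nfev)
print('DE:   nit =', res_de_smooth.nit, ', nfev =', res_de_smooth.nfev)
```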
<a href="https://colab.research.google.com/github/gpdsec/Residual-Neural-Network/blob/main/Custom_Resnet_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

*This is a custom ResNet trained for demonstration purposes, not for accuracy. The dataset used is the cats_vs_dogs dataset from tensorflow_datasets, with a **Custom Augmentator** for data augmentation.*

---

```
from google.colab import drive
drive.mount('/content/drive')
```

### **1. Importing Libraries**

```
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization, Input, GlobalMaxPooling2D, add, ReLU
from tensorflow.keras import layers
from tensorflow.keras import Sequential
import tensorflow_datasets as tfds
import pandas as pd
import numpy as np
from tensorflow.keras import Model
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from PIL import Image
from tqdm.notebook import tqdm
import os
import time

%matplotlib inline
```

### **2. Loading & Processing Data**

##### **Loading Data**

```
(train_ds, val_ds, test_ds), info = tfds.load(
    'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True,
    as_supervised=True)

## Image preprocessing function
def preprocess(img, lbl):
    image = tf.image.resize_with_pad(img, target_height=224, target_width=224)
    image = tf.divide(image, 255)
    label = [0,0]
    if int(lbl) == 1:
        label[1]=1
    else:
        label[0]=1
    return image, tf.cast(label, tf.float32)

train_ds = train_ds.map(preprocess)
test_ds = test_ds.map(preprocess)
val_ds = val_ds.map(preprocess)

info
```

#### **Data Augmentation layer**

```
###### Important Variables
batch_size = 32
shape = (224, 224, 3)
training_steps = int(18610/batch_size)
validation_steps = int(2326/batch_size)
path = '/content/drive/MyDrive/Colab Notebooks/cats_v_dogs.h5'

####### Data augmentation layer
# RandomFlip and RandomRotation suit my need for data augmentation
augmentation=Sequential([
    layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
    layers.experimental.preprocessing.RandomRotation(0.2),
])

####### Data shuffle and batch function
def shufle_batch(train_set, val_set, batch_size):
    train_set = (train_set.shuffle(1000).batch(batch_size))
    train_set = train_set.map(lambda x, y: (augmentation(x, training=True), y))
    val_set = (val_set.shuffle(1000).batch(batch_size))
    val_set = val_set.map(lambda x, y: (augmentation(x, training=True), y))
    return train_set, val_set

train_set, val_set = shufle_batch(train_ds, val_ds, batch_size)
```

## **3.
Creating Model** ##### **Creating Residual block** ``` def residual_block(x, feature_map, filter=(3,3) , _strides=(1,1), _network_shortcut=False): shortcut = x x = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(x) x = BatchNormalization()(x) x = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(x) x = BatchNormalization()(x) if _network_shortcut : shortcut = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(shortcut) shortcut = BatchNormalization()(shortcut) x = add([shortcut, x]) x = ReLU()(x) return x # Build the model using the functional API i = Input(shape) x = Conv2D(32, (3, 3), activation='relu', padding='same')(i) x = BatchNormalization()(x) x = residual_block(x, 32, filter=(3,3) , _strides=(1,1), _network_shortcut=False) #x = Conv2D(32, (3, 3), activation='relu', padding='same')(x) #x = BatchNormalization()(x) x = MaxPooling2D((2, 2))(x) x = Conv2D(64, (3, 3), activation='relu', padding='same')(x) x = BatchNormalization()(x) x = residual_block(x,64, filter=(3,3) , _strides=(1,1), _network_shortcut=False) x = MaxPooling2D((2, 2))(x) x = Conv2D(64, (3, 3), activation='relu', padding='same')(x) x = BatchNormalization()(x) x = MaxPooling2D((2, 2))(x) x = Conv2D(128, (3, 3), activation='relu', padding='same')(x) x = BatchNormalization()(x) x = Conv2D(128, (3, 3), activation='relu', padding='same')(x) x = BatchNormalization()(x) x = MaxPooling2D((2, 2))(x) x = Flatten()(x) x = Dropout(0.2)(x) x = Dense(512, activation='relu')(x) x = Dropout(0.2)(x) x = Dense(2, activation='sigmoid')(x) model = Model(i, x) model.compile() model.summary() ``` ### **4. Optimizer and loss Function** ``` loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=False) Optimiser = tf.keras.optimizers.Adam() ``` ### **5. Metrics For Loss and Acuracy** ``` train_loss = tf.keras.metrics.Mean(name='train_loss') train_accuracy = tf.keras.metrics.BinaryAccuracy(name='train_accuracy') test_loss = tf.keras.metrics.Mean(name="test_loss") test_accuracy = tf.keras.metrics.BinaryAccuracy(name='test_accuracy') ``` ### **6. Function for training and Testing** ``` @tf.function def train_step(images, labels): with tf.GradientTape() as tape: prediction = model(images, training=True) loss = loss_object(labels,prediction) gradient = tape.gradient(loss, model.trainable_variables) Optimiser.apply_gradients(zip(gradient, model.trainable_variables)) train_loss(loss) train_accuracy(labels, prediction) @tf.function def test_step(images, labels): prediction = model(images, training = False) t_loss = loss_object(labels, prediction) test_loss(t_loss) test_accuracy(labels, prediction) ``` ### **7. 
Training Model**

```
EPOCHS = 25
Train_LOSS = []
TRain_Accuracy = []
Test_LOSS = []
Test_Accuracy = []

for epoch in range(EPOCHS):
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()
    print(f'Epoch : {epoch+1}')
    count = 0  # variable to keep track of how many training steps have run
    desc = "EPOCHS {:0>4d}".format(epoch+1)
    for images, labels in tqdm(train_set, total=training_steps, desc=desc):
        train_step(images, labels)
    for test_images, test_labels in val_set:
        test_step(test_images, test_labels)

    print(
        f'Loss: {train_loss.result()}, '
        f'Accuracy: {train_accuracy.result()*100}, '
        f'Test Loss: {test_loss.result()}, '
        f'Test Accuracy: {test_accuracy.result()*100}'
    )

    Train_LOSS.append(train_loss.result())
    TRain_Accuracy.append(train_accuracy.result()*100)
    Test_LOSS.append(test_loss.result())
    Test_Accuracy.append(test_accuracy.result()*100)

    ### Saving best model
    if epoch==0:
        min_Loss = test_loss.result()
        min_Accuracy = test_accuracy.result()*100
    elif (min_Loss>test_loss.result()):
        if (min_Accuracy <= test_accuracy.result()*100):
            min_Loss = test_loss.result()
            min_Accuracy = (test_accuracy.result()*100)
            print(f"Saving Best Model {epoch+1}")
            model.save_weights(path)  # Saving model to drive
```

### **8. Plotting Loss and Accuracy Per Iteration**

```
# Plot loss per iteration
plt.plot(Train_LOSS, label='loss')
plt.plot(Test_LOSS, label='val_loss')
plt.title('Plot loss per iteration')
plt.legend()

# Plot accuracy per iteration
plt.plot(TRain_Accuracy, label='accuracy')
plt.plot(Test_Accuracy, label='val_accuracy')
plt.title('Plot Accuracy per iteration')
plt.legend()
```

## 9. Evaluating the model

##### **Note-** Testing the accuracy of the model on a completely unseen dataset.

```
model.load_weights(path)

len(test_ds)

test_set = test_ds.shuffle(50).batch(2326)

for images, labels in test_set:
    prediction = model.predict(images)
    break

## Function for accuracy
def accuracy(prediction, labels):
    corect = 0
    for i in range(len(prediction)):
        pred = prediction[i]
        labe = labels[i]
        if pred[0]>pred[1] and labe[0]>labe[1]:
            corect+=1
        elif pred[0]<pred[1] and labe[0]<labe[1]:
            corect+=1
    return (corect/len(prediction))*100

print(accuracy(prediction, labels))
```
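The loop-based `accuracy` helper above can also be written in a vectorized form (a sketch with `numpy`, assuming `prediction` and `labels` are the arrays produced in the evaluation cell):

```
# Vectorized accuracy: compare the argmax of predictions and one-hot labels
acc = 100 * np.mean(np.argmax(prediction, axis=1) == np.argmax(np.asarray(labels), axis=1))
print(acc)
```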
Quick study to investigate oscillations in reported infections in Germany. Here is the plot of the data in question: ``` import coronavirus import numpy as np import matplotlib.pyplot as plt %config InlineBackend.figure_formats = ['svg'] coronavirus.display_binder_link("2020-05-10-notebook-weekly-fluctuations-in-data-from-germany.ipynb") # get data cases, deaths, country_label = coronavirus.get_country_data("Germany") # plot daily changes fig, ax = plt.subplots(figsize=(8, 4)) coronavirus.plot_daily_change(ax, cases, 'C1') ``` The working assumption is that during the weekend fewer numbers are captured or reported. The analysis below seems to confirm this. We compute a discrete Fourier transform of the data, and expect a peak at a frequency corresponding to a period of 7 days. ## Data selection We start with data from 1st March as numbers before were small. It is convenient to take a number of days that can be divided by seven (for alignment of the freuency axis in Fourier space, so we choose 63 days from 1st of March): ``` data = cases['2020-03-01':'2020-05-03'] # compute daily change diff = data.diff().dropna() # plot data points (corresponding to bars in figure above:) fig, ax = plt.subplots() ax.plot(diff.index, diff, '-C1', label='daily new cases Germany') fig.autofmt_xdate() # avoid x-labels overlap # How many data points (=days) have we got? diff.size diff2 = diff.resample("24h").asfreq() # ensure we have one data point every day diff2.size ``` ## Compute the frequency spectrum ``` fig, ax = plt.subplots() # compute power density spectrum change_F = abs(np.fft.fft(diff2))**2 # determine appropriate frequencies n = change_F.size freq = np.fft.fftfreq(n, d=1) # We skip values at indices 0, 1 and 2: these are large because we have a finite # sequence and not substracted the mean from the data set # We also only plot the the first n/2 frequencies as for high n, we get negative # frequencies with the same data content as the positive ones. ax.plot(freq[3:n//2], change_F[3:n//2], 'o-C3') ax.set_xlabel('frequency [cycles per day]'); ``` A signal with oscillations on a weekly basis would correspond to a frequency of 1/7 as frequency is measured in `per day`. We thus expect the peak above to be at 1/7 $\approx 0.1428$. We can show this more easily by changing the frequency scale from cycles per day to cycles per week: ``` fig, ax = plt.subplots() ax.plot(freq[3:n//2] * 7, change_F[3:n//2], 'o-C3') ax.set_xlabel('frequency [cycles per week]'); ``` In other words: there as a strong component of the data with a frequency corresponding to one week. This is the end of the notebook. # Fourier transform basics A little playground to explore properties of discrete Fourier transforms. ``` time = np.linspace(0, 4, 1000) signal_frequency = 3 # choose this freely signal = np.sin(time * 2 * np.pi * signal_frequency) fourier = np.abs(np.fft.fft(signal)) # compute frequencies in fourier spectrum n = signal.size timestep = time[1] - time[0] freqs = np.fft.fftfreq(n, d=timestep) fig, ax = plt.subplots() ax.plot(time, signal, 'oC9', label=f'signal, frequency={signal_frequency}') ax.set_xlabel('time') ax.legend() fig, ax = plt.subplots() ax.plot(freqs[0:n//2][:20], fourier[0:n//2][0:20], 'o-C8', label="Fourier transform") ax.legend() ax.set_xlabel('frequency'); coronavirus.display_binder_link("2020-05-10-notebook-weekly-fluctuations-in-data-from-germany.ipynb") ```
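Returning to the note in the main analysis about the un-subtracted mean (a sketch reusing `diff2` and the imports from above): removing the mean before the FFT suppresses the large low-index components, so the weekly peak stands out without having to skip the first few bins.

```
# Same spectrum, but with the mean removed before the FFT
centered = diff2 - diff2.mean()
power = np.abs(np.fft.fft(centered))**2
freq = np.fft.fftfreq(centered.size, d=1)

fig, ax = plt.subplots()
ax.plot(freq[1:centered.size//2] * 7, power[1:centered.size//2], 'o-C3')
ax.set_xlabel('frequency [cycles per week]');
```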
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_12_04_atari.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Module 12: Reinforcement Learning** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Module 12 Video Material * Part 12.1: Introduction to the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=_KbUxgyisjM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_01_ai_gym.ipynb) * Part 12.2: Introduction to Q-Learning [[Video]](https://www.youtube.com/watch?v=A3sYFcJY3lA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_02_qlearningreinforcement.ipynb) * Part 12.3: Keras Q-Learning in the OpenAI Gym [[Video]](https://www.youtube.com/watch?v=qy1SJmsRhvM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_03_keras_reinforce.ipynb) * **Part 12.4: Atari Games with Keras Neural Networks** [[Video]](https://www.youtube.com/watch?v=co0SwPWoZh0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_04_atari.ipynb) * Part 12.5: Application of Reinforcement Learning [[Video]](https://www.youtube.com/watch?v=1jQPP3RfwMI&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_12_05_apply_rl.ipynb) # Google CoLab Instructions The following code ensures that Google CoLab is running the correct version of TensorFlow, and has the necessary Python libraries installed. ``` try: from google.colab import drive %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False if COLAB: !sudo apt-get install -y xvfb ffmpeg !pip install -q 'gym==0.10.11' !pip install -q 'imageio==2.4.0' !pip install -q PILLOW !pip install -q 'pyglet==1.3.2' !pip install -q pyvirtualdisplay !pip install -q --upgrade tensorflow-probability !pip install -q tf-agents ``` # Part 12.4: Atari Games with Keras Neural Networks The Atari 2600 is a home video game console from Atari, Inc. Released on September 11, 1977. It is credited with popularizing the use of microprocessor-based hardware and games stored on ROM cartridges instead of dedicated hardware with games physically built into the unit. The 2600 was bundled with two joystick controllers, a conjoined pair of paddle controllers, and a game cartridge: initially [Combat](https://en.wikipedia.org/wiki/Combat_(Atari_2600)), and later [Pac-Man](https://en.wikipedia.org/wiki/Pac-Man_(Atari_2600)). Atari emulators are popular and allow many of the old Atari video games to be played on modern computers. They are even available as JavaScript. * [Virtual Atari](http://www.virtualatari.org/listP.html) Atari games have become popular benchmarks for AI systems, particularly reinforcement learning. OpenAI Gym internally uses the [Stella Atari Emulator](https://stella-emu.github.io/). The Atari 2600 is shown in Figure 12.ATARI. 
**Figure 12.ATARI: The Atari 2600** ![Atari 2600 Console](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/atari-1.png "Atari 2600 Console") ### Actual Atari 2600 Specs * CPU: 1.19 MHz MOS Technology 6507 * Audio + Video processor: Television Interface Adapter (TIA) * Playfield resolution: 40 x 192 pixels (NTSC). Uses a 20-pixel register that is mirrored or copied, left side to right side, to achieve the width of 40 pixels. * Player sprites: 8 x 192 pixels (NTSC). Player, ball, and missile sprites use pixels that are 1/4 the width of playfield pixels (unless stretched). * Ball and missile sprites: 1 x 192 pixels (NTSC). * Maximum resolution: 160 x 192 pixels (NTSC). Max resolution is only somewhat achievable with programming tricks that combine sprite pixels with playfield pixels. * 128 colors (NTSC). 128 possible on screen. Max of 4 per line: background, playfield, player0 sprite, and player1 sprite. Palette switching between lines is common. Palette switching mid line is possible but not common due to resource limitations. * 2 channels of 1-bit monaural sound with 4-bit volume control. ### OpenAI Lab Atari Pong OpenAI Gym can be used with Windows; however, it requires a special [installation procedure].(https://towardsdatascience.com/how-to-install-openai-gym-in-a-windows-environment-338969e24d30) This chapter demonstrates playing [Atari Pong](https://github.com/wau/keras-rl2/blob/master/examples/dqn_atari.py). Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; you earn points when one fails to return it to the other. For the Atari 2600 version of Pong, a computer player (controlled by the 2600) is the opposing player. This section shows how to adapt TF-Agents to an Atari game. Some changes are necessary when compared to the pole-cart game presented earlier in this chapter. You can quickly adapt this example to any Atari game by simply changing the environment name. However, I tuned the code presented here for Pong, and it may not perform as well for other games. Some tuning will likely be necessary to produce a good agent for other games. We begin by importing the needed Python packages. ``` import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import numpy as np import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.dqn import dqn_agent from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym, suite_atari from tf_agents.environments import tf_py_environment, batched_py_environment from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks import q_network from tf_agents.policies import random_tf_policy from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import trajectory from tf_agents.utils import common from tf_agents.specs import tensor_spec from tf_agents.trajectories import time_step as ts # Set up a virtual display for rendering OpenAI gym environments. 
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
```

## Hyperparameters

The hyperparameter names are the same as the previous DQN example; however, I tuned the numeric values for the more complex Atari game.

```
num_iterations = 250000

initial_collect_steps = 200
collect_steps_per_iteration = 10
replay_buffer_max_length = 100000

batch_size = 32
learning_rate = 2.5e-3
log_interval = 5000

num_eval_episodes = 5
eval_interval = 25000
```

The algorithm needs more iterations for an Atari game. I also found that increasing the number of collection steps helped the algorithm to train.

## Atari Environments

You must handle Atari environments differently than games like cart-pole. Atari games typically use their 2D displays as the environment state. AI Gym represents Atari games as either a 3D (height by width by color) state space based on their screens, or a vector representing the state of the game's computer RAM. To preprocess Atari games for greater computational efficiency, we generally skip several frames, decrease the resolution, and discard color information. The following code shows how we can set up an Atari environment.

```
#env_name = 'Breakout-v4'
env_name = 'Pong-v0'
#env_name = 'BreakoutDeterministic-v4'
#env = suite_gym.load(env_name)

# AtariPreprocessing runs 4 frames at a time, max-pooling over the last 2
# frames. We need to account for this when computing things like update
# intervals.
ATARI_FRAME_SKIP = 4

max_episode_frames=108000  # ALE frames

env = suite_atari.load(
    env_name,
    max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
    gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
#env = batched_py_environment.BatchedPyEnvironment([env])
```

We can now reset the environment and display one step. The following image shows how the Pong game environment appears to a user.

```
env.reset()
PIL.Image.fromarray(env.render())
```

We are now ready to load and wrap the two environments for TF-Agents. The algorithm uses the first environment for evaluation, and the second to train.

```
train_py_env = suite_atari.load(
    env_name,
    max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
    gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)

eval_py_env = suite_atari.load(
    env_name,
    max_episode_steps=max_episode_frames / ATARI_FRAME_SKIP,
    gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)

train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)
```

## Agent

I used the following class, from TF-Agents examples, to wrap the regular Q-network class. The AtariQNetwork class ensures that the pixel values from the Atari screen are divided by 255. This division assists the neural network by normalizing the pixel values to between 0 and 1.

```
class AtariQNetwork(q_network.QNetwork):
  """QNetwork subclass that divides observations by 255."""

  def call(self, observation, step_type=None, network_state=(), training=False):
    state = tf.cast(observation, tf.float32)
    # We divide the grayscale pixel values by 255 here rather than storing
    # normalized values because uint8s are 4x cheaper to store than float32s.
    state = state / 255
    return super(AtariQNetwork, self).call(
        state, step_type=step_type, network_state=network_state,
        training=training)
```

Next, we introduce two hyperparameters that are specific to the neural network we are about to define.
``` fc_layer_params = (512,) conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1)) q_net = AtariQNetwork( train_env.observation_spec(), train_env.action_spec(), conv_layer_params=conv_layer_params, fc_layer_params=fc_layer_params) ``` Convolutional neural networks usually are made up of several alternating pairs of convolution and max-pooling layers, ultimately culminating in one or more dense layers. These layers are the same types as previously seen in this course. The QNetwork accepts two parameters that define the convolutional neural network structure. The more simple of the two parameters is **fc_layer_params**. This parameter specifies the size of each of the dense layers. A tuple specifies the size of each of the layers in a list. The second parameter, named **conv_layer_params**, is a list of convolution layers parameters, where each item is a length-three tuple indicating (filters, kernel_size, stride). This implementation of QNetwork supports only convolution layers. If you desire a more complex convolutional neural network, you must define your variant of the QNetwork. The QNetwork defined here is not the agent, instead, the QNetwork is used by the DQN agent to implement the actual neural network. This allows flexibility as you can set your own class if needed. Next, we define the optimizer. For this example, I used RMSPropOptimizer. However, AdamOptimizer is another popular choice. We also create the DQN and reference the Q-network we just created. ``` optimizer = tf.compat.v1.train.RMSPropOptimizer( learning_rate=learning_rate, decay=0.95, momentum=0.0, epsilon=0.00001, centered=True) train_step_counter = tf.Variable(0) observation_spec = tensor_spec.from_spec(train_env.observation_spec()) time_step_spec = ts.time_step_spec(observation_spec) action_spec = tensor_spec.from_spec(train_env.action_spec()) target_update_period=32000 # ALE frames update_period=16 # ALE frames _update_period = update_period / ATARI_FRAME_SKIP _global_step = tf.compat.v1.train.get_or_create_global_step() agent = dqn_agent.DqnAgent( time_step_spec, action_spec, q_network=q_net, optimizer=optimizer, epsilon_greedy=0.01, n_step_update=1.0, target_update_tau=1.0, target_update_period=( target_update_period / ATARI_FRAME_SKIP / _update_period), td_errors_loss_fn=common.element_wise_huber_loss, gamma=0.99, reward_scale_factor=1.0, gradient_clipping=None, debug_summaries=False, summarize_grads_and_vars=False, train_step_counter=_global_step) agent.initialize() ``` ## Metrics and Evaluation There are many different ways to measure the effectiveness of a model trained with reinforcement learning. The loss function of the internal Q-network is not a good measure of the entire DQN algorithm's overall fitness. The network loss function measures how close the Q-network was fit to the collected data and did not indicate how effective the DQN is in maximizing rewards. The method used for this example tracks the average reward received over several episodes. ``` def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] # See also the metrics module for standard implementations of different metrics. 
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics ``` ## Replay Buffer DQN works by training a neural network to predict the Q-values for every possible environment-state. A neural network needs training data, so the algorithm accumulates this training data as it runs episodes. The replay buffer is where this data is stored. Only the most recent episodes are stored, older episode data rolls off the queue as the queue accumulates new data. ``` replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=agent.collect_data_spec, batch_size=train_env.batch_size, max_length=replay_buffer_max_length) # Dataset generates trajectories with shape [Bx2x...] dataset = replay_buffer.as_dataset( num_parallel_calls=3, sample_batch_size=batch_size, num_steps=2).prefetch(3) ``` ## Random Collection The algorithm must prime the pump. Training cannot begin on an empty replay buffer. The following code performs a predefined number of steps to generate initial training data. ``` random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec()) def collect_step(environment, policy, buffer): time_step = environment.current_time_step() action_step = policy.action(time_step) next_time_step = environment.step(action_step.action) traj = trajectory.from_transition(time_step, action_step, next_time_step) # Add trajectory to the replay buffer buffer.add_batch(traj) def collect_data(env, policy, buffer, steps): for _ in range(steps): collect_step(env, policy, buffer) collect_data(train_env, random_policy, replay_buffer, steps=initial_collect_steps) ``` ## Training the agent We are now ready to train the DQN. This process can take many hours, depending on how many episodes you wish to run through. As training occurs, this code will update on both the loss and average return. As training becomes more successful, the average return should increase. The losses reported reflecting the average loss for individual training batches. ``` iterator = iter(dataset) # (Optional) Optimize by wrapping some of the code in a graph using TF function. agent.train = common.function(agent.train) # Reset the train step agent.train_step_counter.assign(0) # Evaluate the agent's policy once before training. avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): # Collect a few steps using collect_policy and save to the replay buffer. for _ in range(collect_steps_per_iteration): collect_step(train_env, agent.collect_policy, replay_buffer) # Sample a batch of data from the buffer and update the agent's network. experience, unused_info = next(iterator) train_loss = agent.train(experience).loss step = agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1}'.format(step, avg_return)) returns.append(avg_return) ``` ## Visualization The notebook can plot the average return over training iterations. The average return should increase as the program performs more training iterations. ``` iterations = range(0, num_iterations + 1, eval_interval) plt.plot(iterations, returns) plt.ylabel('Average Return') plt.xlabel('Iterations') plt.ylim(top=10) ``` ### Videos We now have a trained model and observed its training progress on a graph. 
Perhaps the most compelling way to view an Atari game's results is a video that allows us to see the agent play the game. The following functions are defined so that we can watch the agent play the game in the notebook. ``` def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video>'''.format(b64.decode()) return IPython.display.HTML(tag) def create_policy_eval_video(policy, filename, num_episodes=5, fps=30): filename = filename + ".mp4" with imageio.get_writer(filename, fps=fps) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) return embed_mp4(filename) ``` First, we will observe the trained agent play the game. ``` create_policy_eval_video(agent.policy, "trained-agent") ``` For comparison, we observe a random agent play. While the trained agent is far from perfect, it does outperform the random agent by a considerable amount. ``` create_policy_eval_video(random_policy, "random-agent") ```
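One optional way to put a number on this comparison (a small sketch, not part of the original example) is to reuse the `compute_avg_return` helper defined earlier on both policies; the episode count below is an arbitrary choice for illustration.

```
# Optional sketch: quantify the trained-vs-random comparison by reusing the
# compute_avg_return helper defined earlier in this notebook.
# The episode count is an arbitrary choice for illustration.
trained_return = compute_avg_return(eval_env, agent.policy, num_episodes=3)
random_return = compute_avg_return(eval_env, random_policy, num_episodes=3)
print('Trained policy average return: {0}'.format(trained_return))
print('Random policy average return: {0}'.format(random_return))
```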
# Disclaimer Released under the CC BY 4.0 License (https://creativecommons.org/licenses/by/4.0/) # Purpose of this notebook The purpose of this document is to show how I approached the presented problem and to record my learning experience in how to use Tensorflow 2 and CatBoost to perform a classification task on text data. If, while reading this document, you think _"Why didn't you do `<this>` instead of `<that>`?"_, the answer could be simply because I don't know about `<this>`. Comments, questions and constructive criticism are of course welcome. # Intro This simple classification task has been developed to get familiarized with Tensorflow 2 and CatBoost handling of text data. In summary, the task is to predict the author of a short text. To get a number of train/test examples, it is enough to create a twitter app and, using the python client library for twitter, read the user timeline of multiple accounts. This process is not covered here. If you are interested in this topic, feel free to contact me. ## Features It is assumed the collected raw data consists of: 1. The author handle (the label that will be predicted) 2. The timestamp of the post 3. The raw text of the post ### Preparing the dataset When preparing the dataset, the content of the post is preprocessed using these rules: 1. Newlines are replaced with a space 2. Links are replaced with a placeholder (e.g. `<link>`) 3. For each possible unicode char category, the number of chars in that category is added as a feature 4. The number of words for each tweet is added as a feature 5. Retweets (even retweets with comment) are discarded. Only responses and original tweets are taken into account The dataset has been randomly split into three different files for train (70%), validation (10%) and test (20%). For each label, it has been verified that the same percentages hold in all three files. Before fitting the data and before evaluation on the test dataset, the timestamp values are normalized, using the mean and standard deviation computed on the train dataset. # TensorFlow 2 model The model has four different input features: 1. The normalized timestamp. 2. The input text, represented as the whole sentence. This will be transformed in a 128-dimensional vector by an embedding layer. 3. The input text, this time represented as a sequence of words, expressed as indexes of tokens. This representation will be used by a LSTM layer to try to extract some meaning from the actual sequence of the used words. 4. The unicode character category usage. This should help in identify handles that use emojis, a lot of punctuation or unusual chars. The resulting layers are concatenated, then after a sequence of two dense layers (with an applied dropout) the final layer computes the logits for the different classes. The used loss function is *sparse categorical crossentropy*, since the labels are represented as indexes of a list of twitter handles. 
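As a small, self-contained illustration of why this loss fits integer labels (the numbers below are made up and not taken from the dataset), sparse categorical crossentropy consumes class indexes directly, so the handle index can be used as the label without one-hot encoding:

```
import tensorflow as tf

# Made-up example: integer class indexes as labels, raw logits from the model.
labels = tf.constant([2, 0])                              # handle indexes
logits = tf.constant([[0.1, 0.2, 3.0], [2.5, 0.3, 0.1]])  # one row of logits per example
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(loss_fn(labels, logits).numpy())  # low loss: the largest logit matches each label
```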
## Imports for the TensorFlow 2 model ``` import functools import os from tensorflow.keras import Input, layers import tensorflow as tf import tensorflow_hub as hub from tensorflow.keras import regularizers import pandas as pd import numpy as np import copy import calendar import datetime import re from tensorflow.keras.preprocessing.text import Tokenizer import unicodedata #masking layers and GPU don't mix os.environ["CUDA_VISIBLE_DEVICES"] = "-1" ``` ## Definitions for the TensorFlow 2 model ``` #Download size: ~446MB hub_layer = hub.KerasLayer( "https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1", output_shape=[512], input_shape=[], dtype=tf.string, trainable=False ) embed = hub.load("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1") unicode_data_categories = [ "Cc", "Cf", "Cn", "Co", "Cs", "LC", "Ll", "Lm", "Lo", "Lt", "Lu", "Mc", "Me", "Mn", "Nd", "Nl", "No", "Pc", "Pd", "Pe", "Pf", "Pi", "Po", "Ps", "Sc", "Sk", "Sm", "So", "Zl", "Zp", "Zs" ] column_names = [ "handle", "timestamp", "text" ] column_names.extend(unicode_data_categories) train_file = os.path.realpath("input.csv") n_tokens = 100000 tokenizer = Tokenizer(n_tokens, oov_token='<OOV>') #List of handles (labels) #Fill with the handles you want to consider in your dataset handles = [ ] end_token = "XEND" train_file = os.path.realpath("data/train.csv") val_file = os.path.realpath("data/val.csv") test_file = os.path.realpath("data/test.csv") ``` ## Preprocessing and computing dataset features ``` def get_pandas_dataset(input_file, fit_tokenizer=False, timestamp_mean=None, timestamp_std=None, pad_sequence=None): pd_dat = pd.read_csv(input_file, names=column_names) pd_dat = pd_dat[pd_dat.handle.isin(handles)] if(timestamp_mean is None): timestamp_mean = pd_dat.timestamp.mean() if(timestamp_std is None): timestamp_std = pd_dat.timestamp.std() pd_dat.timestamp = (pd_dat.timestamp - timestamp_mean) / timestamp_std pd_dat["handle_index"] = pd_dat['handle'].map(lambda x: handles.index(x)) if(fit_tokenizer): tokenizer.fit_on_texts(pd_dat["text"]) pad_sequence = tokenizer.texts_to_sequences([[end_token]])[0][0] pd_dat["sequence"] = tokenizer.texts_to_sequences(pd_dat["text"]) max_seq_length = 30 pd_dat = pd_dat.reset_index(drop=True) #max length pd_dat["sequence"] = pd.Series(el[0:max_seq_length] for el in pd_dat["sequence"]) #padding pd_dat["sequence"] = pd.Series([el + ([pad_sequence] * (max_seq_length - len(el))) for el in pd_dat["sequence"]]) pd_dat["words_in_tweet"] = pd_dat["text"].str.strip().str.split(" ").str.len() + 1 return pd_dat, timestamp_mean, timestamp_std, pad_sequence train_dataset, timestamp_mean, timestamp_std, pad_sequence = get_pandas_dataset(train_file, fit_tokenizer=True) test_dataset, _, _, _= get_pandas_dataset(test_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std, pad_sequence=pad_sequence) val_dataset, _, _, _ = get_pandas_dataset(val_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std, pad_sequence=pad_sequence) #selecting as features only the unicode categories that are used in the train dataset non_null_unicode_categories = [] for unicode_data_category in unicode_data_categories: category_name = unicode_data_category category_sum = train_dataset[category_name].sum() if(category_sum > 0): non_null_unicode_categories.append(category_name) print("Bucketized unicode categories used as features: " + repr(non_null_unicode_categories)) ``` ## Defining input/output features from the datasets ``` def split_inputs_and_outputs(pd_dat): labels = pd_dat['handle_index'].values 
icolumns = pd_dat.columns timestamps = pd_dat.loc[:, "timestamp"].astype(np.float32) text = pd_dat.loc[:, "text"] sequence = np.asarray([np.array(el) for el in pd_dat.loc[:, "sequence"]]) #unicode_char_ratios = pd_dat[unicode_data_categories].astype(np.float32) unicode_char_categories = { category_name: pd_dat[category_name] for category_name in non_null_unicode_categories } words_in_tweet = pd_dat['words_in_tweet'] return timestamps, text, sequence, unicode_char_categories, words_in_tweet, labels timestamps_train, text_train, sequence_train, unicode_char_categories_train, words_in_tweet_train, labels_train = split_inputs_and_outputs(train_dataset) timestamps_val, text_val, sequence_val, unicode_char_categories_val, words_in_tweet_val, labels_val = split_inputs_and_outputs(val_dataset) timestamps_test, text_test, sequence_test, unicode_char_categories_test, words_in_tweet_test, labels_test = split_inputs_and_outputs(test_dataset) ``` ## Input tensors ``` input_timestamp = Input(shape=(1, ), name='input_timestamp', dtype=tf.float32) input_text = Input(shape=(1, ), name='input_text', dtype=tf.string) input_sequence = Input(shape=(None, 1 ), name="input_sequence", dtype=tf.float32) input_unicode_char_categories = [ Input(shape=(1, ), name="input_"+category_name, dtype=tf.float32) for category_name in non_null_unicode_categories ] input_words_in_tweet = Input(shape=(1, ), name="input_words_in_tweet", dtype=tf.float32) inputs_train = { 'input_timestamp': timestamps_train, "input_text": text_train, "input_sequence": sequence_train, 'input_words_in_tweet': words_in_tweet_train, } inputs_train.update({ 'input_' + category_name: unicode_char_categories_train[category_name] for category_name in non_null_unicode_categories }) outputs_train = labels_train inputs_val = { 'input_timestamp': timestamps_val, "input_text": text_val, "input_sequence": sequence_val, 'input_words_in_tweet': words_in_tweet_val } inputs_val.update({ 'input_' + category_name: unicode_char_categories_val[category_name] for category_name in non_null_unicode_categories }) outputs_val = labels_val inputs_test = { 'input_timestamp': timestamps_test, "input_text": text_test, "input_sequence": sequence_test, 'input_words_in_tweet': words_in_tweet_test } inputs_test.update({ 'input_' + category_name: unicode_char_categories_test[category_name] for category_name in non_null_unicode_categories }) outputs_test = labels_test ``` ## TensorFlow 2 model definition ``` def get_model(): reg = None activation = 'relu' reshaped_text = layers.Reshape(target_shape=())(input_text) embedded = hub_layer(reshaped_text) x = layers.Dense(256, activation=activation)(embedded) masking = layers.Masking(mask_value=pad_sequence)(input_sequence) lstm_layer = layers.Bidirectional(layers.LSTM(32))(masking) flattened_lstm_layer = layers.Flatten()(lstm_layer) x = layers.concatenate([ input_timestamp, flattened_lstm_layer, *input_unicode_char_categories, input_words_in_tweet, x ]) x = layers.Dense(n_tokens // 30, activation=activation, kernel_regularizer=reg)(x) x = layers.Dropout(0.1)(x) x = layers.Dense(n_tokens // 50, activation=activation, kernel_regularizer=reg)(x) x = layers.Dropout(0.1)(x) x = layers.Dense(256, activation=activation, kernel_regularizer=reg)(x) y = layers.Dense(len(handles), activation='linear')(x) model = tf.keras.Model( inputs=[ input_timestamp, input_text, input_sequence, *input_unicode_char_categories, input_words_in_tweet ], outputs=[y] ) cce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile( 
optimizer='adam', loss=cce, metrics=['sparse_categorical_accuracy'] ) return model model = get_model() tf.keras.utils.plot_model(model, to_file='twitstar.png', show_shapes=True) ``` ## TensorFlow 2 model fitting ``` history = model.fit( inputs_train, outputs_train, epochs=15, batch_size=64, verbose=True, validation_data=(inputs_val, outputs_val), callbacks=[ tf.keras.callbacks.ModelCheckpoint( os.path.realpath("weights.h5"), monitor="val_sparse_categorical_accuracy", save_best_only=True, verbose=2 ), tf.keras.callbacks.EarlyStopping( patience=3, monitor="val_sparse_categorical_accuracy" ), ] ) ``` ## TensorFlow 2 model plots for train loss and accuracy ``` import matplotlib.pyplot as plt %matplotlib inline plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() plt.plot(history.history['sparse_categorical_accuracy']) plt.plot(history.history['val_sparse_categorical_accuracy']) plt.title('Accuracy vs. epochs') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() ``` ## TensorFlow 2 model evaluation ``` #loading the "best" weights model.load_weights(os.path.realpath("weights.h5")) model.evaluate(inputs_test, outputs_test) ``` ### TensorFlow 2 model confusion matrix Using predictions on the test set, a confusion matrix is produced ``` def tf2_confusion_matrix(inputs, outputs): predictions = model.predict(inputs) wrong_labelled_counter = np.zeros((len(handles), len(handles))) wrong_labelled_sequences = np.empty((len(handles), len(handles)), np.object) for i in range(len(handles)): for j in range(len(handles)): wrong_labelled_sequences[i][j] = [] tot_wrong = 0 for i in range(len(predictions)): predicted = int(predictions[i].argmax()) true_value = int(outputs[i]) wrong_labelled_counter[true_value][predicted] += 1 wrong_labelled_sequences[true_value][predicted].append(inputs.get('input_text')[i]) ok = (int(true_value) == int(predicted)) if(not ok): tot_wrong += 1 return wrong_labelled_counter, wrong_labelled_sequences, predictions def print_confusion_matrix(wrong_labelled_counter): the_str = "\t" for handle in handles: the_str += handle + "\t" print(the_str) ctr = 0 for row in wrong_labelled_counter: the_str = handles[ctr] + '\t' ctr+=1 for i in range(len(row)): the_str += str(int(row[i])) if(i != len(row) -1): the_str += "\t" print(the_str) wrong_labelled_counter, wrong_labelled_sequences, predictions = tf2_confusion_matrix(inputs_test, outputs_test) print_confusion_matrix(wrong_labelled_counter) ``` # CatBoost model This CatBoost model instance was developed reusing the ideas presented in these tutorials from the official repository: [classification](https://github.com/catboost/tutorials/blob/master/classification/classification_tutorial.ipynb) and [text features](https://github.com/catboost/tutorials/blob/master/text_features/text_features_in_catboost.ipynb) ## Imports for the CatBoost model ``` import functools import os import pandas as pd import numpy as np import copy import calendar import datetime import re import unicodedata from catboost import Pool, CatBoostClassifier ``` ## Definitions for the CatBoost model ``` unicode_data_categories = [ "Cc", "Cf", "Cn", "Co", "Cs", "LC", "Ll", "Lm", "Lo", "Lt", "Lu", "Mc", "Me", "Mn", "Nd", "Nl", "No", "Pc", "Pd", "Pe", "Pf", "Pi", "Po", "Ps", "Sc", "Sk", "Sm", "So", "Zl", "Zp", "Zs" ] column_names = [ "handle", "timestamp", "text" ] 
column_names.extend(unicode_data_categories) #List of handles (labels) #Fill with the handles you want to consider in your dataset handles = [ ] train_file = os.path.realpath("./data/train.csv") val_file = os.path.realpath("./data/val.csv") test_file = os.path.realpath("./data/test.csv") ``` ## Preprocessing and computing dataset features ``` def get_pandas_dataset(input_file, timestamp_mean=None, timestamp_std=None): pd_dat = pd.read_csv(input_file, names=column_names) pd_dat = pd_dat[pd_dat.handle.isin(handles)] if(timestamp_mean is None): timestamp_mean = pd_dat.timestamp.mean() if(timestamp_std is None): timestamp_std = pd_dat.timestamp.std() pd_dat.timestamp = (pd_dat.timestamp - timestamp_mean) / timestamp_std pd_dat["handle_index"] = pd_dat['handle'].map(lambda x: handles.index(x)) pd_dat = pd_dat.reset_index(drop=True) return pd_dat, timestamp_mean, timestamp_std train_dataset, timestamp_mean, timestamp_std = get_pandas_dataset(train_file) test_dataset, _, _ = get_pandas_dataset(test_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std) val_dataset, _, _ = get_pandas_dataset(val_file, timestamp_mean=timestamp_mean, timestamp_std=timestamp_std) def split_inputs_and_outputs(pd_dat): labels = pd_dat['handle_index'].values del(pd_dat['handle']) del(pd_dat['handle_index']) return pd_dat, labels X_train, labels_train = split_inputs_and_outputs(train_dataset) X_val, labels_val = split_inputs_and_outputs(val_dataset) X_test, labels_test = split_inputs_and_outputs(test_dataset) ``` ## CatBoost model definition ``` def get_model(catboost_params={}): cat_features = [] text_features = ['text'] catboost_default_params = { 'iterations': 1000, 'learning_rate': 0.03, 'eval_metric': 'Accuracy', 'task_type': 'GPU', 'early_stopping_rounds': 20 } catboost_default_params.update(catboost_params) model = CatBoostClassifier(**catboost_default_params) return model, cat_features, text_features model, cat_features, text_features = get_model() ``` ## CatBoost model fitting ``` def fit_model(X_train, X_val, y_train, y_val, model, cat_features, text_features, verbose=100): learn_pool = Pool( X_train, y_train, cat_features=cat_features, text_features=text_features, feature_names=list(X_train) ) val_pool = Pool( X_val, y_val, cat_features=cat_features, text_features=text_features, feature_names=list(X_val) ) model.fit(learn_pool, eval_set=val_pool, verbose=verbose) return model model = fit_model(X_train, X_val, labels_train, labels_val, model, cat_features, text_features) ``` ## CatBoost model evaluation Also for the CatBoost model, predictions on the test set, a confusion matrix is produced ``` def predict(X, model, cat_features, text_features): pool = Pool( data=X, cat_features=cat_features, text_features=text_features, feature_names=list(X) ) probs = model.predict_proba(pool) return probs def check_predictions_on(inputs, outputs, model, cat_features, text_features, handles): predictions = predict(inputs, model, cat_features, text_features) labelled_counter = np.zeros((len(handles), len(handles))) labelled_sequences = np.empty((len(handles), len(handles)), np.object) for i in range(len(handles)): for j in range(len(handles)): labelled_sequences[i][j] = [] tot_wrong = 0 for i in range(len(predictions)): predicted = int(predictions[i].argmax()) true_value = int(outputs[i]) labelled_counter[true_value][predicted] += 1 labelled_sequences[true_value][predicted].append(inputs.get('text').values[i]) ok = (int(true_value) == int(predicted)) if(not ok): tot_wrong += 1 return labelled_counter, 
labelled_sequences, predictions def confusion_matrix(labelled_counter, handles): the_str = "\t" for handle in handles: the_str += handle + "\t" the_str += "\n" ctr = 0 for row in labelled_counter: the_str += handles[ctr] + '\t' ctr+=1 for i in range(len(row)): the_str += str(int(row[i])) if(i != len(row) -1): the_str += "\t" the_str += "\n" return the_str labelled_counter, labelled_sequences, predictions = check_predictions_on( X_test, labels_test, model, cat_features, text_features, handles ) confusion_matrix_string = confusion_matrix(labelled_counter, handles) print(confusion_matrix_string) ``` # Evaluation To perform some experiments and evaluate the two models, 18 Twitter users were selected and, for each user, a number of tweets and responses to other users' tweets were collected. In total 39786 tweets were collected. The difference in class representation could be eliminated, for example limiting the number of tweets for each label to the number of tweets in the less represented class. This difference, however, was not eliminated, in order to test if it represents an issue for the accuracy of the two trained models. The division of the tweets corresponding to each twitter handle for each file (train, test, validation) is reported in the following table. To avoid policy issues (better safe than sorry), the actual user handle is masked using C_x placeholders and a brief description of the twitter user is presented instead. |Description|Handle|Train|Test|Validation|Sum| |-------|-------|-------|-------|-------|-------| |UK-based labour politician|C_1|1604|492|229|2325| |US-based democratic politician|C_2|1414|432|195|2041| |US-based democratic politician|C_3|1672|498|273|2443| |US-based actor|C_4|1798|501|247|2546| |UK-based actress|C_5|847|243|110|1200| |US-based democratic politician|C_6|2152|605|304|3061| |US-based singer|C_7|2101|622|302|3025| |US-based singer|C_8|1742|498|240|2480| |Civil rights activist|C_9|314|76|58|448| |US-based republican politician|C_10|620|159|78|857| |US-based TV host|C_11|2022|550|259|2831| |Parody account of C_15 |C_12|2081|624|320|3025| |US-based democratic politician|C_13|1985|557|303|2845| |US-based actor/director|C_14|1272|357|183|1812| |US-based republican politician|C_15|1121|298|134|1553| |US-based writer|C_16|1966|502|302|2770| |US-based writer|C_17|1095|305|153|1553| |US-based entrepreneur|C_18|2084|581|306|2971| |Sum||27890|7900|3996|39786| ## TensorFlow 2 model The following charts show loss and accuracy vs epochs for train and validation for a typical run of the TF2 model: ![loss](img/tf2_train_val_loss.png) ![accuracy](img/tf2_train_val_accuracy.png) If the images do not show correctly, they can be found at these links: [loss](https://github.com/icappello/ml-predict-text-author/blob/master/img/tf2_train_val_loss.png) [accuracy](https://github.com/icappello/ml-predict-text-author/blob/master/img/tf2_train_val_accuracy.png) After a few epochs, the model starts overfitting on the train data, and the accuracy for the validation set quickly reaches a plateau. The obtained accuracy on the test set is 0.672 ## CatBoost model The fit procedure stopped after 303 iterations. The obtained accuracy on the test set is 0.808 ## Confusion matrices The confusion matrices for the two models are reported [here](https://docs.google.com/spreadsheets/d/17JGDXYRajnC4THrBnZrbcqQbgzgjo0Jb7KAvPYenr-w/edit?usp=sharing), since large tables are not displayed correctly in the embedded github viewer for jupyter notebooks. 
Rows represent the actual classes, while columns represent the predicted ones.

## Summary

The CatBoost model obtained better accuracy overall, as well as better accuracy on all but one label. No particular optimization was done on the definition of the CatBoost model. The TF2 model could need more data, as well as some changes to its definition, to perform better (comments and pointers on this are welcome). Several variants of the TF2 model were tried: a deeper model with more dense layers, a higher dropout rate, more/fewer units per layer, using only a subset of features, regularization methods (L1, L2, batch regularization), and different activation functions (sigmoid, tanh), but none performed significantly better than the one presented.

Looking at the results summarized in the confusion matrices, tweets from C_9 clearly represented a problem, either because of their under-representation relative to the other classes or because of the actual content of the tweets (some were not written in English). Tweets from handles C_5 and C_14 were also hard to classify correctly for both models, even though they were not under-represented with respect to the other labels.
# Neural Networks for Regression with TensorFlow > Notebook demonstrates Neural Networks for Regression Problems with TensorFlow - toc: true - badges: true - comments: true - categories: [DeepLearning, NeuralNetworks, TensorFlow, Python, LinearRegression] - image: images/nntensorflow.png ## Neural Network Regression Model with TensorFlow This notebook is continuation of the Blog post [TensorFlow Fundamentals](https://sandeshkatakam.github.io/My-Machine_learning-Blog/tensorflow/machinelearning/2022/02/09/TensorFlow-Fundamentals.html). **The notebook is an account of my working for the Tensorflow tutorial by Daniel Bourke on Youtube**. **The Notebook will cover the following concepts:** * Architecture of a neural network regression model. * Input shapes and output shapes of a regression model(features and labels). * Creating custom data to view and fit. * Steps in modelling * Creating a model, compiling a model, fitting a model, evaluating a model. * Different evaluation methods. * Saving and loading models. **Regression Problems**: A regression problem is when the output variable is a real or continuous value, such as “salary” or “weight”. Many different models can be used, the simplest is the linear regression. It tries to fit data with the best hyper-plane which goes through the points. Examples: * How much will this house sell for? * How many people will buy this app? * How much will my health insurace be? * How much should I save each week for fuel? We can also use the regression model to try and predict where the bounding boxes should be in object detection problem. Object detection thus involves both regression and then classifying the image in the box(classification problem). ### Regression Inputs and outputs Architecture of a regression model: * Hyperparameters: * Input Layer Shape : same as shape of number of features. * Hidden Layrer(s): Problem specific * Neurons per hidden layer : Problem specific. * Output layer shape: same as hape of desired prediction shape. * Hidden activation : Usually ReLU(rectified linear unit) sometimes sigmoid. * Output acitvation: None, ReLU, logistic/tanh. * Loss function : MSE(Mean squared error) or MAE(Mean absolute error) or combination of both. * Optimizer: SGD(Stochastic Gradient Descent), Adam optimizer. **Source:** Adapted from page 239 of [Hands-On Machine learning with Scikit-Learn, Keras & TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/) Example of creating a sample regression model in TensorFlow: ``` # 1. Create a model(specific to your problem) model = tf.keras.Sequential([ tf.keras.Input(shape = (3,)), tf.keras.layers.Dense(100, activation = "relu"), tf.keras.layers.Dense(100, activation = "relu"), tf.keras.layers.Dense(100, activation = "relu"), tf.keras.layers.Dense(1, activation = None) ]) # 2. Compile the model model.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.Adam(lr = 0.0001), metrics = ["mae"]) # 3. Fit the model model.fit(X_train, Y_train, epochs = 100) ``` ### Introduction to Regression with Neural Networks in TensorFlow ``` # Import TensorFlow import tensorflow as tf print(tf.__version__) ## Creating data to view and fit import numpy as np import matplotlib.pyplot as plt from matplotlib import style style.use('dark_background') # create features X = np.array([-7.0,-4.0,-1.0,2.0,5.0,8.0,11.0,14.0]) # Create labels y = np.array([3.0,6.0,9.0,12.0,15.0,18.0,21.0,24.0]) # Visualize it plt.scatter(X,y) y == X + 10 ``` Yayy.. we got the relation by just seeing the data. 
Since the data is small and the relation ship is just linear, it was easy to guess the relation. ### Input and Output shapes ``` # Create a demo tensor for the housing price prediction problem house_info = tf.constant(["bedroom","bathroom", "garage"]) house_price = tf.constant([939700]) house_info, house_price X[0], y[0] X[1], y[1] input_shape = X[0].shape output_shape = y[0].shape input_shape, output_shape X[0].ndim ``` we are specifically looking at scalars here. Scalars have 0 dimension ``` # Turn our numpy arrays into tensors X = tf.cast(tf.constant(X), dtype = tf.float32) y = tf.cast(tf.constant(y), dtype = tf.float32) X.shape, y.shape input_shape = X[0].shape output_shape = y[0].shape input_shape, output_shape plt.scatter(X,y) ``` ### Steps in modelling with Tensorflow 1. **Creating a model** - define the input and output layers, as well as the hidden layers of a deep learning model. 2. **Compiling a model** - define the loss function(how wrong the prediction of our model is) and the optimizer (tells our model how to improve the partterns its learning) and evaluation metrics(what we can use to interpret the performance of our model). 3. Fitting a model - letting the model try to find the patterns between X & y (features and labels). ``` X,y X.shape # Set random seed tf.random.set_seed(42) # Create a model using the Sequential API model = tf.keras.Sequential([ tf.keras.layers.Dense(1) ]) # Compile the model model.compile(loss=tf.keras.losses.mae, # mae is short for mean absolute error optimizer=tf.keras.optimizers.SGD(), # SGD is short for stochastic gradient descent metrics=["mae"]) # Fit the model # model.fit(X, y, epochs=5) # this will break with TensorFlow 2.7.0+ model.fit(tf.expand_dims(X, axis=-1), y, epochs=5) # Check out X and y X, y # Try and make a prediction using our model y_pred = model.predict([17.0]) y_pred ``` The output is very far off from the actual value. So, Our model is not working correctly. Let's go and improve our model in the next section. ### Improving our Model Let's take a look about the three steps when we created the above model. We can improve the model by altering the steps we took to create a model. 1. **Creating a model** - here we might add more layers, increase the number of hidden units(all called neurons) within each of the hidden layers, change the activation function of each layer. 2. **Compiling a model** - here we might change the optimization function or perhaps the learning rate of the optimization function. 3. **Fitting a model** - here we might fit a model for more **epochs** (leave it for training longer) or on more data (give the model more examples to learn from) ``` # Let's rebuild our model with change in the epoch number # 1. Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(1) ]) # 2. Compile the model model.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.SGD(), metrics = ["mae"]) # 3. Fit the model to our dataset model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0) # Our data X , y # Let's see if our model's prediction has improved model.predict([17.0]) ``` We got so close the actual value is 27 we performed a better prediction than the last model we trained. But we need to improve much better. Let's see what more we change and how close can we get to our actual output ``` # Let's rebuild our model with changing the optimization function to Adam # 1. Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(1) ]) # 2. 
Compile the model model.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.Adam(lr = 0.0001), # lr stands for learning rate metrics = ["mae"]) # 3. Fit the model to our dataset model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0) # Prediction of our newly trained model: model.predict([17.0]) # we are going to predict for the same input value 17 ``` Oh..god!! This result went really bad for us. ``` # Let's rebuild our model by adding one extra hidden layer with 100 units # 1. Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(100, activation = "relu"), # only difference we made tf.keras.layers.Dense(1) ]) # 2. Compile the model model.compile(loss = "mae", optimizer = tf.keras.optimizers.SGD(), metrics = ["mae"]) # 3. Fit the model to our dataset model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0) # verbose will hide the output from epochs X , y # It's prediction time! model.predict([17.0]) ``` Oh, this should be 27 but this prediction is very far off from our previous prediction. It seems that our previous model did better than this. Even though we find the values of our loss function are very low than that of our previous model. We still are far away from our label value. **Why is that so??** The explanation is our model is overfitting the dataset. That means it is trying to map a function that just fits the already provided examples correctly but it cannot fit the new examples that we are giving. So, the `mae` and `loss value` if not the ultimate metric to check for improving the model. because we need to get less error for new examples that the model has not seen before. ``` # Let's rebuild our model by using Adam optimizer # 1. Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(100, activation = "relu"), # only difference we made tf.keras.layers.Dense(1) ]) # 2. Compile the model model.compile(loss = "mae", optimizer = tf.keras.optimizers.Adam(), metrics = ["mae"]) # 3. Fit the model to our dataset model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0)# verbose will hide the epochs output model.predict([17.0]) ``` Still not better!! ``` # Let's rebuild our model by adding more layers # 1. Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(100, activation = "relu"), tf.keras.layers.Dense(100, activation = "relu"), tf.keras.layers.Dense(100, activation = "relu"),# only difference we made tf.keras.layers.Dense(1) ]) # default value of lr is 0.001 # 2. Compile the model model.compile(loss = "mae", optimizer = tf.keras.optimizers.Adam(lr = 0.01), # lr stands for learning rate metrics = ["mae"]) # 3. Fit the model to our dataset model.fit(tf.expand_dims(X, axis=-1), y, epochs=100, verbose = 0) # verbose will hide the epochs output ``` The learning rate is the most important hyperparameter for all the Neural Networks ### Evaluating our model In practice, a typical workflow you'll go through when building a neural network is: ``` Build a model -> fit it -> evaluate it -> tweak a model -> fit it -> evaluate it -> tweak it -> fit it ``` Common ways to improve a deep model: * Adding Layers * Increase the number of hidden units * Change the activation functions * Change the optimization function * Change the learning rate * Fitting on more data * Train for longer (more epochs) **Because we can alter each of these they are called hyperparameters** When it comes to evaluation.. 
there are 3 words you should memorize: > "Visualize, Visualize, Visualize" It's a good idea to visualize: * The data - what data are working with? What does it look like * The model itself - What does our model look like? * The training of a model - how does a model perform while it learns? * The predictions of the model - how does the prediction of the model line up against the labels(original value) ``` # Make a bigger dataset X_large = tf.range(-100,100,4) X_large y_large = X_large + 10 y_large import matplotlib.pyplot as plt plt.scatter(X_large,y_large) ``` ### The 3 sets ... * **Training set** - The model learns from this data, which is typically 70-80% of the total data you have available. * **validation set** - The model gets tuned on this data, which is typically 10-15% of the data avaialable. * **Test set** - The model gets evaluated on this data to test what it has learned. This set is typically 10-15%. ``` # Check the length of how many samples we have len(X_large) # split the data into train and test sets # since the dataset is small we can skip the valdation set X_train = X_large[:40] X_test = X_large[40:] y_train = y_large[:40] y_test = y_large[40:] len(X_train), len(X_test), len(y_train), len(y_test) ``` ### Visualizing the data Now we've got our data in training and test sets. Let's visualize it. ``` plt.figure(figsize = (10,7)) # Plot the training data in blue plt.scatter(X_train, y_train, c= 'b', label = "Training data") # Plot the test data in green plt.scatter(X_test, y_test, c = "g", label = "Training data") plt.legend(); # Let's have a look at how to build neural network for our data # 1. Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(1) ]) # default value of lr is 0.001 # 2. Compile the model model.compile(loss = "mae", optimizer = tf.keras.optimizers.SGD(), # lr stands for learning rate metrics = ["mae"]) # 3. Fit the model to our dataset #model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100) ``` Let's visualize it before fitting the model ``` model.summary() ``` model.summary() doesn't work without building the model or fitting the model ``` X[0], y[0] # Let's create a model which builds automatically by defining the input_shape arguments tf.random.set_seed(42) # Create a model(same as above) model = tf.keras.Sequential([ tf.keras.layers.Dense(1, input_shape = [1]) # input_shape is 1 refer above code cell ]) # Compile the model model.compile(loss= "mae", optimizer = tf.keras.optimizers.SGD(), metrics = ["mae"]) model.summary() ``` * **Total params** - total number of parameters in the model. * **Trainable parameters**- these are the parameters (patterns) the model can update as it trains. * **Non-Trainable parameters** - these parameters aren't updated during training(this is typical when you have paramters from other models during **transfer learning**) ``` # Let's have a look at how to build neural network for our data # 1. Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(10, input_shape = [1], name= "input_layer"), tf.keras.layers.Dense(1, name = "output_layer") ], name = "model_1") # 2. Compile the model model.compile(loss = "mae", optimizer = tf.keras.optimizers.SGD(), # lr stands for learning rate metrics = ["mae"]) model.summary() ``` We have changed the layer names and added our custom model name. ``` from tensorflow.keras.utils import plot_model plot_model(model = model, to_file = 'model1.png', show_shapes = True) # Let's have a look at how to build neural network for our data # 1. 
Create the model model = tf.keras.Sequential([ tf.keras.layers.Dense(100, activation = "relu"), tf.keras.layers.Dense(100, activation = "relu"), tf.keras.layers.Dense(100, activation = "relu"),# only difference we made tf.keras.layers.Dense(1) ], name) # default value of lr is 0.001 # 2. Compile the model model.compile(loss = "mae", optimizer = tf.keras.optimizers.Adam(lr = 0.01), # lr stands for learning rate metrics = ["mae"]) # 3. Fit the model to our dataset model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100, verbose = 0) model.predict(X_test) ``` wow, we are so close!!! ``` model.summary() from tensorflow.keras.utils import plot_model plot_model(model = model, to_file = 'model.png', show_shapes = True) ``` ### Visualizing our model's predictions To visualize predictions, it's a good idea to plot them against the ground truth labels. Often you'll see this in the form of `y_test` or `y_true` versus `y_pred` ``` # Set random seed tf.random.set_seed(42) # Create a model (same as above) model = tf.keras.Sequential([ tf.keras.layers.Dense(10, input_shape = [1], name = "input_layer"), tf.keras.layers.Dense(1, name = "output_layer") # define the input_shape to our model ], name = "revised_model_1") # Compile model (same as above) model.compile(loss=tf.keras.losses.mae, optimizer=tf.keras.optimizers.SGD(), metrics=["mae"]) model.summary() model.fit(X_train, y_train, epochs=100, verbose=0) model.summary() # Make some predictions y_pred = model.predict(X_test) tf.constant(y_pred) ``` These are our predictions! ``` y_test ``` These are the ground truth labels! ``` plot_model(model, show_shapes=True) ``` **Note:** IF you feel like you're going to reuse some kind of functionality in future, it's a good idea to define a function so that we can reuse it whenever we need. ``` #Let's create a plotting function def plot_predictions(train_data= X_train, train_labels = y_train, test_data = X_test, test_labels =y_test, predictions = y_pred): """ Plots training data, test data and compares predictions to ground truth labels """ plt.figure(figsize = (10,7)) # Plot training data in blue plt.scatter(train_data, train_labels, c= "b", label = "Training data") # Plot testing data in green plt.scatter(test_data, test_labels, c= "g", label = "Testing data") # Plot model's predictions in red plt.scatter(test_data, predictions, c= "r", label = "Predictions") # Show legends plt.legend(); plot_predictions(train_data=X_train, train_labels=y_train, test_data=X_test, test_labels=y_test, predictions=y_pred) ``` We tuned our model very well this time. The predictions are really close to the actual values. ### Evaluating our model's predictions with regression evaluation metrics Depending on the problem you're working on, there will be different evaluation metrics to evaluate your model's performance. Since, we're working on a regression, two of the main metrics: * **MAE** - mean absolute error, "on average, how wrong id each of my model's predictions" * TensorFlow code: `tf.keras.losses.MAE()` * or `tf.metrics.mean_absolute_error()` $$ MAE = \frac{Σ_{i=1}^{n} |y_i - x_i| }{n} $$ * **MSE** - mean square error, "square of the average errors" * `tf.keras.losses.MSE()` * `tf.metrics.mean_square_error()` $$ MSE = \frac{1}{n} Σ_{i=1}^{n}(Y_i - \hat{Y_i})^2$$ $\hat{Y_i}$ is the prediction our model makes. $Y_i$ is the label value. * **Huber** - Combination of MSE and MAE, Less sensitive to outliers than MSE. 
* `tf.keras.losses.Huber()` ``` # Evaluate the model on test set model.evaluate(X_test, y_test) # calculate the mean absolute error mae = tf.metrics.mean_absolute_error(y_true = y_test, y_pred = tf.constant(y_pred)) mae ``` We got the metric values wrong..why did this happen?? ``` tf.constant(y_pred) y_test ``` Notice that the shape of `y_pred` is (10,1) and the shape of `y_test` is (10,) They might seem the same but they are not of the same shape. Let's reshape the tensor to make the shapes equal. ``` tf.squeeze(y_pred) # Calculate the mean absolute error mae = tf.metrics.mean_absolute_error(y_true = y_test, y_pred = tf.squeeze(y_pred)) mae ``` Now,we got our metric value. The mean absolute error of our model is 3.1969407. Now, let's calculate the mean squared error and see how that goes. ``` # Calculate the mean squared error mse = tf.metrics.mean_squared_error(y_true = y_test, y_pred = tf.squeeze(y_pred)) mse ``` Our mean squared error is 13.070143. Remember, the mean squared error squares the error for every example in the test set and averages the values. So, generally, the mse is largeer than mae. When larger errors are more significant than smaller errors, then it is best to use mse. MAE can be used as a great starter metric for any regression problem. We can also try Huber and see how that goes. ``` # Calculate the Huber metric for our model huber_metric = tf.losses.huber(y_true = y_test, y_pred = tf.squeeze(y_pred)) huber_metric # Make some functions to reuse MAE and MSE and also Huber def mae(y_true, y_pred): return tf.metrics.mean_absolute_error(y_true = y_test, y_pred = tf.squeeze(y_pred)) def mse(y_true, y_pred): return tf.metrics.mean_squared_error(y_true = y_test, y_pred = tf.squeeze(y_pred)) def huber(y_true, y_pred): return tf.losses.huber(y_true = y_test, y_pred = tf.squeeze(y_pred)) ``` ### Running experiments to improve our model ``` Build a model -> fit it -> evaluate it -> tweak a model -> fit it -> evaluate it -> tweak it -> fit it ``` 1. Get more data - get more examples for your model to train on(more oppurtunities to learn patterns or relationships between features and labels). 2. Make your mode larger(using a more complex model) - this might come in the form of more layeres or more hidden unites in each layer. 3. Train for longer - give your model more of a chance to find patterns in the data. Let's do a few modelling experiments: 1. `model_1` - same as original model, 1 layer, trained for 100 epochs. 2. `model_2` - 2 layers, trained for 100 epochs 3. `model_3` - 2 layers, trained for 500 epochs. You can design more experiments too to make the model more better **Build `Model_1`** ``` X_train, y_train # Set random seed tf.random.set_seed(42) # 1. Create the model model_1 = tf.keras.Sequential([ tf.keras.layers.Dense(1, input_shape = [1]) ], name = "Model_1") # 2. Compile the model model_1.compile(loss = tf.keras.losses.mae, optimizer = tf.keras.optimizers.SGD(), metrics = ["mae"]) # 3. Fit the model model_1.fit(X_train, y_train ,epochs = 100, verbose = 0) model_1.summary() # Make and plot the predictions for model_1 y_preds_1 = model_1.predict(X_test) plot_predictions(predictions = y_preds_1) # Calculate model_1 evaluation metrics mae_1 = mae(y_test, y_preds_1) mse_1 = mse(y_test, y_preds_1) mae_1, mse_1 ``` **Build `Model_2`** * 2 dense layers, trained for 100 epochs ``` # Set random seed tf.random.set_seed(42) # 1. Create the model model_2 = tf.keras.Sequential([ tf.keras.layers.Dense(10, input_shape =[1]), tf.keras.layers.Dense(1) ], name = "model_2") # 2. 
# Compile the model
model_2.compile(loss = tf.keras.losses.mae,
                optimizer = tf.keras.optimizers.SGD(),
                metrics = ["mse"]) # Let's build this model with mse as the eval metric

# 3. Fit the model
model_2.fit(X_train, y_train, epochs = 100, verbose = 0)

model_2.summary()

# Make and plot predictions of model_2
y_preds_2 = model_2.predict(X_test)
plot_predictions(predictions = y_preds_2)
```

Yeah, we improved this model a lot compared to the previous one. If you want to compare, scroll up to the plot_predictions output of the previous model and compare it with this one.

```
# Calculate the model_2 evaluation metrics
mae_2 = mae(y_test, y_preds_2)
mse_2 = mse(y_test, y_preds_2)
mae_2, mse_2
```

**Build `Model_3`**

* 2 layers, trained for 500 epochs

```
# Set random seed
tf.random.set_seed(42)

# 1. Create the model
model_3 = tf.keras.Sequential([
  tf.keras.layers.Dense(10, input_shape =[1]),
  tf.keras.layers.Dense(1)
], name = "model_3")

# 2. Compile the model
model_3.compile(loss = tf.keras.losses.mae,
                optimizer = tf.keras.optimizers.SGD(),
                metrics = ["mae"])

# 3. Fit the model (note: fit model_3 here, not model_2)
model_3.fit(X_train, y_train, epochs = 500, verbose = 0)

# Make and plot some predictions
y_preds_3 = model_3.predict(X_test)
plot_predictions(predictions = y_preds_3)
```

This is even worse performance than the first model. We have actually made the model worse. Why? We overfitted the model because we trained it for much longer than we should have.

```
# Calculate the model_3 evaluation metrics
mae_3 = mae(y_test, y_preds_3)
mse_3 = mse(y_test, y_preds_3)
mae_3, mse_3
```

Whoa, the error is extremely high. I think the best of our models is `model_2`.

The Machine Learning practitioner's motto: `Experiment, experiment, experiment`

**Note:** You want to start with small experiments (small models), make sure they work, and then increase their scale when necessary.

### Comparing the results of our experiments

We've run a few experiments, let's compare the results now.

```
# Let's compare our models' results using a pandas DataFrame
import pandas as pd

model_results = [["model_1", mae_1.numpy(), mse_1.numpy()],
                 ["model_2", mae_2.numpy(), mse_2.numpy()],
                 ["model_3", mae_3.numpy(), mse_3.numpy()]]

all_results = pd.DataFrame(model_results, columns =["model", "mae", "mse"])
all_results
```

It looks like model_2 performed the best. Let's take a look at model_2.

```
model_2.summary()
```

This is the model that has done the best on our dataset.

**Note:** One of your main goals should be to minimize the time between your experiments. The more experiments you do, the more things you will figure out that don't work and, in turn, the closer you get to figuring out what does work. Remember the machine learning practitioner's motto: "experiment, experiment, experiment".

## Tracking your experiments

One really good habit of machine learning modelling is to track the results of your experiments. Doing so can be tedious if you are running lots of experiments. Luckily, there are tools to help us!

**Resources:** As you build more models, you'll want to look into using:

* TensorBoard - a component of the TensorFlow library to help track modelling experiments. It is integrated into the TensorFlow library.
* Weights & Biases - a tool for tracking all kinds of machine learning experiments (it plugs straight into TensorBoard).
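As a minimal sketch of the first option (the log directory name here is an arbitrary choice, and this callback is not part of the experiments above), TensorBoard can be wired into `model.fit()` with a callback:

```
import tensorflow as tf

# Minimal sketch: log a training run to TensorBoard via a callback.
# The log directory name is arbitrary; point it wherever you like.
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/model_2")

# e.g. model_2.fit(X_train, y_train, epochs=100, verbose=0,
#                  callbacks=[tensorboard_cb])
# In Colab, view the logs with:
#   %load_ext tensorboard
#   %tensorboard --logdir logs
```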
## Saving our models

Saving our models allows us to use them outside of Google Colab (or wherever they were trained), such as in a web application or a mobile app.

There are two main formats we can save our model in:
1. The SavedModel format
2. The HDF5 format

`model.save()` allows us to save the model so we can load it again later and keep working with it.

```
# Save model using the SavedModel format
model_2.save("best_model_SavedModel_format")
```

If we are planning to use this model inside the TensorFlow framework, we are better off using the `SavedModel` format. But if we are planning to export the model elsewhere and use it outside the TensorFlow framework, use the HDF5 format.

```
# Save model using the HDF5 format
model_2.save("best_model_HDF5_format.h5")
```

Saving a model in the SavedModel format gives us a folder with several files describing our model. Saving a model in the HDF5 format gives us a single file containing our model.

### Loading in a saved model

```
# Load in the SavedModel format model
loaded_SavedModel_format = tf.keras.models.load_model("/content/best_model_SavedModel_format")
loaded_SavedModel_format.summary()

# Let's check whether it is the same as model_2
model_2.summary()

# Compare the model_2 predictions with the SavedModel format model predictions
model_2_preds = model_2.predict(X_test)
loaded_SavedModel_format_preds = loaded_SavedModel_format.predict(X_test)
model_2_preds == loaded_SavedModel_format_preds

mae(y_true = y_test, y_pred = model_2_preds) == mae(y_true = y_test, y_pred = loaded_SavedModel_format_preds)

# Load in a model using the .h5 format
loaded_h5_model = tf.keras.models.load_model("/content/best_model_HDF5_format.h5")
loaded_h5_model.summary()

model_2.summary()
```

The loaded .h5 format model matches our original model_2, so our model loading worked correctly.

```
# Check to see if the loaded .h5 model predictions match model_2
model_2_preds = model_2.predict(X_test)
loaded_h5_model_preds = loaded_h5_model.predict(X_test)
model_2_preds == loaded_h5_model_preds
```

### Download a model (or any other file) from Google Colab

If you want to download your files from Google Colab:
1. You can go to the files tab, right click on the file you're after and click download.
2. Use code (see the cell below).
3. You can save it to Google Drive by connecting to Google Drive and copying it there.

```
# Download a file from Google Colab
from google.colab import files
files.download("/content/best_model_HDF5_format.h5")

# Save a file from Google Colab to Google Drive (requires mounting Google Drive)
!cp /content/best_model_HDF5_format.h5 /content/drive/MyDrive/tensor-flow-deep-learning
!ls /content/drive/MyDrive/tensor-flow-deep-learning
```

We have saved our model to our Google Drive!

## A larger example

We now take a larger dataset and build a regression model on it: forecasting medical insurance charges using the Kaggle [Medical Cost Personal Datasets](https://www.kaggle.com/mirichoi0218/insurance).

```
# Import required libraries
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt

# Read in the insurance data set
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
insurance
```

This is a much bigger dataset than the one we worked with previously.
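Before encoding, it can help to see which columns are non-numeric and therefore need encoding; the quick check below is optional and not part of the original workflow.

```
# Optional: inspect column types to see which features need encoding.
print(insurance.dtypes)
print(insurance["smoker"].unique(), insurance["region"].unique())
```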
```
# One-hot encode the categorical columns of the pandas dataframe
insurance_one_hot = pd.get_dummies(insurance)
insurance_one_hot.head()

# Create X & y values (features and labels)
X = insurance_one_hot.drop("charges", axis = 1)
y = insurance_one_hot["charges"]

# View X
X.head()

# View y
y.head()

# Create training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
len(X), len(X_train), len(X_test)

X_train

insurance["smoker"], insurance["sex"]

# Build a neural network (sort of like model_2 above)
tf.random.set_seed(42)

# 1. Create a model
insurance_model = tf.keras.Sequential([
    tf.keras.layers.Dense(10),
    tf.keras.layers.Dense(1)
])

# 2. Compile the model
insurance_model.compile(loss = tf.keras.losses.mae,
                        optimizer = tf.keras.optimizers.SGD(),
                        metrics = ["mae"])

# 3. Fit the model
insurance_model.fit(X_train, y_train, epochs = 100, verbose = 0)

# Check the results of the insurance model on the test data
insurance_model.evaluate(X_test, y_test)

# Compare the error to the scale of the labels
y_train.median(), y_train.mean()
```

Right now it looks like our model is not performing well, so let's try to improve it. We'll run a couple of experiments:

1. Add an extra layer with more hidden units and use the Adam optimizer.
2. Train for longer (like 200 epochs).
3. We can also run our own custom experiments to improve it.

```
# Set random seed
tf.random.set_seed(42)

# 1. Create the model
insurance_model_2 = tf.keras.Sequential([
    tf.keras.layers.Dense(100),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Dense(1)
], name = "insurance_model_2")

# 2. Compile the model
insurance_model_2.compile(loss = tf.keras.losses.mae,
                          optimizer = tf.keras.optimizers.Adam(),
                          metrics = ["mae"])

# 3. Fit the model
insurance_model_2.fit(X_train, y_train, epochs = 100, verbose = 0)

insurance_model_2.evaluate(X_test, y_test)

# Set random seed
tf.random.set_seed(42)

# 1. Create the model
insurance_model_3 = tf.keras.Sequential([
    tf.keras.layers.Dense(100),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Dense(1)
], name = "insurance_model_3")

# 2. Compile the model
insurance_model_3.compile(loss = tf.keras.losses.mae,
                          optimizer = tf.keras.optimizers.Adam(),
                          metrics = ["mae"])

# 3. Fit the model
history = insurance_model_3.fit(X_train, y_train, epochs = 200, verbose = 0)

# Evaluate our third model
insurance_model_3.evaluate(X_test, y_test)

# Plot history (also known as a loss curve or a training curve)
pd.DataFrame(history.history).plot()
plt.ylabel("loss")
plt.xlabel("epochs")
plt.title("Training curve of our model")
```

**Question:** How long should you train for?

It really depends on the problem you are working on. However, many people have asked this question before, so TensorFlow has a solution! It is called the [EarlyStopping callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping), a TensorFlow component you can add to your model to stop training once it stops improving on a certain metric (see the sketch below).
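Here is a minimal sketch of what that could look like with the insurance model above. The monitored metric, patience and epoch count are illustrative assumptions, not settings from the original notebook.

```
# A minimal sketch of the EarlyStopping callback (illustrative settings)
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="loss",              # metric to watch
                                                  patience=10,                 # epochs with no improvement before stopping
                                                  restore_best_weights=True)   # roll back to the best weights seen

# Train with a generous upper bound on epochs; training may stop earlier
history = insurance_model_3.fit(X_train, y_train,
                                epochs=500,
                                verbose=0,
                                callbacks=[early_stopping])
```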
## Preprocessing data (normalization and standardization)

A short review of our modelling steps in TensorFlow:

1. Get the data ready (turn it into tensors).
2. Build or pick a pretrained model (to suit your problem).
3. Fit the model to the data and make a prediction.
4. Evaluate the model.
5. Improve through experimentation.
6. Save and reload your trained models.

We are going to focus on step 1 to make our dataset richer for training. Some steps involved in getting data ready:

1. Turn all data into numbers (neural networks can't handle strings).
2. Make sure all of your tensors are the right shape.
3. Scale features (normalize or standardize; neural networks tend to prefer normalization) - this is the one thing we haven't done yet while preparing our data.

**If you are not sure which scaling method to use, you can try both and see which performs better.**

```
# Import required libraries
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt

# Read in the insurance dataframe
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
insurance
```

To prepare our data, we can borrow a few classes from Scikit-Learn.

```
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
```

**Feature scaling:**

| **Scaling type** | **What it does** | **Scikit-Learn class** | **When to use** |
| --- | --- | --- | --- |
| Scale (also referred to as normalization) | Converts all values to between 0 and 1 whilst preserving the original distribution | `MinMaxScaler` | Use as the default scaler with neural networks |
| Standardization | Removes the mean and divides each value by the standard deviation | `StandardScaler` | Transform a feature to have a close-to-normal distribution |

```
# Create a column transformer
ct = make_column_transformer(
    (MinMaxScaler(), ["age", "bmi", "children"]), # Turn all values in these columns to between 0 and 1
    (OneHotEncoder(handle_unknown = "ignore"), ["sex", "smoker", "region"])
)

# Create our X and y values (again, because we reimported the dataframe)
X = insurance.drop("charges", axis = 1)
y = insurance["charges"]

# Build our train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)

# Fit the column transformer to our training data (only the training data)
ct.fit(X_train)

# Transform training and test data with normalization (MinMaxScaler) and OneHotEncoder
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)

# What does our data look like now?
X_train.loc[0]

X_train_normal[0], X_train_normal[12], X_train_normal[78] # we have turned all our data into a numerical encoding and also normalized it

X_train.shape, X_train_normal.shape
```

Beautiful! Our data has been normalized and one-hot encoded. Let's build a neural network on it and see how it goes.

```
# Build a neural network model to fit on our normalized data
tf.random.set_seed(42)

# 1. Create the model
insurance_model_4 = tf.keras.Sequential([
    tf.keras.layers.Dense(100),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Dense(1)
])

# 2. Compile the model
insurance_model_4.compile(loss = tf.keras.losses.mae,
                          optimizer = tf.keras.optimizers.Adam(),
                          metrics = ["mae"])

# 3. Fit the model
history = insurance_model_4.fit(X_train_normal, y_train, epochs = 100, verbose = 0)

# Evaluate our insurance model trained on normalized data
insurance_model_4.evaluate(X_test_normal, y_test)

insurance_model_4.summary()

pd.DataFrame(history.history).plot()
plt.ylabel("loss")
plt.xlabel("epochs")
plt.title("Training curve of insurance_model_4")
```

Let's finish by plotting a few graphs, since we have used them the least in this notebook.
``` X["age"].plot(kind = "hist") X["bmi"].plot(kind = "hist") X["children"].value_counts() ``` ## **External Resources:** * [MIT introduction deep learning lecture 1](https://youtu.be/njKP3FqW3Sk) * [Kaggle's datasets](https://www.kaggle.com/data) * [Lion Bridge's collection of datasets](https://lionbridge.ai/datasets/) ## Bibliography: * [Learn TensorFlow and Deep Learning fundamentals with Python (code-first introduction) Part 1/2](https://www.youtube.com/watch?v=tpCFfeUEGs8&list=RDCMUCr8O8l5cCX85Oem1d18EezQ&start_radio=1&rv=tpCFfeUEGs8&t=3) * [Medical cost personal dataset](https://www.kaggle.com/mirichoi0218/insurance) * [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf) * [TensorFlow and Deep learning Daniel Bourke GitHub Repo](https://github.com/mrdbourke/tensorflow-deep-learning)
# Advanced topics The following material is a deep-dive into Yangson, and is not necessarily representative of how one would perform manipulations in a production environment. Please refer to the other tutorials for a better picture of Rosetta's intended use. Keep in mind that the key feature of Yangson is to be able to manipulate YANG data models in a more human-readable format, ala JSON. What lies below digs beneath the higher-level abstractions and should paint a decent picture of the functional nature of Yangson. # Manipulating models with Rosetta and Yangson One of the goals of many network operators is to provide abstractions in a multi-vendor environment. This can be done with YANG and OpenConfig data models, but as they say, the devil is in the details. It occurred to me that you should be able to parse configuration from one vendor and translate it to another. Unfortunately, as we all know, these configurations don't always translate well on a 1-to-1 basis. I will demonstrate this process below and show several features of the related libraries along the way. The following example begins exactly the same as the Cisco parsing tutorial. Let's load up some Juniper config and parse it into a YANG data model. First, we'll read the file. ``` from ntc_rosetta import get_driver import json junos = get_driver("junos", "openconfig") junos_driver = junos() # Strip any rpc tags before and after `<configuration>...</configuration>` with open("data/junos/dev_conf.xml", "r") as fp: config = fp.read() print(config) ``` ## Junos parsing Now, we parse the config and take a look at the data model. ``` from sys import exc_info from yangson.exceptions import SemanticError try: parsed = junos_driver.parse( native={"dev_conf": config}, validate=False, include=[ "/openconfig-interfaces:interfaces", "/openconfig-network-instance:network-instances/network-instance/name", "/openconfig-network-instance:network-instances/network-instance/config", "/openconfig-network-instance:network-instances/network-instance/vlans", ] ) except SemanticError as e: print(f"error: {e}") print(json.dumps(parsed.raw_value(), sort_keys=True, indent=2)) ``` ## Naive translation Since we have a valid data model, let's see if Rosetta can translate it as-is. ``` ios = get_driver("ios", "openconfig") ios_driver = ios() native = ios_driver.translate(candidate=parsed.raw_value()) print(native) ``` Pretty cool, right?! Rosetta does a great job of parsing and translating, but it is a case of "monkey see, monkey do". Rosetta doesn't have any mechanisms to translate interface names, for example. It is up to the operator to perform this sort of manipulation. ## Down the Yangson rabbit hole Yangson allows the developer to easily translate between YANG data models and JSON. Most all of these manipulations can be performed on dictionaries in Python and loaded into data models using [`from_raw`](https://yangson.labs.nic.cz/datamodel.html#yangson.datamodel.DataModel.from_raw). The following examples may appear to be a little obtuse, but the goal is to demonstrate the internals of Yangson. ### And it's mostly functional It is critical to read the short description of the [zipper](https://yangson.labs.nic.cz/instance.html?highlight=zipper#yangson.instance.InstanceNode) interface in the InstanceNode section of the docs. Yanson never manipulates an object, but returns a copy with the manipulated attributes. ### Show me the code! Let's take a look at fixing up the interface names and how we can manipulate data model attributes. 
To do that, we need to locate the attribute in the tree using the [`parse_resource_id`](https://yangson.labs.nic.cz/datamodel.html#yangson.datamodel.DataModel.parse_resource_id) method. This method returns an [`instance route'](https://yangson.labs.nic.cz/instance.html?highlight=arrayentry#yangson.instance.InstanceRoute). The string passed to the method is an xpath. ``` # Locate the interfaces in the tree. We need to modify this one # Note that we have to URL-escape the forward slashes per https://tools.ietf.org/html/rfc8040#section-3.5.3 irt = parsed.datamodel.parse_resource_id("openconfig-interfaces:interfaces/interface=xe-0%2F0%2F1") current_data = parsed.root.goto(irt) print("Current node configuration: ", json.dumps(current_data.raw_value(), sort_keys=True, indent=2)) modify_data = current_data.raw_value() ifname = 'Ethernet0/0/1' modify_data['name'] = ifname modify_data['config']['name'] = ifname stub = current_data.update(modify_data, raw=True) print("\n\nCandidate node configuration: ", json.dumps(stub.raw_value(), sort_keys=True, indent=2)) ``` ### Instance routes You will notice a `goto` method on child nodes. You _can_ access successors with this method, but you have to build the path from the root `datamodel` attribute as seen in the following example. If you aren't sure where an object is in the tree, you can also rely on its `path` attribute. Quick tangent... what is the difference between `parse_instance_id` and `parse_resource_id`? The answer can be found in the [Yangson glossary](https://yangson.labs.nic.cz/glossary.html) and the respective RFC's. ``` # TL;DR irt = parsed.datamodel.parse_instance_id('/openconfig-network-instance:network-instances/network-instance[1]/vlans/vlan[3]') print(parsed.root.goto(irt).raw_value()) irt = parsed.datamodel.parse_resource_id('openconfig-network-instance:network-instances/network-instance=default/vlans/vlan=10') print(parsed.root.goto(irt).raw_value()) ``` What about the rest of the interfaces in the list? Yangson provides an iterator for array nodes. ``` import re irt = parsed.datamodel.parse_resource_id("openconfig-interfaces:interfaces/interface") iface_objs = parsed.root.goto(irt) # Swap the name as required p, sub = re.compile(r'xe-'), 'Ethernet' # There are a couple challenges here. First is that Yanson doesn't impliment __len__ # The second problem is that you cannot modify a list in-place, so we're basically # hacking this to hijack the index of the current element and looking it up from a "clean" # instance. This is a pet example! It would be much easier using Python dicts. new_ifaces = None for iface in iface_objs: name_irt = parsed.datamodel.parse_instance_id('/name') cname_irt = parsed.datamodel.parse_instance_id('/config/name') if new_ifaces: name = new_ifaces[iface.index].goto(name_irt) else: name = iface.goto(name_irt) name = name.update(p.sub(sub, name.raw_value()), raw=True) cname = name.up().goto(cname_irt) cname = cname.update(p.sub(sub, cname.raw_value()), raw=True) iface = cname.up().up() new_ifaces = iface.up() print(json.dumps(new_ifaces.raw_value(), sort_keys=True, indent=2)) # Translate to Cisco-speak native = ios_driver.translate(candidate=new_ifaces.top().raw_value()) print(native) ``` Hooray! That should work. One final approach, just to show you different ways of doing things. This is another pet example to demonstrate Yangson methods. ``` import re from typing import Dict irt = parsed.datamodel.parse_resource_id("openconfig-interfaces:interfaces") iface_objs = parsed.root.goto(irt) # Nuke the whole branch! 
iface_objs = iface_objs.delete_item("interface") def build_iface(data: str) -> Dict: # Example template, this could be anything you like that conforms to the schema return { "name": f"{data['name']}", "config": { "name": f"{data['name']}", "description": f"{data['description']}", "type": "iana-if-type:ethernetCsmacd", "enabled": True }, } iface_data = [ build_iface({ "name": f"TenGigabitEthernet0/{idx}", "description": f"This is interface TenGigabitEthernet0/{idx}" }) for idx in range(10, 0, -1) ] initial = iface_data.pop() # Start a new interface list iface_objs = iface_objs.put_member("interface", [initial], raw=True) cur_obj = iface_objs[0] # Yangson exposes `next`, `insert_after`, and `insert_before` methods. # There is no `append`. while iface_data: new_obj = cur_obj.insert_after(iface_data.pop(), raw=True) cur_obj = new_obj # Translate to Cisco-speak native = ios_driver.translate(candidate=cur_obj.top().raw_value()) print(native) ``` ### Deleting individual items Here is an example of deleting an individual item. Navigating the tree can be a bit tricky, but it's not too bad once you get the hang of it. ``` # Locate a vlan by ID and delete it irt = parsed.datamodel.parse_resource_id("openconfig-network-instance:network-instances/network-instance=default/vlans/vlan=10") vlan10 = parsed.root.goto(irt) vlans = vlan10.up().delete_item(vlan10.index) print(json.dumps(vlans.raw_value(), sort_keys=True, indent=2)) ```
# Boltzmann Machine ## Downloading the dataset ### ML-100K ``` # !wget "http://files.grouplens.org/datasets/movielens/ml-100k.zip" # !unzip ml-100k.zip # !ls ``` ### ML-1M ``` # !wget "http://files.grouplens.org/datasets/movielens/ml-1m.zip" # !unzip ml-1m.zip # !ls ``` ## Importing the libraries ``` import numpy as np import pandas as pd import torch import torch.nn as nn import torch.nn.parallel import torch.optim as optim import torch.utils.data from torch.autograd import Variable ``` ## Importing the dataset ``` # We won't be using this dataset. movies = pd.read_csv('ml-1m/movies.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1') users = pd.read_csv('ml-1m/users.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1') ratings = pd.read_csv('ml-1m/ratings.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1') ``` ## Preparing the training set and the test set ``` training_set = pd.read_csv('ml-100k/u1.base', delimiter = '\t') training_set = np.array(training_set, dtype = 'int') test_set = pd.read_csv('ml-100k/u1.test', delimiter = '\t') test_set = np.array(test_set, dtype = 'int') ``` ## Getting the number of users and movies ``` nb_users = int(max(max(training_set[:,0]), max(test_set[:,0]))) nb_movies = int(max(max(training_set[:,1]), max(test_set[:,1]))) print(nb_users,'|',nb_movies) ``` ## Converting the data into an array with users in lines and movies in columns ``` def convert(data): new_data = [] for id_users in range(1, nb_users + 1): id_movies = data[:,1][data[:,0] == id_users] id_ratings = data[:,2][data[:,0] == id_users] ratings = np.zeros(nb_movies) ratings[id_movies - 1] = id_ratings new_data.append(list(ratings)) return new_data training_set = convert(training_set) test_set = convert(test_set) ``` ## Converting the data into Torch Tensors ``` training_set = torch.FloatTensor(training_set) test_set = torch.FloatTensor(test_set) ``` ## Converting the ratings into binary ratings 1 (Liked) or 0 (Not Liked) ``` training_set[training_set == 0] = -1 training_set[training_set == 1] = 0 training_set[training_set == 2] = 0 training_set[training_set >= 3] = 1 test_set[test_set == 0] = -1 test_set[test_set == 1 ] = 0 test_set[test_set == 2] = 0 test_set[test_set >= 3] = 1 ``` ## Creating the architecture of the Neural Network ``` class RBM(): def __init__(self, nv, nh): self.W = torch.randn(nh, nv) self.a = torch.randn(1, nh) self.b = torch.randn(1, nv) def sample_h(self, x): wx = torch.mm(x, self.W.t()) activation = wx + self.a.expand_as(wx) p_h_given_v = torch.sigmoid(activation) return p_h_given_v, torch.bernoulli(p_h_given_v) def sample_v(self, y): wy = torch.mm(y, self.W) activation = wy + self.b.expand_as(wy) p_v_given_h = torch.sigmoid(activation) return p_v_given_h, torch.bernoulli(p_v_given_h) def train(self, v0, vk, ph0, phk): self.W += (torch.mm(v0.t(), ph0) - torch.mm(vk.t(), phk)).t() self.b += torch.sum((v0 - vk), 0) self.a += torch.sum((ph0 - phk), 0) nv = len(training_set[0]) nh = 100 batch_size = 100 rbm = RBM(nv, nh) ``` ## Training the RBM ``` nb_epoch = 10 for epoch in range(1, nb_epoch + 1): train_loss = 0 train_rmse = 0 s = 0. 
for id_user in range(0, nb_users - batch_size, batch_size): vk = training_set[id_user:id_user+batch_size] v0 = training_set[id_user:id_user+batch_size] ph0,_ = rbm.sample_h(v0) for k in range(10): _,hk = rbm.sample_h(vk) _,vk = rbm.sample_v(hk) vk[v0<0] = v0[v0<0] phk,_ = rbm.sample_h(vk) rbm.train(v0, vk, ph0, phk) train_loss += torch.mean(torch.abs(v0[v0>=0] - vk[v0>=0]))# train_rmse += np.sqrt(torch.mean((v0[v0 >= 0] - vk[v0 >= 0])**2)) s += 1. print('epoch : {} | Loss : {} | RMSE : {}'.format(epoch, train_loss/s, train_rmse/s)) ``` ## Testing the RBM ``` test_loss = 0 test_rmse = 0 s = 0. for id_user in range(nb_users): v = training_set[id_user:id_user+1] vt = test_set[id_user:id_user+1] if len(vt[vt>=0]) > 0: _,h = rbm.sample_h(v) _,v = rbm.sample_v(h) test_loss += torch.mean(torch.abs(vt[vt>=0] - v[vt>=0])) test_rmse += np.sqrt(torch.mean((vt[vt>=0] - v[vt>=0])**2)) s += 1. print('loss : {} | RMSE : {}'.format(test_loss/s, test_rmse/s)) ```
# Autobatching log-densities example [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.sandbox.google.com/github/google/jax/blob/master/docs/notebooks/vmapped_log_probs.ipynb) This notebook demonstrates a simple Bayesian inference example where autobatching makes user code easier to write, easier to read, and less likely to include bugs. Inspired by a notebook by @davmre. ``` import functools import itertools import re import sys import time from matplotlib.pyplot import * import jax from jax import lax import jax.numpy as jnp import jax.scipy as jsp from jax import random import numpy as np import scipy as sp ``` ## Generate a fake binary classification dataset ``` np.random.seed(10009) num_features = 10 num_points = 100 true_beta = np.random.randn(num_features).astype(jnp.float32) all_x = np.random.randn(num_points, num_features).astype(jnp.float32) y = (np.random.rand(num_points) < sp.special.expit(all_x.dot(true_beta))).astype(jnp.int32) y ``` ## Write the log-joint function for the model We'll write a non-batched version, a manually batched version, and an autobatched version. ### Non-batched ``` def log_joint(beta): result = 0. # Note that no `axis` parameter is provided to `jnp.sum`. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.)) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta)))) return result log_joint(np.random.randn(num_features)) # This doesn't work, because we didn't write `log_prob()` to handle batching. try: batch_size = 10 batched_test_beta = np.random.randn(batch_size, num_features) log_joint(np.random.randn(batch_size, num_features)) except ValueError as e: print("Caught expected exception " + str(e)) ``` ### Manually batched ``` def batched_log_joint(beta): result = 0. # Here (and below) `sum` needs an `axis` parameter. At best, forgetting to set axis # or setting it incorrectly yields an error; at worst, it silently changes the # semantics of the model. result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=1.), axis=-1) # Note the multiple transposes. Getting this right is not rocket science, # but it's also not totally mindless. (I didn't get it right on the first # try.) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta.T).T)), axis=-1) return result batch_size = 10 batched_test_beta = np.random.randn(batch_size, num_features) batched_log_joint(batched_test_beta) ``` ### Autobatched with vmap It just works. ``` vmap_batched_log_joint = jax.vmap(log_joint) vmap_batched_log_joint(batched_test_beta) ``` ## Self-contained variational inference example A little code is copied from above. ### Set up the (batched) log-joint function ``` @jax.jit def log_joint(beta): result = 0. # Note that no `axis` parameter is provided to `jnp.sum`. 
result = result + jnp.sum(jsp.stats.norm.logpdf(beta, loc=0., scale=10.)) result = result + jnp.sum(-jnp.log(1 + jnp.exp(-(2*y-1) * jnp.dot(all_x, beta)))) return result batched_log_joint = jax.jit(jax.vmap(log_joint)) ``` ### Define the ELBO and its gradient ``` def elbo(beta_loc, beta_log_scale, epsilon): beta_sample = beta_loc + jnp.exp(beta_log_scale) * epsilon return jnp.mean(batched_log_joint(beta_sample), 0) + jnp.sum(beta_log_scale - 0.5 * np.log(2*np.pi)) elbo = jax.jit(elbo) elbo_val_and_grad = jax.jit(jax.value_and_grad(elbo, argnums=(0, 1))) ``` ### Optimize the ELBO using SGD ``` def normal_sample(key, shape): """Convenience function for quasi-stateful RNG.""" new_key, sub_key = random.split(key) return new_key, random.normal(sub_key, shape) normal_sample = jax.jit(normal_sample, static_argnums=(1,)) key = random.PRNGKey(10003) beta_loc = jnp.zeros(num_features, jnp.float32) beta_log_scale = jnp.zeros(num_features, jnp.float32) step_size = 0.01 batch_size = 128 epsilon_shape = (batch_size, num_features) for i in range(1000): key, epsilon = normal_sample(key, epsilon_shape) elbo_val, (beta_loc_grad, beta_log_scale_grad) = elbo_val_and_grad( beta_loc, beta_log_scale, epsilon) beta_loc += step_size * beta_loc_grad beta_log_scale += step_size * beta_log_scale_grad if i % 10 == 0: print('{}\t{}'.format(i, elbo_val)) ``` ### Display the results Coverage isn't quite as good as we might like, but it's not bad, and nobody said variational inference was exact. ``` figure(figsize=(7, 7)) plot(true_beta, beta_loc, '.', label='Approximated Posterior Means') plot(true_beta, beta_loc + 2*jnp.exp(beta_log_scale), 'r.', label='Approximated Posterior $2\sigma$ Error Bars') plot(true_beta, beta_loc - 2*jnp.exp(beta_log_scale), 'r.') plot_scale = 3 plot([-plot_scale, plot_scale], [-plot_scale, plot_scale], 'k') xlabel('True beta') ylabel('Estimated beta') legend(loc='best') ```
<a href="https://colab.research.google.com/github/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/Part%201%20-%20Tensors%20in%20PyTorch%20(Exercises).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Introduction to Deep Learning with PyTorch In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. ## Neural Networks Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. <img src="https://github.com/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/assets/simple_neuron.png?raw=1" width=400px> Mathematically this looks like: $$ \begin{align} y &= f(w_1 x_1 + w_2 x_2 + b) \\ y &= f\left(\sum_i w_i x_i +b \right) \end{align} $$ With vectors this is the dot/inner product of two vectors: $$ h = \begin{bmatrix} x_1 \, x_2 \cdots x_n \end{bmatrix} \cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix} $$ ## Tensors It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. <img src="https://github.com/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/assets/tensor_examples.svg?raw=1" width=600px> With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network. 
``` # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch # First, import PyTorch import torch def activation(x): """ Sigmoid activation function Arguments --------- x: torch.Tensor """ return 1/(1+torch.exp(-x)) ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 5)) # True weights for our data, random normal variables again weights = torch.randn_like(features) # and a true bias term bias = torch.randn((1, 1)) ``` Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function. ``` ## Calculate the output of this network using the weights and bias tensors y = activation(torch.sum(features * weights) + bias) ``` You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. 
If we try to do it with `features` and `weights` as they are, we'll get an error ```python >> torch.mm(features, weights) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-13-15d592eb5279> in <module>() ----> 1 torch.mm(features, weights) RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033 ``` As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal to the number of rows in the second column. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication. ``` ## Calculate the output of this network using matrix multiplication y = activation(torch.mm(features, weights.view(5, 1)) + bias) ``` ### Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. <img src='https://github.com/agungsantoso/deep-learning-v2-pytorch/blob/master/intro-to-pytorch/assets/multilayer_diagram_weights.png?raw=1' width=450px> The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. 
We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$ \vec{h} = [h_1 \, h_2] = \begin{bmatrix} x_1 \, x_2 \cdots \, x_n \end{bmatrix} \cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2} \end{bmatrix} $$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply $$ y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right) $$ ``` ### Generate some data torch.manual_seed(7) # Set the random seed so things are predictable # Features are 3 random normal variables features = torch.randn((1, 3)) # Define the size of each layer in our network n_input = features.shape[1] # Number of input units, must match number of input features n_hidden = 2 # Number of hidden units n_output = 1 # Number of output units # Weights for inputs to hidden layer W1 = torch.randn(n_input, n_hidden) # Weights for hidden layer to output layer W2 = torch.randn(n_hidden, n_output) # and bias terms for hidden and output layers B1 = torch.randn((1, n_hidden)) B2 = torch.randn((1, n_output)) ``` > **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`. ``` ## Your solution here h = activation(torch.mm(features, W1) + B1) output = activation(torch.mm(h, W2) + B2) output ``` If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. ## Numpy to Torch and back Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method. ``` import numpy as np a = np.random.rand(4,3) a b = torch.from_numpy(a) b b.numpy() ``` The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well. ``` # Multiply PyTorch Tensor by 2, in place b.mul_(2) # Numpy array matches new values from Tensor a ```
# Optimization Algorithms

In deep learning, our goal is for our neural networks to learn to approximate a function of interest, such as the price of houses in a regression, or the function that classifies objects in a photo in the case of classification.

In the last notebook, we programmed our first neural network. We also saw the weight update formula. In case you don't remember, the weights and biases were updated as follows:

$$w_i = w_i - \lambda * \partial w $$
$$b_i = b_i - \lambda * \partial b$$

But have you ever stopped to think about where these formulas come from? And are there better ways to update the weights? That's what we are going to see in this notebook.

## Stochastic Gradient Descent (SGD)

In stochastic gradient descent we split our training data into several subsets, which we call mini-batches. At first they will be small, around 32-128 examples; in more advanced applications they tend to be much larger, on the order of 1024 or even 8192 examples per mini-batch.

As in standard gradient descent, we compute the gradient of the cost function with respect to the examples, and subtract the gradient times a learning rate from the network's parameters. We can view SGD as taking a small step in the direction that most reduces the value of the loss.

### Equation

$w_{t+1} = w_t - \eta \cdot \nabla L$

### Code

```
import jax

def sgd(weights, gradients, eta):
    return jax.tree_util.tree_multimap(lambda w, g: w - eta*g, weights, gradients)
```

Let's use SGD to optimize a simple function.

```
#hide
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

def f(x):
    return x**2 - 25

x = np.linspace(-10, 10, num=100)
y = f(x)
plt.plot(x, y)
plt.ylim(-50)

x0 = 9.0
f_g = jax.value_and_grad(f)

x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0 = sgd(x0, grads, 0.9)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```

## Momentum

One problem with using mini-batches is that we are now **estimating** the direction that decreases the loss on the training set, and the smaller the mini-batch, the noisier our estimate. To fix this noise problem we introduce the notion of momentum. Momentum makes the optimization behave like a heavy ball rolling down a mountain: even if the path is full of bumps and holes, the direction of the ball is not affected very much. From a more mathematical point of view, our weight updates become a combination of the gradient from this step and the previous gradients, stabilizing training.

### Equation

$v_{t} = \gamma v_{t-1} + (1 - \gamma) \nabla L \\ w_{t+1} = w_t - \eta v_t $

Here gamma acts as a coefficient that weights how much to rely on the previous updates versus the new gradient.

### Code

```
def momentum(weights, gradients, eta, mom, gamma):
    # Exponential moving average of the gradients
    # (note: averages the previous momentum with the new gradient)
    mom = jax.tree_util.tree_multimap(
        lambda v, g: gamma*v + (1 - gamma)*g, mom, gradients)
    weights = jax.tree_util.tree_multimap(
        lambda w, v: w - eta*v, weights, mom)
    return weights, mom

x0 = 9.0
mom = 0.0

x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0, mom = momentum(x0, grads, 0.9, mom, 0.99)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```

## RMSProp

Created by Geoffrey Hinton during a lecture, this method is the first **adaptive method** we are seeing.
What this means is that the method tries to automatically compute a different learning rate for each of the weights in our neural network, using small rates for parameters that are updated frequently and larger rates for parameters that are updated more rarely, allowing faster optimization.

More specifically, RMSProp divides the usual SGD update by the root of the running average of the squares of the previous gradients (hence the name Root Mean Square Propagation), thus reducing the magnitude of the update according to the previous magnitudes.

### Equation

$ \nu_{t} = \gamma \nu_{t-1} + (1 - \gamma) (\nabla L)^2 \\ w_{t+1} = w_t - \frac{\eta \nabla L}{\sqrt{\nu_t + \epsilon}} $

### Code

```
def computa_momento(updates, moments, decay, order):
    return jax.tree_multimap(
        lambda g, t: (1 - decay) * (g ** order) + decay * t, updates, moments)

def rmsprop(weights, gradients, eta, nu, gamma):
    nu = computa_momento(gradients, nu, gamma, 2)
    updates = jax.tree_multimap(
        lambda g, n: g * jax.lax.rsqrt(n + 1e-8), gradients, nu)
    weights = jax.tree_util.tree_multimap(lambda w, g: w - eta*g, weights, updates)
    return weights, nu

x0 = 9.0
nu = 0.0

x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0, nu = rmsprop(x0, grads, 0.9, nu, 0.99)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```

## Adam

Finally, Adam uses ideas similar to both Momentum and RMSProp, keeping exponential moving averages of the past gradients as well as of their squares.

### Equation

$ m_t = \beta_1 m_{t-1} + (1 - \beta_1) \nabla L \\ v_t = \beta_2 v_{t-1} + (1 - \beta_2) (\nabla L)^2 \\ w_{t+1} = w_t - \frac{\eta m_t}{\sqrt{v_t} + \epsilon} $

### Code

```
import jax.numpy as jnp

def adam(weights, gradients, eta, mu, nu, b1, b2):
    mu = computa_momento(gradients, mu, b1, 1)
    nu = computa_momento(gradients, nu, b2, 2)
    updates = jax.tree_multimap(
        lambda m, v: m / (jnp.sqrt(v + 1e-6) + 1e-8), mu, nu)
    weights = jax.tree_util.tree_multimap(lambda w, g: w - eta*g, weights, updates)
    return weights, mu, nu

x0 = 9.0
mu = 0.0
nu = 0.0

x_ = []
y_ = []
for i in range(10):
    y0, grads = f_g(x0)
    x_.append(x0)
    y_.append(y0)
    x0, mu, nu = adam(x0, grads, 0.8, mu, nu, 0.9, 0.999)

plt.plot(x, y)
plt.plot(x_, y_, color='red', marker='o');
```
# MNIST With SET This is an example of training an SET network on the MNIST dataset using synapses, pytorch, and torchvision. ``` #Import torch libraries and get SETLayer from synapses import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from synapses import SETLayer #Some extras for visualizations import numpy as np import matplotlib.pyplot as plt import seaborn as sns from IPython.display import clear_output print("done") ``` ## SET Layer The SET layer is a pytorch module that works with a similar API to a standard fully connected layer; to initialize, specify input and output dimensions.<br><br> NOTE: one condition mentioned in the paper is that epsilon (a hyperparameter controlling layer sparsity) be much less than the input dimension and much less than the output dimension. The default value of epsilon is 11. Keep dimensions much bigger than epsilon! (epsilon can be passed in as an init argument to the layer). ``` #initialize the layer sprs = SETLayer(128, 256) #We can see the layer transforms inputs as we expect inp = torch.randn((2, 128)) print('Input batch shape: ', tuple(inp.shape)) out = sprs(inp) print('Output batch shape: ', tuple(out.shape)) ``` In terms of behavior, the SETLayer transforms an input vector into the output space as would a fcl. ## Initial Connection Distribution The intialized layer has randomly assigned connections between input nodes and output nodes; each connection is associated with a weight, drawn from a normal distribution. ``` #Inspect init weight distribution plt.hist(np.array(sprs.weight.data), bins=40) plt.title('Weights distribution on initialization') plt.xlabel('Weight Value') plt.ylabel('Number of weights') plt.show() vec = sprs.connections[:, 0] vec = np.array(vec) values, counts = np.unique(vec, return_counts=True) plt.title('Connections to inputs') plt.bar(values, counts) plt.xlabel('Input vector index') plt.ylabel('Number of connections') plt.show() print("done") ``` The weights are sampled from a normal distribution, as is done with a standard fcl. The connections to the inputs are uniformly distributed.<br><br> ## Killing Connections When connections are reassigned in SET, some proportion (defined by hyperparameter zeta) of the weights closest to zero are removed. We can set these to zero using the zero_connections method on the layer. (This method leaves the connections unchanged.) ``` sprs.zero_connections() #Inspect init weight distribution plt.hist(np.array(sprs.weight.data), bins=40) plt.title('Weights distribution after zeroing connections') plt.xlabel('Weight Value') plt.ylabel('Number of weights') plt.show() print("done") ``` ## Evolving Connections The evolve_connections() method will reassign these weights to new connections between input and output nodes. By default, these weights are initialized by sampling from the same distribution as the init function. Optionally, these weights can be set at zero (with init=False argument). ``` sprs.evolve_connections() plt.hist(np.array(sprs.weight.data), bins=40) plt.title('Weights distribution after evolving connections') plt.show() plt.title('Connections to inputs') plt.bar(values, counts) plt.xlabel('Input vector index') plt.ylabel('Number of connections') plt.show() print("done") ``` We can see these weight values have been re-distributed; the new connections conform to the same uniform distribution as before. 
(We see in the SET paper, and here later on, that the adaptive algorithm learns to allocate these connections to more important input values.) ## A Simple SET Model The following is a simple sparsely-connected model using SETLayers with default hyperparameters. ``` class SparseNet(nn.Module): def __init__(self): super(SparseNet, self).__init__() self.set_layers = [] self.set1 = SETLayer(784, 512) self.set_layers.append(self.set1) #self.set2 = SETLayer(512, 512) #self.set_layers.append(self.set2) self.set2 = SETLayer(512, 128) self.set_layers.append(self.set2) #Use a dense layer for output because of low output dimensionality self.fc1 = nn.Linear(128, 10) def zero_connections(self): """Sets connections to zero for inferences.""" for layer in self.set_layers: layer.zero_connections() def evolve_connections(self): """Evolves connections.""" for layer in self.set_layers: layer.evolve_connections() def forward(self, x): x = x.reshape(-1, 784) x = F.relu(self.set1(x)) x = F.relu(self.set2(x)) #x = F.relu(self.set3(x)) x = self.fc1(x) return F.log_softmax(x, dim=1) def count_params(model): prms = 0 for parameter in model.parameters(): n_params = 1 for prm in parameter.shape: n_params *= prm prms += n_params return prms device = "cpu" sparse_net = SparseNet().to(device) print('number of params: ', count_params(sparse_net)) ``` Consider a fully-connected model with the same architecture: It would contain more than 20 times the number of parameters!<br> ## Training on MNIST This code was adapted directly from the [pytorch mnist tutorial](https://github.com/pytorch/examples/blob/master/mnist/main.py). ``` class History(object): """Tracks and plots training history""" def __init__(self): self.train_loss = [] self.val_loss = [] self.train_acc = [] self.val_acc = [] def plot(self): clear_output() plt.plot(self.train_loss, label='train loss') plt.plot(self.train_acc, label='train acc') plt.plot(self.val_loss, label='val loss') plt.plot(self.val_acc, label='val acc') plt.legend() plt.show() def train(log_interval, model, device, train_loader, optimizer, epoch, history): model.train() correct = 0 loss_ = [] for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() loss = F.nll_loss(output, target) loss.backward() loss_.append(loss.item()) optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) history.train_loss.append(np.array(loss_).mean()) history.train_acc.append(correct/len(train_loader.dataset)) return history def test(model, device, test_loader, history): model.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() acc = correct / len(test_loader.dataset) test_loss /= len(test_loader.dataset) print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)'.format( test_loss, correct, len(test_loader.dataset), 100. 
* acc)) history.val_loss.append(test_loss) history.val_acc.append(acc) return history print("done") torch.manual_seed(0) #Optimizer settings lr = .01 momentum = .5 epochs = 50 batch_size=128 log_interval = 64 test_batch_size=128 train_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=True) print("done") ``` ## Dealing with Optimizer Buffers Synapses recycles parameters. When connections are broken and reassigned, its parameter gets set to zero.<br><br> This system is designed to be computationally efficient, but it comes with a nasty side-effect. Often, we use optimizers with some sort of buffer; the simplest example is momentum in SGD. When we reset a parameter, the information about the overwritten parameter in the optimizer buffer is not useful. We need to overwrite specific values in the buffer also. To do this in pytorch, we need to pass the optimizer to each SETLayer to let synapses do this for us. <br><br> <b>Notice: I'm still working out the best way to initialize adaptive optimizers (current version makes a naive attempt to pick good values); SGD with momentum works fine</b> ``` optimizer = optim.SGD(sparse_net.parameters(), lr=lr, momentum=momentum, weight_decay=1e-2) for layer in sparse_net.set_layers: #here we tell our set layers about layer.optimizer = optimizer #This guy will keep track of optimization metrics. set_history = History() print("done") def show_MNIST_connections(model): vec = model.set1.connections[:, 0] vec = np.array(vec) _, counts = np.unique(vec, return_counts=True) t = counts.reshape(28, 28) sns.heatmap(t, cmap='viridis', xticklabels=[], yticklabels=[], square=True); plt.title('Connections per input pixel'); plt.show(); v = [t[13-i:15+i,13-i:15+i].mean() for i in range(14)] plt.plot(v) plt.show() print("done") import time epochs = 1000 for epoch in range(1, epochs + 1): #In the paper, evolutions occur on each epoch if epoch != 1: set_history.plot() show_MNIST_connections(sparse_net) if epoch != 1: print('Train set: Average loss: {:.4f}, Accuracy: {:.2f}%'.format( set_history.train_loss[epoch-2], 100. * set_history.train_acc[epoch-2])) print('Test set: Average loss: {:.4f}, Accuracy: {:.2f}%'.format( set_history.val_loss[epoch-2], 100. * set_history.val_acc[epoch-2])) sparse_net.evolve_connections() show_MNIST_connections(sparse_net) set_history = train(log_interval, sparse_net, device, train_loader, optimizer, epoch, set_history) #And smallest connections are removed during inference. sparse_net.zero_connections() set_history = test(sparse_net, device, test_loader, set_history) time.sleep(10) ```
# Just-in-time Compilation with [Numba](http://numba.pydata.org/) ## Numba is a JIT compiler which translates Python code in native machine language * Using special decorators on Python functions Numba compiles them on the fly to machine code using LLVM * Numba is compatible with Numpy arrays which are the basis of many scientific packages in Python * It enables parallelization of machine code so that all the CPU cores are used ``` import math import numpy as np import matplotlib.pyplot as plt import numba ``` ## Using `numba.jit` Numba offers `jit` which can used to decorate Python functions. ``` def is_prime(n): if n <= 1: raise ArithmeticError('"%s" <= 1' % n) if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True n = np.random.randint(2, 10000000, dtype=np.int64) # Get a random integer between 2 and 10000000 print(n, is_prime(n)) #is_prime(1) @numba.jit def is_prime_jitted(n): if n <= 1: raise ArithmeticError('"%s" <= 1' % n) if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=10000) %time p1 = [is_prime(n) for n in numbers] %time p2 = [is_prime_jitted(n) for n in numbers] ``` ## Using `numba.jit` with `nopython=True` ``` @numba.jit(nopython=True) def is_prime_njitted(n): if n <= 1: raise ArithmeticError('"%s" <= 1' % n) if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=1000) %time p1 = [is_prime_jitted(n) for n in numbers] %time p2 = [is_prime_njitted(n) for n in numbers] ``` ## Using ` @numba.jit(nopython=True)` is equivalent to using ` @numba.njit` ``` @numba.njit def is_prime_njitted(n): if n <= 1: raise ArithmeticError('n <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=1000) %time p = [is_prime_jitted(n) for n in numbers] %time p = [is_prime_njitted(n) for n in numbers] ``` ## Use `cache=True` to cache the compiled function ``` import math from numba import njit @njit(cache=True) def is_prime_njitted_cached(n): if n <= 1: raise ArithmeticError('n <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 100000, dtype=np.int64, size=1000) %time p = [is_prime_njitted(n) for n in numbers] %time p = [is_prime_njitted_cached(n) for n in numbers] ``` ## Vector Triad Benchmark Python vs Numpy vs Numba ``` from timeit import default_timer as timer def vecTriad(a, b, c, d): for j in range(a.shape[0]): a[j] = b[j] + c[j] * d[j] def vecTriadNumpy(a, b, c, d): a[:] = b + c * d @numba.njit() def vecTriadNumba(a, b, c, d): for j in range(a.shape[0]): a[j] = b[j] + c[j] * d[j] # Initialize Vectors n = 10000 # Vector size r = 100 # Iterations a = np.zeros(n, dtype=np.float64) b = np.empty_like(a) b[:] = 1.0 c = np.empty_like(a) c[:] = 1.0 d = np.empty_like(a) d[:] = 1.0 # Python version start = timer() for i in range(r): vecTriad(a, b, c, d) end = timer() mflops = 2.0 * r * n / ((end - start) * 1.0e6) 
print(f'Python: Mflops/sec: {mflops}') # Numpy version start = timer() for i in range(r): vecTriadNumpy(a, b, c, d) end = timer() mflops = 2.0 * r * n / ((end - start) * 1.0e6) print(f'Numpy: Mflops/sec: {mflops}') # Numba version vecTriadNumba(a, b, c, d) # Run once to avoid measuring the compilation overhead start = timer() for i in range(r): vecTriadNumba(a, b, c, d) end = timer() mflops = 2.0 * r * n / ((end - start) * 1.0e6) print(f'Numba: Mflops/sec: {mflops}') ``` ## Eager compilation using function signatures ``` import math from numba import njit @njit(['boolean(int64)', 'boolean(int32)']) def is_prime_njitted_eager(n): if n <= 1: raise ArithmeticError('n <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 1000000, dtype=np.int64, size=1000) # Run twice aft %time p1 = [is_prime_njitted_eager(n) for n in numbers] %time p2 = [is_prime_njitted_eager(n) for n in numbers] p1 = [is_prime_njitted_eager(n) for n in numbers.astype(np.int32)] #p2 = [is_prime_njitted_eager(n) for n in numbers.astype(np.float64)] ``` ## Calculating and plotting the [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set) ``` X, Y = np.meshgrid(np.linspace(-2.0, 1, 1000), np.linspace(-1.0, 1.0, 1000)) def mandelbrot(X, Y, itermax): mandel = np.empty(shape=X.shape, dtype=np.int32) for i in range(X.shape[0]): for j in range(X.shape[1]): it = 0 cx = X[i, j] cy = Y[i, j] x = 0.0 y = 0.0 while x * x + y * y < 4.0 and it < itermax: x, y = x * x - y * y + cx, 2.0 * x * y + cy it += 1 mandel[i, j] = it return mandel fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(111) %time m = mandelbrot(X, Y, 100) ax.imshow(np.log(1 + m), extent=[-2.0, 1, -1.0, 1.0]); ax.set_aspect('equal') ax.set_ylabel('Im[c]') ax.set_xlabel('Re[c]'); @numba.njit(parallel=True) def mandelbrot_jitted(X, Y, radius2, itermax): mandel = np.empty(shape=X.shape, dtype=np.int32) for i in numba.prange(X.shape[0]): for j in range(X.shape[1]): it = 0 cx = X[i, j] cy = Y[i, j] x = cx y = cy while x * x + y * y < 4.0 and it < itermax: x, y = x * x - y * y + cx, 2.0 * x * y + cy it += 1 mandel[i, j] = it return mandel fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(111) %time m = mandelbrot_jitted(X, Y, 4.0, 100) ax.imshow(np.log(1 + m), extent=[-2.0, 1, -1.0, 1.0]); ax.set_aspect('equal') ax.set_ylabel('Im[c]') ax.set_xlabel('Re[c]'); ``` ### Getting parallelization information ``` mandelbrot_jitted.parallel_diagnostics(level=3) ``` ## Creating `ufuncs` using `numba.vectorize` ``` from math import sin from numba import float64, int64 def my_numpy_sin(a, b): return np.sin(a) + np.sin(b) @np.vectorize def my_sin(a, b): return sin(a) + sin(b) @numba.vectorize([float64(float64, float64), int64(int64, int64)], target='parallel') def my_sin_numba(a, b): return np.sin(a) + np.sin(b) x = np.random.randint(0, 100, size=9000000) y = np.random.randint(0, 100, size=9000000) %time _ = my_numpy_sin(x, y) %time _ = my_sin(x, y) %time _ = my_sin_numba(x, y) ``` ### Vectorize the testing of prime numbers ``` @numba.vectorize('boolean(int64)') def is_prime_v(n): if n <= 1: raise ArithmeticError(f'"0" <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 10000000000, dtype=np.int64, size=100000) %time p = is_prime_v(numbers) ``` ### Parallelize the 
vectorized function ``` @numba.vectorize(['boolean(int64)', 'boolean(int32)'], target='parallel') def is_prime_vp(n): if n <= 1: raise ArithmeticError('n <= 1') if n == 2 or n == 3: return True elif n % 2 == 0: return False else: n_sqrt = math.ceil(math.sqrt(n)) for i in range(3, n_sqrt): if n % i == 0: return False return True numbers = np.random.randint(2, 10000000000, dtype=np.int64, size=100000) %time p1 = is_prime_v(numbers) %time p2 = is_prime_vp(numbers) # Print the largest primes between 1 and 10 million numbers = np.arange(1000000, 10000001, dtype=np.int32) %time p1 = is_prime_vp(numbers) primes = numbers[p1] for n in primes[-10:]: print(n) ```
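As a final cross-check on the parallel machinery, the same kind of primality test can also be driven from an explicitly parallel loop instead of a ufunc. The sketch below is illustrative only (the helper `count_primes` is not part of the original material); it relies on Numba treating the scalar `+=` on `count` as a reduction inside `numba.prange`.

```
@numba.njit(parallel=True)
def count_primes(numbers):
    count = 0
    for i in numba.prange(numbers.shape[0]):
        n = numbers[i]
        if n < 2 or (n % 2 == 0 and n != 2):
            is_p = False
        else:
            # odd trial division up to sqrt(n)
            is_p = True
            for k in range(3, int(math.sqrt(n)) + 1, 2):
                if n % k == 0:
                    is_p = False
                    break
        if is_p:
            count += 1  # recognised as a parallel reduction
    return count

numbers = np.random.randint(2, 10000000, dtype=np.int64, size=100000)
count_primes(numbers)  # first call triggers compilation
%time n_primes = count_primes(numbers)
print(n_primes)
```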
<a href="https://colab.research.google.com/github/google/neural-tangents/blob/master/notebooks/phase_diagram.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ### Imports & Utils ``` !pip install -q git+https://www.github.com/google/neural-tangents import jax.numpy as np from jax.experimental import optimizers from jax.api import grad, jit, vmap from jax import lax from jax.config import config config.update('jax_enable_x64', True) from functools import partial import neural_tangents as nt from neural_tangents import stax _Kernel = nt.utils.kernel.Kernel def Kernel(K): """Create an input Kernel object out of an np.ndarray.""" return _Kernel(cov1=np.diag(K), nngp=K, cov2=None, ntk=None, is_gaussian=True, is_reversed=False, diagonal_batch=True, diagonal_spatial=False, shape1=(K.shape[0], 1024), shape2=(K.shape[1], 1024), x1_is_x2=True, is_input=True, batch_axis=0, channel_axis=1) def fixed_point(f, initial_value, threshold): """Find fixed-points of a function f:R->R using Newton's method.""" g = lambda x: f(x) - x dg = grad(g) def cond_fn(x): x, last_x = x return np.abs(x - last_x) > threshold def body_fn(x): x, _ = x return x - g(x) / dg(x), x return lax.while_loop(cond_fn, body_fn, (initial_value, 0.0))[0] from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'svg') import matplotlib.pyplot as plt import seaborn as sns sns.set_style(style='white') def format_plot(x='', y='', grid=True): ax = plt.gca() plt.grid(grid) plt.xlabel(x, fontsize=20) plt.ylabel(y, fontsize=20) def finalize_plot(shape=(1, 1)): plt.gcf().set_size_inches( shape[0] * 1.5 * plt.gcf().get_size_inches()[1], shape[1] * 1.5 * plt.gcf().get_size_inches()[1]) plt.tight_layout() ``` # Phase Diagram We will reproduce the phase diagram described in [Poole et al.](https://papers.nips.cc/paper/6322-exponential-expressivity-in-deep-neural-networks-through-transient-chaos) and [Schoenholz et al.](https://arxiv.org/abs/1611.01232) using Neural Tangents. In these and subsequent papers, it was found that deep neural networks can exhibit a phase transition as a function of the variance of their weights ($\sigma_w^2$) and biases ($\sigma_b^2$). For networks with $\tanh$ activation functions, this phase transition is between an "ordered" phase and a "chaotic" phase. In the ordered phase, pairs of inputs collapse to a single point as they propagate through the network. By contrast, in the chaotic phase, nearby inputs become increasingly dissimilar in later layers of the network. This phase diagram is shown below. A number of properties of neural networks - such as trainability, mode-collapse, and maximum learing rate - have now been related to this phase diagram over many papers (recently e.g. [Yang et al.](https://arxiv.org/abs/1902.08129), [Jacot et al.](https://arxiv.org/abs/1907.05715), [Hayou et al.](https://arxiv.org/abs/1905.13654), and [Xiao et al.](https://arxiv.org/abs/1912.13053)). 
\ ![Phase Diagram](https://raw.githubusercontent.com/google/neural-tangents/master/notebooks/figures/pennington_phase_diagram.svg?sanitize=true) > Phase diagram for $\tanh$ neural networks (appeared in [Pennington et al.](https://arxiv.org/abs/1802.09979)). \ Consider two inputs to a neural network, $x_1$ and $x_2$, normalized such that $\|x_1\| = \|x_2\| = q^0$. We can compute the cosine-angle between the inputs, $c^0 = \cos\theta_{12} = \frac{x_1 \cdot x_2}{q^0}$. Additionally, we can keep track of the norm and cosine angle of the resulting pre-activations ($q^l$ and $c^l$ respectively) as signal passes through layers of the neural network. In the wide-network limit there are deterministic functions, called the $\mathcal Q$-map and the $\mathcal{C}$-map, such that $q^{l+1} = \mathcal Q(q^l)$ and $c^{l+1} = \mathcal C(q^l, c^l)$. \ In fully-connected networks with $\tanh$-like activation functions, both the $\mathcal Q$-map and $\mathcal C$-map have unique stable-fixed-points, $q^*$ and $c^*$, such that $q^* = \mathcal Q(q^*)$ and $c^* = \mathcal C(q^*, c^*)$. To simplify the discussion, we typically choose to normalize our inputs so that $q^0 = q^*$ and we can restrict our study to the $\mathcal C$-map. The $\mathcal C$-map always has a fixed point at $c^* = 1$ since two identical inputs will remain identical as they pass through the network. However, this fixed point is not always stable and two points that start out very close together will often separate. Indeed, the ordered and chaotic phases are characterized by the stability of the $c^* = 1$ fixed point. In the ordered phase $c^* = 1$ is stable and pairs of inputs converge to one another as they pass through the network. In the chaotic phase the $c^* = 1$ point is unstable and a new, stable, fixed point with $c^* < 1$ emerges. The phase boundary is defined as the point where $c^* = 1$ is marginally stable. \ To understand the stability of a fixed point, $c^*$, we will use the standard technique in Dynamical Systems theory and expand the $\mathcal C$-map in $\epsilon^l = c^l - c^*$ which implies that $\epsilon^{l+1} = \chi(c^*)\epsilon^l$ where $\chi = \frac{\partial\mathcal C}{\partial C}$. This implies that sufficiently close to a fixed point of the dynamics, $\epsilon^l = \chi(c^*)^l$. If $\chi(c^*) < 1$ then the fixed point is stable and points move towards the fixed point exponentially quickly. If $\chi(c^*) > 1$ then points move away from the fixed point exponentially quickly. This implies that the phase boundary, being defined by the marginal stability of $c^* = 1$, will be where $\chi_1 = \chi(1) = 1$. \ To reproduce these results in Neural Tangents, we notice first that the $\mathcal{C}$-map described above is intimately related to the NNGP kernel, $K^l$, of [Lee et al.](https://arxiv.org/abs/1711.00165), [Matthews et al.](https://arxiv.org/abs/1804.11271), and [Novak et al.](https://arxiv.org/abs/1810.05148). The core of Neural Tangents is a map $\mathcal T$ for a wide range of architectures such that $K^{l + 1} = \mathcal T(K^l)$. Since $C^l$ can be written in terms of the NNGP kernel as $C^l = K^l_{12} / q^*$ this implies that Neural Tangents provides a way of computing the $\mathcal{C}$-map for a wide range of network architectures. \ To produce the phase diagam above, we must compute $q^*$ and $c^*$ as well as $\chi_1$. We will use a fully-connected network with $\text{Erf}$ activation functions since they admit an analytic kernel function and are very similar to $\tanh$ networks. 
We will first define the $\mathcal Q$-map by noting that the $\mathcal Q$-map will be identical to $\mathcal T$ if the covariance matrix has only a single entry. We will use Newton's method to find $q^*$ given the $\mathcal Q$-map. Next we will use the relationship above to define the $\mathcal C$-map in terms of $\mathcal T$. We will again use Newton's method to find the stable $c^*$ fixed point. We can define $\chi$ by using JAX's automatic differentiation to compute the derivative of the $\mathcal C$-map. This can be written relatively concisely below. \ Note: this particular phase diagram holds for a wide range of neural networks but, emphatically, not for ReLUs. The ReLU phase diagram is somewhat different and could be investigated using Neural Tangents. However, we will save it for a followup notebook. ``` def c_map(W_var, b_var): W_std = np.sqrt(W_var) b_std = np.sqrt(b_var) # Create a single layer of a network as an affine transformation composed # with an Erf nonlinearity. kernel_fn = stax.serial(stax.Dense(1024, W_std, b_std), stax.Erf())[2] def q_map_fn(q): return kernel_fn(Kernel(np.array([[q]]))).nngp[0, 0] qstar = fixed_point(q_map_fn, 1.0, 1e-7) def c_map_fn(c): K = np.array([[qstar, qstar * c], [qstar * c, qstar]]) K_out = kernel_fn(Kernel(K)).nngp return K_out[1, 0] / qstar return c_map_fn c_star = lambda W_var, b_var: fixed_point(c_map(W_var, b_var), 0.1, 1e-7) chi = lambda c, W_var, b_var: grad(c_map(W_var, b_var))(c) chi_1 = partial(chi, 1.) ``` To generate the phase diagram above, we would like to compute the fixed-point correlation not only at a single value of $(\sigma_w^2,\sigma_b^2)$ but on a whole mesh. We can use JAX's `vmap` functionality to do this. Here we define vectorized versions of the above functions. ``` def vectorize_over_sw_sb(fn): # Vectorize over the weight variance. fn = vmap(fn, (0, None)) # Vectorize over the bias variance. fn = vmap(fn, (None, 0)) return fn c_star = jit(vectorize_over_sw_sb(c_star)) chi_1 = jit(vectorize_over_sw_sb(chi_1)) ``` We can use these functions to plot $c^*$ as a function of the weight and bias variance. As expected, we see a region where $c^* = 1$ and a region where $c^* < 1$. ``` W_var = np.arange(0, 3, 0.01) b_var = np.arange(0., 0.25, 0.001) plt.contourf(W_var, b_var, c_star(W_var, b_var)) plt.colorbar() plt.title('$C^*$ as a function of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1.15, 1)) ``` We can, of course, threshold on $c^*$ to get a cleaner definition of the phase diagram. ``` plt.contourf(W_var, b_var, c_star(W_var, b_var) > 0.999, levels=3, colors=[[1.0, 0.89, 0.811], [0.85, 0.85, 1]]) plt.title('Phase diagram in terms of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1, 1)) ``` As described above, the boundary between the two phases should be defined by $\chi_1(\sigma_w^2, \sigma_b^2) = 1$ where $\chi_1$ is given by the derivative of the $\mathcal C$-map. ``` plt.contourf(W_var, b_var, chi_1(W_var, b_var)) plt.colorbar() plt.title(r'$\chi^1$ as a function of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1.15, 1)) ``` We can see that the boundary where $\chi_1$ crosses 1 corresponds to the phase boundary we observe above. 
``` plt.contourf(W_var, b_var, c_star(W_var, b_var) > 0.999, levels=3, colors=[[1.0, 0.89, 0.811], [0.85, 0.85, 1]]) plt.contourf(W_var, b_var, np.abs(chi_1(W_var, b_var) - 1) < 0.003, levels=[0.5, 1], colors=[[0, 0, 0]]) plt.title('Phase diagram in terms of weight and bias variance', fontsize=14) format_plot('$\\sigma_w^2$', '$\\sigma_b^2$') finalize_plot((1, 1)) ```
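Because `c_star` and `chi_1` were vectorized over both variances, they can also be probed at individual parameter values. The quick check below is illustrative only (the probe values are arbitrary choices, not from the original notebook); by the argument above, $\chi_1 < 1$ signals the ordered phase and $\chi_1 > 1$ the chaotic phase.

```
W_probe = np.array([0.5, 2.5])    # two weight variances to probe
b_probe = np.array([0.05])        # a single bias variance
print(c_star(W_probe, b_probe))   # shape (1, 2): c* at each probed point
print(chi_1(W_probe, b_probe))    # chi_1 < 1: ordered phase, chi_1 > 1: chaotic phase
```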
### Generating names with a character-level RNN In this notebook we are going to follow up on the previous notebook, where we classified names' nationalities with a character-level RNN. This time around we are going to generate names using a character-level RNN. Example: _given a nationality and three starting characters we want to generate some names based on those characters_ We will be following [this pytorch tutorial](https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html). The difference from the previous notebook is that instead of predicting the class a name belongs to, we are going to output one letter at a time until we have generated a name. This can also be done at the word level, but in our case we will work at the character level. ### Data preparation The dataset that we are going to use was downloaded [here](https://download.pytorch.org/tutorial/data.zip). This dataset uses nationality as the file name, and inside each file are the names that belong to that nationality. I've uploaded this dataset to my Google Drive so that we can load it easily. ### Mounting the drive ``` from google.colab import drive drive.mount('/content/drive') data_path = '/content/drive/My Drive/NLP Data/names-dataset/names' ``` ### Imports ``` from __future__ import unicode_literals, print_function, division import os, string, unicodedata, random import torch from torch import nn from torch.nn import functional as F torch.__version__ all_letters = string.ascii_letters + " .,;'-" n_letters = len(all_letters) + 1 # Plus EOS marker ``` A function that converts all Unicode characters to ASCII. ``` def unicodeToAscii(s): return ''.join( c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn' and c in all_letters ) def read_lines(filename): with open(filename, encoding='utf-8') as some_file: return [unicodeToAscii(line.strip()) for line in some_file] # Build the category_lines dictionary, a list of lines per category category_lines = {} all_categories = [] for filename in os.listdir(data_path): category = filename.split(".")[0] all_categories.append(category) lines = read_lines(os.path.join(data_path, filename)) category_lines[category] = lines n_categories = len(all_categories) print('# categories:', n_categories, all_categories) ``` ### Creating the Network This network extends the one from the previous notebook with an extra argument for the category tensor, which is concatenated along with the others. The category tensor is a one-hot vector, just like the letter input. We will output the most probable letter and use it as the input for the next step.
![img](https://i.imgur.com/jzVrf7f.png) ``` class RNN(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RNN, self).__init__() self.hidden_size = hidden_size self.i2h = nn.Linear(n_categories + input_size + hidden_size, hidden_size) self.i2o = nn.Linear(n_categories + input_size + hidden_size, output_size) self.o2o = nn.Linear(hidden_size + output_size, output_size) self.dropout = nn.Dropout(0.1) self.softmax = nn.LogSoftmax(dim=1) def forward(self, category, input, hidden): input_combined = torch.cat((category, input, hidden), 1) hidden = self.i2h(input_combined) output = self.i2o(input_combined) output_combined = torch.cat((hidden, output), 1) output = self.o2o(output_combined) output = self.dropout(output) output = self.softmax(output) return output, hidden def initHidden(self): return torch.zeros(1, self.hidden_size) ``` ### Training First of all, helper functions to get random pairs of (category, line): ``` # Random item from a list def randomChoice(l): return l[random.randint(0, len(l) - 1)] # Get a random category and random line from that category def randomTrainingPair(): category = randomChoice(all_categories) line = randomChoice(category_lines[category]) return category, line line, cate = randomTrainingPair() line, cate ``` For each timestep (that is, for each letter in a training word) the inputs of the network will be ``(category, current letter, hidden state)`` and the outputs will be ``(next letter, next hidden state)``. So for each training set, we’ll need the category, a set of input letters, and a set of output/target letters. Since we are predicting the next letter from the current letter for each timestep, the letter pairs are groups of consecutive letters from the line - e.g. for `"ABCD<EOS>"` we would create (“A”, “B”), (“B”, “C”), (“C”, “D”), (“D”, “EOS”). ![img](https://i.imgur.com/JH58tXY.png) The category tensor is a one-hot tensor of size `<1 x n_categories>`. When training we feed it to the network at every timestep - this is a design choice, it could have been included as part of initial hidden state or some other strategy. ``` def category_tensor(category): li = all_categories.index(category) tensor = torch.zeros(1, n_categories) tensor[0][li] = 1 return tensor # out = 3 def input_tensor(line): tensor = torch.zeros(len(line), 1, n_letters) for li in range(len(line)): letter = line[li] tensor[li][0][all_letters.find(letter)] = 1 return tensor def target_tensor(line): letter_indexes = [all_letters.find(line[li]) for li in range(1, len(line))] letter_indexes.append(n_letters - 1) # EOS return torch.LongTensor(letter_indexes) ``` For convenience during training we’ll make a `randomTrainingExample` function that fetches a random (category, line) pair and turns them into the required (category, input, target) tensors. ``` # Make category, input, and target tensors from a random category, line pair def randomTrainingExample(): category, line = randomTrainingPair() category_t = category_tensor(category) input_line_tensor = input_tensor(line) target_line_tensor = target_tensor(line) return category_t, input_line_tensor, target_line_tensor ``` ### Training the Network In contrast to classification, where only the last output is used, we are making a prediction at every step, so we are calculating loss at every step. The magic of autograd allows you to simply sum these losses at each step and call backward at the end. 
``` criterion = nn.NLLLoss() learning_rate = 0.0005 def train(category_tensor, input_line_tensor, target_line_tensor): target_line_tensor.unsqueeze_(-1) hidden = rnn.initHidden() rnn.zero_grad() loss = 0 for i in range(input_line_tensor.size(0)): output, hidden = rnn(category_tensor, input_line_tensor[i], hidden) l = criterion(output, target_line_tensor[i]) loss += l loss.backward() for p in rnn.parameters(): p.data.add_(p.grad.data, alpha=-learning_rate) return output, loss.item() / input_line_tensor.size(0) ``` To keep track of how long training takes I am adding a `time_since(timestamp)` function which returns a human readable string: ``` import time, math def time_since(since): now = time.time() s = now - since m = math.floor(s / 60) s -= m * 60 return '%dm %ds' % (m, s) ``` Training is business as usual - call train a bunch of times and wait a few minutes, printing the current time and loss every `print_every` examples, and keeping store of an average loss per `plot_every` examples in `all_losses` for plotting later. ``` rnn = RNN(n_letters, 128, n_letters) n_iters = 100000 print_every = 5000 plot_every = 500 all_losses = [] total_loss = 0 # Reset every plot_every iters start = time.time() for iter in range(1, n_iters + 1): output, loss = train(*randomTrainingExample()) total_loss += loss if iter % print_every == 0: print('%s (%d %d%%) %.4f' % (time_since(start), iter, iter / n_iters * 100, loss)) if iter % plot_every == 0: all_losses.append(total_loss / plot_every) total_loss = 0 ``` ### Plotting the losses * Plotting the historical loss from all_losses shows the network learning: ``` import matplotlib.pyplot as plt plt.figure() plt.plot(all_losses) plt.show() ``` #### Sampling the network To sample we give the network a letter and ask what the next one is, feed that in as the next letter, and repeat until the `EOS` token. * Create tensors for input category, starting letter, and empty hidden state * Create a string output_name with the starting letter * Up to a maximum output length, * Feed the current letter to the network * Get the next letter from highest output, and next hidden state * If the letter is EOS, stop here * If a regular letter, add to output_name and continue * Return the final name > Rather than having to give it a starting letter, another strategy would have been to include a “start of string” token in training and have the network choose its own starting letter. 
``` max_length = 20 # Sample from a category and starting letter def sample(category, start_letter='A'): with torch.no_grad(): # no need to track history in sampling category_t = category_tensor(category) input = input_tensor(start_letter) hidden = rnn.initHidden() output_name = start_letter for i in range(max_length): output, hidden = rnn(category_t, input[0], hidden) topv, topi = output.topk(1) topi = topi[0][0] if topi == n_letters - 1: #eos break else: letter = all_letters[topi] output_name += letter input = input_tensor(letter) return output_name # Get multiple samples from one category and multiple starting letters def samples(category, start_letters='ABC'): for start_letter in start_letters: print(sample(category, start_letter)) samples('Russian', 'RUS') samples('German', 'GER') samples('Spanish', 'SPA') samples('Chinese', 'CHI') ``` ### Ref * [pytorch tutorial](https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html) * [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) * [The Unreasonable Effectiveness of Recurrent Neural Networks](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) ``` ```
# Random Signals *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* ## Independent Processes The independence of random signals is a desired property in many applications of statistical signal processing, as well as uncorrelatedness and orthogonality. The concept of independence is introduced in the following together with a discussion of the links to uncorrelatedness and orthogonality. ### Definition Two stochastic events are said to be [independent](https://en.wikipedia.org/wiki/Independence_(probability_theory%29) if the probability of occurrence of one event is not affected by the occurrence of the other event. Or more specifically, if their joint probability equals the product of their individual probabilities. In terms of the bivariate probability density function (PDF) of two continuous-amplitude real-valued random processes $x[k]$ and $y[k]$ this reads \begin{equation} p_{xy}(\theta_x, \theta_y, k_x, k_y) = p_x(\theta_x, k_x) \cdot p_y(\theta_y, k_y) \end{equation} where $p_x(\theta_x, k_x)$ and $p_y(\theta_y, k_y)$ denote the univariate ([marginal](https://en.wikipedia.org/wiki/Marginal_distribution)) PDFs of the random processes for the time-instances $k_x$ and $k_y$, respectively. The bivariate PDF of two independent random processes is given by the multiplication of their univariate PDFs. It follows that the [second-order ensemble average](ensemble_averages.ipynb#Second-Order-Ensemble-Averages) for a linear mapping is given as \begin{equation} E\{ x[k_x] \cdot y[k_y] \} = E\{ x[k_x] \} \cdot E\{ y[k_y] \} \end{equation} The linear second-order ensemble average of two independent random signals is equal to the multiplication of their linear first-order ensemble averages. For jointly wide-sense stationary (WSS) processes, the bivariate PDF does only depend on the difference $\kappa = k_x - k_y$ of the time instants. Hence, two jointly WSS random signals are independent if \begin{equation} \begin{split} p_{xy}(\theta_x, \theta_y, \kappa) &= p_x(\theta_x, k_x) \cdot p_y(\theta_y, k_x - \kappa) \\ &= p_x(\theta_x) \cdot p_y(\theta_y, \kappa) \end{split} \end{equation} Above bivariate PDF is rewritten using the definition of [conditional probabilities](https://en.wikipedia.org/wiki/Conditional_probability) in order to specialize the definition of independence to one WSS random signal $x[k]$ \begin{equation} p_{xy}(\theta_x, \theta_y, \kappa) = p_{y|x}(\theta_x, \theta_y, \kappa) \cdot p_x(\theta_x) \end{equation} where $p_{y|x}(\theta_x, \theta_y, \kappa)$ denotes the conditional probability that $y[k - \kappa]$ takes the amplitude value $\theta_y$ under the condition that $x[k]$ takes the amplitude value $\theta_x$. Under the assumption that $y[k-\kappa] = x[k-\kappa]$ and substituting $\theta_x$ and $\theta_y$ by $\theta_1$ and $\theta_2$, independence for one random signal is defined as \begin{equation} p_{xx}(\theta_1, \theta_2, \kappa) = \begin{cases} p_x(\theta_1) \cdot \delta(\theta_2 - \theta_1) & \text{for } \kappa = 0 \\ p_x(\theta_1) \cdot p_x(\theta_2, \kappa) & \text{for } \kappa \neq 0 \end{cases} \end{equation} since the conditional probability $p_{x[k]|x[k-\kappa]}(\theta_1, \theta_2, \kappa) = \delta(\theta_2 - \theta_1)$ for $\kappa = 0$ since this represents a sure event. 
The bivariate PDF of an independent random signal is equal to the product of the univariate PDFs of the signal and the time-shifted signal for $\kappa \neq 0$. A random signal for which this condition does not hold shows statistical dependencies between samples. These dependencies can be exploited for instance for coding or prediction. #### Example - Comparison of bivariate PDF and product of marginal PDFs The following example estimates the bivariate PDF $p_{xx}(\theta_1, \theta_2, \kappa)$ of a WSS random signal $x[k]$ by computing its two-dimensional histogram. The univariate PDFs $p_x(\theta_1)$ and $p_x(\theta_2, \kappa)$ are additionally estimated. Both the estimated bivariate PDF and the product of the two univariate PDFs $p_x(\theta_1) \cdot p_x(\theta_2, \kappa)$ are plotted for different $\kappa$. ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline N = 10000000 # number of random samles M = 50 # number of bins for bivariate/marginal histograms def compute_plot_histograms(kappa): # shift signal x2 = np.concatenate((x1[kappa:], np.zeros(kappa))) # compute bivariate and marginal histograms pdf_xx, x1edges, x2edges = np.histogram2d(x1, x2, bins=(M,M), range=((-1.5, 1.5),(-1.5, 1.5)), normed=True) pdf_x1, _ = np.histogram(x1, bins=M, range=(-1.5, 1.5), density=True) pdf_x2, _ = np.histogram(x2, bins=M, range=(-1.5, 1.5), density=True) # plot results fig = plt.figure(figsize=(10, 10)) plt.subplot(121, aspect='equal') plt.pcolormesh(x1edges, x2edges, pdf_xx) plt.xlabel(r'$\theta_1$') plt.ylabel(r'$\theta_2$') plt.title(r'Bivariate PDF $p_{{xy}}(\theta_1, \theta_2, \kappa)$') plt.colorbar(fraction=0.046) plt.subplot(122, aspect='equal') plt.pcolormesh(x1edges, x2edges, np.outer(pdf_x1, pdf_x2)) plt.xlabel(r'$\theta_1$') plt.ylabel(r'$\theta_2$') plt.title(r'Product of PDFs $p_x(\theta_1) \cdot p_x(\theta_2, \kappa)$') plt.colorbar(fraction=0.046) fig.suptitle('Shift $\kappa =$ {:<2.0f}'.format(kappa), y=0.72) fig.tight_layout() # generate signal x = np.random.normal(size=N) x1 = np.convolve(x, [1, .5, .3, .7, .3], mode='same') # compute and plot the PDFs for various shifts compute_plot_histograms(0) compute_plot_histograms(2) compute_plot_histograms(20) ``` **Exercise** * With the given results, how can you evaluate the independence of the random signal? * Can the random signal assumed to be independent? Solution: According to the definition of independence, the bivariate PDF and the product of the univariate PDFs has to be equal for $\kappa \neq 0$. This is obviously not the case for $\kappa=2$. Hence, the random signal is not independent in a strict sense. However for $\kappa=20$ the condition for independence is sufficiently fulfilled, considering the statistical uncertainty due to a finite number of samples. ### Independence versus Uncorrelatedness Two continuous-amplitude real-valued jointly WSS random processes $x[k]$ and $y[k]$ are termed as [uncorrelated](correlation_functions.ipynb#Properties) if their cross-correlation function (CCF) is equal to the product of their linear means, $\varphi_{xy}[\kappa] = \mu_x \cdot \mu_y$. If two random signals are independent then they are also uncorrelated. 
This can be proven by introducing above findings for the linear second-order ensemble average of independent random signals into the definition of the CCF \begin{equation} \varphi_{xy}[\kappa] = E \{ x[k] \cdot y[k - \kappa] \} = E \{ x[k] \} \cdot E \{ y[k - \kappa] \} = \mu_x \cdot \mu_y \end{equation} where the last equality is a consequence of the assumed wide-sense stationarity. The reverse, that two uncorrelated signals are also independent does not hold in general from this result. The auto-correlation function (ACF) of an [uncorrelated signal](correlation_functions.ipynb#Properties) is given as $\varphi_{xx}[\kappa] = \mu_x^2 + \sigma_x^2 \cdot \delta[\kappa]$. Introducing the definition of independence into the definition of the ACF yields \begin{equation} \begin{split} \varphi_{xx}[\kappa] &= E \{ x[k] \cdot x[k - \kappa] \} \\ &= \begin{cases} E \{ x^2[k] \} & \text{for } \kappa = 0 \\ E \{ x[k] \} \cdot E \{ x[k - \kappa] \} & \text{for } \kappa \neq 0 \end{cases} \\ &= \begin{cases} \mu_x^2 + \sigma_x^2 & \text{for } \kappa = 0 \\ \mu_x^2 & \text{for } \kappa \neq 0 \end{cases} \\ &= \mu_x^2 + \sigma_x^2 \delta[\kappa] \end{split} \end{equation} where the result for $\kappa = 0$ follows from the bivariate PDF $p_{xx}(\theta_1, \theta_2, \kappa)$ of an independent signal, as derived above. It can be concluded from this result that an independent random signal is also uncorrelated. The reverse, that an uncorrelated signal is independent does not hold in general. ### Independence versus Orthogonality In geometry, two vectors are said to be [orthogonal](https://en.wikipedia.org/wiki/Orthogonality) if their dot product equals zero. This definition is frequently applied to finite-length random signals by interpreting them as vectors. The relation between independence, correlatedness and orthogonality is derived in the following. Let's assume two continuous-amplitude real-valued jointly wide-sense ergodic random signals $x_N[k]$ and $y_M[k]$ with finite lengths $N$ and $M$, respectively. The CCF $\varphi_{xy}[\kappa]$ between both can be reformulated as follows \begin{equation} \begin{split} \varphi_{xy}[\kappa] &= \frac{1}{N} \sum_{k=0}^{N-1} x_N[k] \cdot y_M[k-\kappa] \\ &= \frac{1}{N} < \mathbf{x}_N, \mathbf{y}_M[\kappa] > \end{split} \end{equation} where $<\cdot, \cdot>$ denotes the [dot product](https://en.wikipedia.org/wiki/Dot_product). The $(N+2M-2) \times 1$ vector $\mathbf{x}_N$ is defined as $$\mathbf{x}_N = \left[ \mathbf{0}^T_{(M-1) \times 1}, x[0], x[1], \dots, x[N-1], \mathbf{0}^T_{(M-1) \times 1} \right]^T$$ where $\mathbf{0}_{(M-1) \times 1}$ denotes the zero vector of length $M-1$. The $(N+2M-2) \times 1$ vector $\mathbf{y}_M[\kappa]$ is defined as $$\mathbf{y}_M = \left[ \mathbf{0}^T_{\kappa \times 1}, y[0], y[1], \dots, y[M-1], \mathbf{0}^T_{(N+M-2-\kappa) \times 1} \right]^T$$ It follows from above definition of orthogonality that two finite-length random signals are orthogonal if their CCF is zero. This implies that at least one of the two signals has to be mean free. It can be concluded further that two independent random signals are also orthogonal and uncorrelated if at least one of them is mean free. The reverse, that orthogonal signals are independent, does not hold in general. The concept of orthogonality can also be extended to one random signal by setting $\mathbf{y}_M[\kappa] = \mathbf{x}_N[\kappa]$. Since a random signal cannot be orthogonal to itself for $\kappa = 0$, the definition of orthogonality has to be extended for this case. 
According to the ACF of a mean-free uncorrelated random signal $x[k]$, self-orthogonality may be defined as \begin{equation} \frac{1}{N} < \mathbf{x}_N, \mathbf{x}_N[\kappa] > = \begin{cases} \sigma_x^2 & \text{for } \kappa = 0 \\ 0 & \text{for } \kappa \neq 0 \end{cases} \end{equation} An independent random signal is also orthogonal if it is zero-mean. The reverse, that an orthogonal signal is independent does not hold in general. #### Example - Computation of cross-correlation by dot product This example illustrates the computation of the CCF by the dot product. First, a function is defined which computes the CCF by means of the dot product ``` def ccf_by_dotprod(x, y): N = len(x) M = len(y) xN = np.concatenate((np.zeros(M-1), x, np.zeros(M-1))) yM = np.concatenate((y, np.zeros(N+M-2))) return np.fromiter([np.dot(xN, np.roll(yM, kappa)) for kappa in range(N+M-1)], float) ``` Now the CCF is computed using different methods: computation by the dot product and by the built-in correlation function. The CCF is plotted for the computation by the dot product, as well as the difference (magnitude) between both methods. The resulting difference is in the typical expected range due to numerical inaccuracies. ``` N = 32 # length of signals # generate signals np.random.seed(1) x = np.random.normal(size=N) y = np.convolve(x, [1, .5, .3, .7, .3], mode='same') # compute CCF ccf1 = 1/N * np.correlate(x, y, mode='full') ccf2 = 1/N * ccf_by_dotprod(x, y) kappa = np.arange(-N+1, N) # plot results plt.figure(figsize=(10, 4)) plt.subplot(121) plt.stem(kappa, ccf1) plt.xlabel('$\kappa$') plt.ylabel(r'$\varphi_{xy}[\kappa]$') plt.title('CCF by dot product') plt.grid() plt.subplot(122) plt.stem(kappa, np.abs(ccf1-ccf2)) plt.xlabel('$\kappa$') plt.title('Difference (magnitude)') plt.tight_layout() ``` **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
This notebook shows the MEP quickstart sample, which also exists as a non-notebook version at: https://bitbucket.org/vitotap/python-spark-quickstart It shows how to use Spark (http://spark.apache.org/) for distributed processing on the PROBA-V Mission Exploitation Platform. (https://proba-v-mep.esa.int/) The sample intentionally implements a very simple computation: for each PROBA-V tile in a given bounding box and time range, a histogram is computed. The results are then summed and printed. Computation of the histograms runs in parallel. ## First step: get file paths A catalog API is available to easily retrieve paths to PROBA-V files: https://readthedocs.org/projects/mep-catalogclient/ ``` from catalogclient import catalog cat=catalog.Catalog() cat.get_producttypes() date = "2016-01-01" products = cat.get_products('PROBAV_L3_S1_TOC_333M', fileformat='GEOTIFF', startdate=date, enddate=date, min_lon=0, max_lon=10, min_lat=36, max_lat=53) #extract NDVI geotiff files from product metadata files = [p.file('NDVI')[5:] for p in products] print('Found '+str(len(files)) + ' files.') print(files[0]) #check if file exists !file {files[0]} ``` ## Second step: define function to apply Define the histogram function, this can also be done inline, which allows for a faster feedback loop when writing the code, but here we want to clearly separate the processing 'algorithm' from the parallelization code. ``` # Calculates the histogram for a given (single band) image file. def histogram(image_file): import numpy as np import gdal # Open image file img = gdal.Open(image_file) if img is None: print( '-ERROR- Unable to open image file "%s"' % image_file ) # Open raster band (first band) raster = img.GetRasterBand(1) xSize = img.RasterXSize ySize = img.RasterYSize # Read raster data data = raster.ReadAsArray(0, 0, xSize, ySize) # Calculate histogram hist, _ = np.histogram(data, bins=256) return hist ``` ## Third step: setup Spark To work on the processing cluster, we need to specify the resources we want: * spark.executor.cores: Number of cores per executor. Usually our tasks are single threaded, so 1 is a good default. * spark.executor.memory: memory to assign per executor. For the Java/Spark processing, not the Python part. * spark.yarn.executor.memoryOverhead: memory available for Python in each executor. We set up the SparkConf with these parameters, and create a SparkContext sc, which will be our access point to the cluster. ``` %%time # ================================================================ # === Calculate the histogram for a given number of files. The === # === processing is performed by spreading them over a cluster === # === of Spark nodes. === # ================================================================ from datetime import datetime from operator import add import pyspark import os # Setup the Spark cluster conf = pyspark.SparkConf() conf.set('spark.yarn.executor.memoryOverhead', 512) conf.set('spark.executor.memory', '512m') sc = pyspark.SparkContext(conf=conf) ``` ## Fourth step: compute histograms We use a couple of Spark functions to run our job on the cluster. Comments are provided in the code. ``` %%time # Distribute the local file list over the cluster. filesRDD = sc.parallelize(files,len(files)) # Apply the 'histogram' function to each filename using 'map', keep the result in memory using 'cache'. 
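# Note: Spark transformations such as 'map' are evaluated lazily, so nothing is
# computed (and nothing is cached) until an action like 'count' or 'reduce' below
# forces the evaluation.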
hists = filesRDD.map(histogram).cache() count = hists.count() # Combine distributed histograms into a single result total = list(hists.reduce(lambda h, i: map(add, h, i))) hists.unpersist() print( "Sum of %i histograms: %s" % (count, total)) # stop the Spark session if we no longer need it sc.stop() ``` ## Fifth step: plot our result Plot the array of values as a simple line chart using matplotlib. This is the most basic Python plotting library. More advanced options such as bokeh, mpld3 and seaborn are also available. ``` %matplotlib inline import matplotlib.pyplot as plt plt.plot(total) plt.show() ```
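As an illustrative extra step (not part of the original quickstart), the bin counts often span several orders of magnitude, so a logarithmic y-axis can make the tail of the histogram easier to see:

```
plt.semilogy(total)
plt.xlabel('bin')
plt.ylabel('pixel count')
plt.show()
```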
# Attempting to load higher order ASPECT elements An initial attempt at loading higher order element output from ASPECT. The VTU files have elements with a VTU type of `VTK_LAGRANGE_HEXAHEDRON` (VTK ID number 72, https://vtk.org/doc/nightly/html/classvtkLagrangeHexahedron.html#details), corresponding to 2nd order (quadratic) hexahedron, resulting in 27 nodes. Some useful links about this type of FEM output: * https://blog.kitware.com/modeling-arbitrary-order-lagrange-finite-elements-in-the-visualization-toolkit/ * https://github.com/Kitware/VTK/blob/0ce0d74e67927fd964a27c045d68e2f32b5f65f7/Common/DataModel/vtkCellType.h#L112 * https://github.com/ju-kreber/paraview-scripts * https://doi.org/10.1016/B978-1-85617-633-0.00006-X * https://discourse.paraview.org/t/about-high-order-non-traditional-lagrange-finite-element/1577/4 * https://gitlab.kitware.com/vtk/vtk/-/blob/7a0b92864c96680b1f42ee84920df556fc6ebaa3/Common/DataModel/vtkHigherOrderInterpolation.cxx At present, tis notebook requires the `vtu72` branch on the `meshio` fork at https://github.com/chrishavlin/meshio/pull/new/vtu72 to attempt to load the `VTK_LAGRANGE_HEXAHEDRON` output. As seen below, the data can be loaded with the general `unstructured_mesh_loader` but `yt` can not presently handle higher order output. ``` import os, yt, numpy as np import xmltodict, meshio DataDir=os.path.join(os.environ.get('ASPECTdatadir','../'),'litho_defo_sample','data') pFile=os.path.join(DataDir,'solution-00002.pvtu') if os.path.isfile(pFile) is False: print("data file not found") class pvuFile(object): def __init__(self,file,**kwargs): self.file=file self.dataDir=kwargs.get('dataDir',os.path.split(file)[0]) with open(file) as data: self.pXML = xmltodict.parse(data.read()) # store fields for convenience self.fields=self.pXML['VTKFile']['PUnstructuredGrid']['PPointData']['PDataArray'] def load(self): conlist=[] # list of 2D connectivity arrays coordlist=[] # global, concatenated coordinate array nodeDictList=[] # list of node_data dicts, same length as conlist con_offset=-1 for mesh_id,src in enumerate(self.pXML['VTKFile']['PUnstructuredGrid']['Piece']): mesh_name="connect{meshnum}".format(meshnum=mesh_id+1) # connect1, connect2, etc. 
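# yt names the meshes of a multi-piece unstructured dataset connect1, connect2, ...,
# so the node_data keys built in parseNodeData reuse these names to line up with
# yt's field naming.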
srcFi=os.path.join(self.dataDir,src['@Source']) # full path to .vtu file [con,coord,node_d]=self.loadPiece(srcFi,mesh_name,con_offset+1) con_offset=con.max() conlist.append(con.astype("i8")) coordlist.extend(coord.astype("f8")) nodeDictList.append(node_d) self.connectivity=conlist self.coordinates=np.array(coordlist) self.node_data=nodeDictList def loadPiece(self,srcFi,mesh_name,connectivity_offset=0): # print(srcFi) meshPiece=meshio.read(srcFi) # read it in with meshio coords=meshPiece.points # coords and node_data are already global connectivity=meshPiece.cells_dict['lagrange_hexahedron'] # 2D connectivity array # parse node data node_data=self.parseNodeData(meshPiece.point_data,connectivity,mesh_name) # offset the connectivity matrix to global value connectivity=np.array(connectivity)+connectivity_offset return [connectivity,coords,node_data] def parseNodeData(self,point_data,connectivity,mesh_name): # for each field, evaluate field data by index, reshape to match connectivity con1d=connectivity.ravel() conn_shp=connectivity.shape comp_hash={0:'cx',1:'cy',2:'cz'} def rshpData(data1d): return np.reshape(data1d[con1d],conn_shp) node_data={} for fld in self.fields: nm=fld['@Name'] if nm in point_data.keys(): if '@NumberOfComponents' in fld.keys() and int(fld['@NumberOfComponents'])>1: # we have a vector, deal with components for component in range(int(fld['@NumberOfComponents'])): comp_name=nm+'_'+comp_hash[component] # e.g., velocity_cx m_F=(mesh_name,comp_name) # e.g., ('connect1','velocity_cx') node_data[m_F]=rshpData(point_data[nm][:,component]) else: # just a scalar! m_F=(mesh_name,nm) # e.g., ('connect1','T') node_data[m_F]=rshpData(point_data[nm]) return node_data pvuData=pvuFile(pFile) pvuData.load() ``` So it loads... `meshio`'s treatment of high order elements is not complicated: it assumes the same number of nodes per elements and just reshapes the 1d connectivity array appropriately. In this case, a single element has 27 nodes: ``` pvuData.connectivity[0].shape ``` And yes, it can load: ``` ds4 = yt.load_unstructured_mesh( pvuData.connectivity, pvuData.coordinates, node_data = pvuData.node_data ) ``` but the plots are don't actually take advantage of all the data, noted by the warning when slicing: "High order elements not yet supported, dropping to 1st order." ``` p=yt.SlicePlot(ds4, "x", ("all", "T")) p.set_log("T",False) p.show() ``` This run is a very high aspect ratio cartesian simulation so let's rescale the coords first and then reload (**TO DO** look up how to do this with *yt* after loading the data...) ``` def minmax(x): return [x.min(),x.max()] for idim in range(0,3): print([idim,minmax(pvuData.coordinates[:,idim])]) # some artificial rescaling for idim in range(0,3): pvuData.coordinates[:,idim]=pvuData.coordinates[:,idim] / pvuData.coordinates[:,idim].max() ds4 = yt.load_unstructured_mesh( pvuData.connectivity, pvuData.coordinates, node_data = pvuData.node_data ) p=yt.SlicePlot(ds4, "x", ("all", "T")) p.set_log("T",False) p.show() ``` To use all the data, we need to add a new element mapping for sampling these elements (see `yt/utilities/lib/element_mappings.pyx`). These element mappings can be automatically generated using a symbolic math library, e.g., `sympy`. See `ASPECT_VTK_quad_hex_mapping.ipynb`
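As a quick illustrative cross-check (not part of the original notebook), the 27 nodes per element are consistent with degree-2 Lagrange hexahedra, since a degree-p Lagrange hex carries (p+1)^3 nodes:

```
n_nodes = pvuData.connectivity[0].shape[1]  # nodes per element (27 here)
p = round(n_nodes ** (1 / 3)) - 1           # inferred polynomial degree
print(n_nodes, p, (p + 1) ** 3)             # expect: 27 2 27
```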
# Classification metrics Author: Geraldine Klarenberg Based on the Google Machine Learning Crash Course ## Thresholds In previous lessons, we have talked about using regression models to predict values. But sometimes we are interested in **classifying** things: "spam" vs "not spam", "bark" vs "not barking", etc. Logistic regression is a great tool to use in ML classification models. We can use the outputs from these models by defining **classification thresholds**. For instance, if our model tells us there's a probability of 0.8 that an email is spam (based on some characteristics), the model classifies it as such. If the probability estimate is less than 0.8, the model classifies it as "not spam". The threshold allows us to map a logistic regression value to a binary category (the prediction). Thresholds are problem-dependent, so they will have to be tuned for the specific problem you are dealing with. In this lesson we will look at metrics you can use to evaluate a classification model's predictions, and what changing the threshold does to your model and predictions. ## True, false, positive, negative... Now, we could simply look at "accuracy": the ratio of all correct predictions to all predictions. This is simple, intuitive and straightforward. But there are some problems with this approach: * This approach does not work well if there is (class) imbalance; situations where certain negative or positive values or outcomes are rare; * and, most importantly: different kinds of mistakes can have different costs... ### The boy who cried wolf... We all know the story! ![Illustration of the boy who cried wolf](../nb-images/wolfpic.jpg) For this example, we define "there actually is a wolf" as a positive class, and "there is no wolf" as a negative class. The predictions that a model makes can be true or false for both classes, generating 4 outcomes: ![A table showing a confusion matrix based on the story of the boy who cried wolf](../nb-images/confusionmatrix_wolf.png) This table is also called a *confusion matrix*. There are 2 metrics we can derive from these outcomes: precision and recall. ## Precision Precision asks the question: what proportion of the positive predictions was actually correct? To calculate the precision of your model, take all true positives divided by *all* positive predictions: $$\text{Precision} = \frac{TP}{TP+FP}$$ Basically: **did the model cry 'wolf' too often or too little?** **NB** If your model produces no false positives, the precision is 1.0. The more false positives the model produces, the lower the precision gets; it can never exceed 1. ### Exercise Calculate the precision of a model with the following outcomes true positives (TP): 1 | false positives (FP): 1 -------|-------- **false negatives (FN): 8** | **true negatives (TN): 90** ## Recall Recall tries to answer the question: what proportion of actual positives was identified correctly? To calculate recall, divide all true positives by the true positives plus the false negatives: $$\text{Recall} = \frac{TP}{TP+FN}$$ Basically: **how many wolves that tried to get into the village did the model actually get?** **NB** If the model produces no false negatives, recall equals 1.0. ### Exercise For the same confusion matrix as above, calculate the recall. ## Balancing precision and recall To evaluate your model, you should look at **both** precision and recall. They are often in tension though: improving one reduces the other.
Lowering the classification treshold improves recall (your model will call wolf at every little sound it hears) but will negatively affect precision (it will call wolf too often). ### Exercise #### Part 1 Look at the outputs of a model that classifies incoming emails as "spam" or "not spam". ![Image of outcomes of a spam/not spam classification model](../nb-images/PrecisionVsRecallBase.svg) The confusion matrix looks as follows true positives (TP): 8 | false positives (FP): 2 -------|-------- **false negatives (FN): 3** | **true negatives (TN): 17** Calculate the precision and recall for this model. #### Part 2 Now see what happens to the outcomes (below) if we increase the threshold ![Image of outcomes of a spam/not spam classification model](../nb-images/PrecisionVsRecallRaiseThreshold.svg) The confusion matrix looks as follows true positives (TP): 7 | false positives (FP): 4 -------|-------- **false negatives (FN): 1** | **true negatives (TN): 18** Calculate the precision and recall again. **Compare the precision and recall from the first and second model. What do you notice?** ## Evaluate model performance We can evaluate the performance of a classification model at all classification thresholds. For all different thresholds, calculate the *true positive rate* and the *false positive rate*. The true positive rate is synonymous with recall (and sometimes called *sensitivity*) and is thus calculated as $ TPR = \frac{TP} {TP + FN} $ False positive rate (sometimes called *specificity*) is: $ FPR = \frac{FP} {FP + TN} $ When you plot the pairs of TPR and FPR for all the different thresholds, you get a Receiver Operating Characteristics (ROC) curve. Below is a typical ROC curve. ![Image of an ROC curve](../nb-images/ROCCurve.svg) To evaluate the model, we look at the area under the curve (AUC). The AUC has a probabilistic interpretation: it represents the probability that a random positive (green) example is positioned to the right of a random negative (red) example. ![Image with predictions ranked according to logistic regression score](../nb-images/AUCPredictionsRanked.svg) So if that AUC is 0.9, that's the probability the pair-wise prediction is correct. Below are a few visualizations of AUC results. On top are the distributions of the outcomes of the negative and positive outcomes at various thresholds. Below is the corresponding ROC. ![Image with distributions of positive and negative classes - perfect](../nb-images/TowardsDataScienceAUC_perfect.png) ![Image with AUC - perfect](../nb-images/TowardsDataScienceAUC_perfect2.png) **This AUC suggests a perfect model** (which is suspicious!) ![Image with distributions of positive and negative classes - normal](../nb-images/TowardsDataScienceAUC_normal.png) ![Image with AUC - normal](../nb-images/TowardsDataScienceAUC_normal2.png) **This is what most AUCs look like**. In this case, AUC = 0.7 means that there is 70% chance the model will be able to distinguish between positive and negative classes. ![Image with distributions of positive and negative classes - worst](../nb-images/TowardsDataScienceAUC_worst.png) ![Image with AUC - worst](../nb-images/TowardsDataScienceAUC_worst2.png) **This is actually the worst case scenario.** This model has no discrimination capacity at all... ## Prediction bias Logistic regression should be unbiased, meaning that the average of the predictions should be more or less equal to the average of the observations. 
**Prediction bias** is the difference between the average of the predictions and the average of the labels in a data set. This approach is not perfect, e.g. if your model almost always predicts the average there will not be much bias. However, if there **is** bias ("significant nonzero bias"), that means there is something something going on that needs to be checked, specifically that the model is wrong about the frequency of positive labels. Possible root causes of prediction bias are: * Incomplete feature set * Noisy data set * Buggy pipeline * Biased training sample * Overly strong regularization ### Buckets and prediction bias For logistic regression, this process is a bit more involved, as the labels assigned to an examples are either 0 or 1. So you cannot accurately predict the prediction bias based on one example. You need to group data in "buckets" and examine the prediction bias on that. Prediction bias for logistic regression only makes sense when grouping enough examples together to be able to compare a predicted value (for example, 0.392) to observed values (for example, 0.394). You can create buckets by linearly breaking up the target predictions, or create quantiles. The plot below is a calibration plot. Each dot represents a bucket with 1000 values. On the x-axis we have the average value of the predictions for that bucket and on the y-axis the average of the actual observations. Note that the axes are on logarithmic scales. ![Image of a calibration plot with buckets](../nb-images/BucketingBias.svg) ## Coding Recall the logistic regression model we made in the previous lesson. That was a perfect fit, so not that useful when we look at the metrics we just discussed. In the cloud plot with the sepal length and petal width plotted against each other, it is clear that the other two iris species are less separated. Let's use one of these as an example. We'll rework the example so we're classifying irises for being "virginica" or "not virginica". ``` import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import load_iris import pandas as pd iris = load_iris() X = iris.data y = iris.target df = pd.DataFrame(X, columns = ['sepal_length(cm)', 'sepal_width(cm)', 'petal_length(cm)', 'petal_width(cm)']) df['species_id'] = y species_map = {0: 'setosa', 1: 'versicolor', 2: 'virginica'} df['species_name'] = df['species_id'].map(species_map) df.head() ``` Now extract the data we need and create the necessary dataframes again. ``` X = np.c_[X[:,0], X[:,3]] y = [] for i in range(len(X)): if i > 99: y.append(1) else: y.append(0) y = np.array(y) plt.scatter(X[:,0], X[:,1], c = y) ``` Create our test and train data, and run a model. The default classification threshold is 0.5. If the predicted probability is > 0.5, the predicted result is 'virgnica'. If it is < 0.5, the predicted result is 'not virginica'. ``` random = np.random.permutation(len(X)) x_train = X[random][30:] x_test = X[random][:30] y_train= y[random][30:] y_test = y[random][:30] from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression() log_reg.fit(x_train,y_train) ``` Instead of looking at the probabilities and the plot, like in the last lesson, let's run some classification metrics on the training dataset. If you use ".score", you get the mean accuracy. ``` log_reg.score(x_train, y_train) ``` Let's predict values and see what this ouput means and how we can look at other metrics. 
``` predictions = log_reg.predict(x_train) predictions, y_train ``` There is a way to look at the confusion matrix. The output that is generated has the same structure as the confusion matrices we showed earlier: true positives (TP) | false positives (FP) -------|-------- **false negatives (FN)** | **true negatives (TN)** ``` from sklearn.metrics import confusion_matrix confusion_matrix(y_train, predictions) ``` Indeed, for the accuracy calculation: we predicted 81 + 33 = 114 correct (true positives and true negatives), and 114/120 (remember, our training data had 120 points) = 0.95. There is also a function to calculate recall and precision: Since we also have a testing data set, let's see what the metrics look like for that. ``` from sklearn.metrics import recall_score recall_score(y_train, predictions) from sklearn.metrics import precision_score precision_score(y_train, predictions) ``` And, of course, there are also built-in functions to check the ROC curve and AUC! For these functions, the inputs are the labels of the original dataset and the predicted probabilities (- not the predicted labels -> **why?**). Remember what the two columns mean? ``` proba_virginica = log_reg.predict_proba(x_train) proba_virginica[0:10] from sklearn.metrics import roc_curve fpr_model, tpr_model, thresholds_model = roc_curve(y_train, proba_virginica[:,1]) fpr_model tpr_model thresholds_model ``` Plot the ROC curve as follows ``` plt.plot(fpr_model, tpr_model,label='our model') plt.plot([0,1],[0,1],label='random') plt.plot([0,0,1,1],[0,1,1,1],label='perfect') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() ``` The AUC: ``` from sklearn.metrics import roc_auc_score auc_model = roc_auc_score(y_train, proba_virginica[:,1]) auc_model ``` You can use the ROC and AUC metric to evaluate competing models. Many people prefer to use these metrics to analyze each model’s performance because it does not require selecting a threshold and helps balance true positive rate and false positive rate. Now let's do the same thing for our test data (but again, this dataset is fairly small, and K-fold cross-validation is recommended). ``` log_reg.score(x_test, y_test) predictions = log_reg.predict(x_test) predictions, y_test confusion_matrix(y_test, predictions) recall_score(y_test, predictions) precision_score(y_test, predictions) proba_virginica = log_reg.predict_proba(x_test) fpr_model, tpr_model, thresholds_model = roc_curve(y_test, proba_virginica[:,1]) plt.plot(fpr_model, tpr_model,label='our model') plt.plot([0,1],[0,1],label='random') plt.plot([0,0,1,1],[0,1,1,1],label='perfect') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() auc_model = roc_auc_score(y_test, proba_virginica[:,1]) auc_model ``` Learn more about the logistic regression function and options at https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
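Finally, as an illustrative extra (not part of the original lesson), the prediction bias discussed earlier can be estimated on the test set by comparing the average predicted probability for 'virginica' with the average observed label:

```
# Prediction bias: average prediction minus average label (should be close to 0)
pred_bias = proba_virginica[:, 1].mean() - y_test.mean()
pred_bias
```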
<a href="https://colab.research.google.com/github/bitprj/Bitcamp-DataSci/blob/master/Week1-Introduction-to-Python-_-NumPy/Intro_to_Python_plus_NumPy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <img src="https://github.com/bitprj/Bitcamp-DataSci/blob/master/Week1-Introduction-to-Python-_-NumPy/assets/icons/bitproject.png?raw=1" width="200" align="left"> <img src="https://github.com/bitprj/Bitcamp-DataSci/blob/master/Week1-Introduction-to-Python-_-NumPy/assets/icons/data-science.jpg?raw=1" width="300" align="right"> # Introduction to Python ### Table of Contents - Why, Where, and How we use Python - What we will be learning today - Goals - Numbers - Types of Numbers - Basic Arithmetic - Arithmetic Continued - Variable Assignment - Strings - Creating Strings - Printing Strings - String Basics - String Properties - Basic Built-In String Methods - Print Formatting - **1.0 Now Try This** - Booleans - Lists - Creating Lists - Basic List Methods - Nesting Lists - List Comprehensions - **2.0 Now Try This** - Tuples - Constructing Tuples - Basic Tuple Methods - Immutability - When To Use Tuples - **3.0 Now Try This** - Dictionaries - Constructing a Dictionary - Nesting With Dictionaries - Dictionary Methods - **4.0 Now Try This** - Comparison Operators - Functions - Intro to Functions - `def` Statements - Examples - Using `return` - **5.0 Now Try This** - Modules and Packages - Overview - NumPy - Creating Arrays - Indexing - Slicing - **6.0 Now Try This** - Data Types - **7.0 Now Try This** - Copy vs. View - **8.0 Now Try This** - Shape - **9.0 Now Try This** - Iterating Through Arrays - Joining Arrays - Splitting Arrays - Searching Arrays - Sorting Arrays - Filtering Arrays - **10.0 Now Try This** - Resources ## Why, Where, and How we use Python Python is a very popular scripting language that you can use to create applications and programs of all sizes and complexity. It is very easy to learn and has very little syntax, making it very efficient to code with. Python is also the language of choice for many when performing comprehensive data analysis. ## What we will be learning today ### Goals - Understanding key Python data types, operators and data structures - Understanding functions - Understanding modules - Understanding errors and exceptions First data type we'll cover in detail is Numbers! ## Numbers ### Types of numbers Python has various "types" of numbers. We'll strictly cover integers and floating point numbers for now. Integers are just whole numbers, positive or negative. (2,4,-21,etc.) Floating point numbers in Python have a decimal point in them, or use an exponential (e). For example 3.14 and 2.17 are *floats*. 5E7 (5 times 10 to the power of 7) is also a float. This is scientific notation and something you've probably seen in math classes. Let's start working through numbers and arithmetic: ### Basic Arithmetic ``` # Addition 4+5 # Subtraction 5-10 # Multiplication 4*8 # Division 25/5 # Floor Division 12//4 ``` What happened here? The reason we get this result is because we are using "*floor*" division. The // operator (two forward slashes) removes any decimals and doesn't round. This always produces an integer answer. **So what if we just want the remainder of division?** ``` # Modulo 9 % 4 ``` 4 goes into 9 twice, with a remainder of 1. The % operator returns the remainder after division. 
### Arithmetic continued ``` # Powers 4**2 # A way to do roots 144**0.5 # Order of Operations 4 + 20 * 52 + 5 # Can use parentheses to specify orders (21+5) * (4+89) ``` ## Variable Assignments We can do a lot more with Python than just using it as a calculator. We can store any numbers we create in **variables**. We use a single equals sign to assign labels or values to variables. Let's see a few examples of how we can do this. ``` # Let's create a variable called "a" and assign to it the number 10 a = 10 a ``` Now if I call *a* in my Python script, Python will treat it as the integer 10. ``` # Adding the objects a+a ``` What happens on reassignment? Will Python let us write it over? ``` # Reassignment a = 20 # Check a ``` Yes! Python allows you to write over assigned variable names. We can also use the variables themselves when doing the reassignment. Here is an example of what I mean: ``` # Use A to redefine A a = a+a # check a ``` The names you use when creating these labels need to follow a few rules: 1. Names can not start with a number. 2. There can be no spaces in the name, use _ instead. 3. Can't use any of these symbols :'",<>/?|\()!@#$%^&*~-+ 4. Using lowercase names are best practice. 5. Can't words that have special meaning in Python like "list" and "str", we'll see why later Using variable names can be a very useful way to keep track of different variables in Python. For example: ``` # Use object names to keep better track of what's going on in your code! income = 1000 tax_rate = 0.2 taxes = income*tax_rate # Show the result! taxes ``` So what have we learned? We learned some of the basics of numbers in Python. We also learned how to do arithmetic and use Python as a basic calculator. We then wrapped it up with learning about Variable Assignment in Python. Up next we'll learn about Strings! ## Strings Strings are used in Python to record text information, such as names. Strings in Python are not treated like their own objects, but rather like a *sequence*, a consecutive series of characters. For example, Python understands the string "hello' to be a sequence of letters in a specific order. This means we will be able to use indexing to grab particular letters (like the first letter, or the last letter). ### Creating Strings To create a string in Python you need to use either single quotes or double quotes. For example: ``` # A word 'hi' # A phrase 'A string can even be a sentence like this.' # Using double quotes "The quote type doesn't really matter." # Be wary of contractions and apostrophes! 'I'm using single quotes, but this will create an error!' ``` The reason for the error above is because the single quote in <code>I'm</code> stopped the string. You can use combinations of double and single quotes to get the complete statement. ``` "This shouldn't cause an error now." ``` Now let's learn about printing strings! ### Printing Strings Jupyter Notebooks have many neat behaviors that aren't available in base python. One of those is the ability to print strings by just typing it into a cell. The universal way to display strings however, is to use a **print()** function. ``` # In Jupyter, this is all we need 'Hello World' # This is the same as: print('Hello World') # Without the print function, we can't print multiple times in one block of code: 'Hello World' 'Second string' ``` A print statement can look like the following. 
``` print('Hello World') print('Second string') print('\n prints a new line') print('\n') print('Just to prove it to you.') ``` Now let's move on to understanding how we can manipulate strings in our programs. ### String Basics Oftentimes, we would like to know how many characters are in a string. We can do this very easily with the **len()** function (short for 'length'). ``` len('Hello World') ``` Python's built-in len() function counts all of the characters in the string, including spaces and punctuation. Naturally, we can assign strings to variables. ``` # Assign 'Hello World' to mystring variable mystring = 'Hello World' # Did it work? mystring # Print it to make sure print(mystring) ``` As stated before, Python treats strings as a sequence of characters. That means we can interact with each letter in a string and manipulate it. The way we access these letters is called **indexing**. Each letter has an index, which corresponds to their position in the string. In python, indices start at 0. For instance, in the string 'Hello World', 'H' has an index of 0, 'e' has an index of 1, the 'W' has an index of 6 (because spaces count as characters), and 'd' has an index of 10. The syntax for indexing is shown below. ``` # Extract first character in a string. mystring[0] mystring[1] mystring[2] ``` We can use a <code>:</code> to perform *slicing* which grabs everything up to a designated index. For example: ``` # Grab all letters past the first letter all the way to the end of the string mystring[:] # This does not change the original string in any way mystring # Grab everything UP TO the 5th index mystring[:5] ``` Note what happened above. We told Python to grab everything from 0 up to 5. It doesn't include the character in the 5th index. You'll notice this a lot in Python, where statements are usually in the context of "up to, but not including". ``` # The whole string mystring[:] # The 'default' values, if you leave the sides of the colon blank, are 0 and the length of the string end = len(mystring) # See that is matches above mystring[0:end] ``` But we don't have to go forwards. Negative indexing allows us to start from the *end* of the string and work backwards. ``` # The LAST letter (one index 'behind' 0, so it loops back around) mystring[-1] # Grab everything but the last letter mystring[:-1] ``` We can also use indexing and slicing to grab characters by a specified step size (1 is the default). See the following examples. ``` # Grab everything (default), go in steps size of 1 mystring[::1] # Grab everything, but go in step sizes of 2 (every other letter) mystring[0::2] # A handy way to reverse a string! mystring[::-1] ``` Strings have certain properties to them that affect the way we can, and cannot, interact with them. ### String Properties It's important to note that strings are *immutable*. This means that once a string is created, the elements within it can not be changed or replaced. For example: ``` mystring # Let's try to change the first letter mystring[0] = 'a' ``` The error tells it us to straight. Strings do not support assignment the same way other data types do. However, we *can* **concatenate** strings. ``` mystring # Combine strings through concatenation mystring + ". It's me." # We can reassign mystring to a new value, however mystring = mystring + ". It's me." mystring ``` One neat trick we can do with strings is use multiplication whenever we want to repeat characters a certain number of times. ``` letter = 'a' letter*20 ``` We already saw how to use len(). 
This is an example of a built-in string method, but there are quite a few more which we will cover next. ### Basic Built-in String methods Objects in Python usually have built-in methods. These methods are functions inside the object that can perform actions or commands on the object itself. We call methods with a period and then the method name. Methods are in the form: object.method(parameters) Parameters are extra arguments we can pass into the method. Don't worry if the details don't make 100% sense right now. We will be going into more depth with these later. Here are some examples of built-in methods in strings: ``` mystring # Make all letters in a string uppercase mystring.upper() # Make all letters in a string lowercase mystring.lower() # Split strings with a specified character as the separator. Spaces are the default. mystring.split() # Split by a specific character (doesn't include the character in the resulting string) mystring.split('W') ``` ### 1.0 Now Try This Given the string 'Amsterdam' give an index command that returns 'd'. Enter your code in the cell below: ``` s = 'Amsterdam' # Print out 'd' using indexing answer1 = # INSERT CODE HERE print(answer1) ``` Reverse the string 'Amsterdam' using slicing: ``` s ='Amsterdam' # Reverse the string using slicing answer2 = # INSERT CODE HERE print(answer2) ``` Given the string Amsterdam, extract the letter 'm' using negative indexing. ``` s ='Amsterdam' # Print out the 'm' answer3 = # INSERT CODE HERE print(answer3) ``` ## Booleans Python comes with *booleans* (values that are essentially binary: True or False, 1 or 0). It also has a placeholder object called None. Let's walk through a few quick examples of Booleans. ``` # Set object to be a boolean a = True #Show a ``` We can also use comparison operators to create booleans. We'll cover comparison operators a little later. ``` # Output is boolean 1 > 2 ``` We can use None as a placeholder for an object that we don't want to reassign yet: ``` # None placeholder b = None # Show print(b) ``` That's all to booleans! Next we start covering data structures. First up, lists. ## Lists Earlier when discussing strings we introduced the concept of a *sequence*. Lists is the most generalized version of sequences in Python. Unlike strings, they are mutable, meaning the elements inside a list can be changed! Lists are constructed with brackets [] and commas separating every element in the list. Let's start with seeing how we can build a list. ### Creating Lists ``` # Assign a list to an variable named my_list my_list = [1,2,3] ``` We just created a list of integers, but lists can actually hold elements of multiple data types. For example: ``` my_list = ['A string',23,100.232,'o'] ``` Just like strings, the len() function will tell you how many items are in the sequence of the list. ``` len(my_list) my_list = ['one','two','three',4,5] # Grab element at index 0 my_list[0] # Grab index 1 and everything past it my_list[1:] # Grab everything UP TO index 3 my_list[:3] ``` We can also use + to concatenate lists, just like we did for strings. ``` my_list + ['new item'] ``` Note: This doesn't actually change the original list! ``` my_list ``` You would have to reassign the list to make the change permanent. 
```
# Reassign
my_list = my_list + ['add new item permanently']
my_list
```

We can also use the * for a duplication method similar to strings:

```
# Make the list double
my_list * 2

# Again, doubling is not permanent
my_list
```

Use the **append** method to permanently add an item to the end of a list:

```
# Append
my_list.append('append me!')

# Show
my_list
```

### List Comprehensions

Python has an advanced feature called list comprehensions. They allow for quick construction of lists. To fully understand list comprehensions we need to understand for loops. So don't worry if you don't completely understand this section, and feel free to just skip it since we will return to this topic later. But in case you want to know now, here are a few examples!

```
# First define a small matrix (a list of lists) to work with
matrix = [[1,2,3],[4,5,6],[7,8,9]]

# Build a list comprehension by deconstructing a for loop within a []
first_col = [row[0] for row in matrix]
first_col
```

We used a list comprehension here to grab the first element of every row in the matrix object. We will cover this in much more detail later on!

### 2.0 Now Try This

Build this list [0,0,0] using any of the shown ways.

```
# Build the list
answer1 = #INSERT CODE HERE
print(answer1)
```

## Tuples

In Python tuples are very similar to lists, however, unlike lists they are *immutable*, meaning they can not be changed. You would use tuples to represent things that shouldn't be changed, such as days of the week, or dates on a calendar.

You'll have an intuition of how to use tuples based on what you've learned about lists. We can treat them very similarly, with the major distinction being that tuples are immutable.

### Constructing Tuples

Tuples are constructed with () and elements separated by commas. For example:

```
# Create a tuple
t = (1,2,3)

# Check len just like a list
len(t)

# Can also mix object types
t = ('one',2)

# Show
t

# Use indexing just like we did in lists
t[0]

# Slicing just like a list
t[-1]
```

### Basic Tuple Methods

Tuples have built-in methods, but not as many as lists do. Let's look at two of them:

```
# Use .index to enter a value and return the index
t.index('one')

# Use .count to count the number of times a value appears
t.count('one')
```

### Immutability

It can't be stressed enough that tuples are immutable. To drive that point home:

```
# This raises a TypeError -- tuples don't support item assignment
t[0]= 'change'
```

Because of this immutability, tuples can't grow. Once a tuple is made we can not add to it.

```
# This raises an AttributeError -- tuples have no append method
t.append('nope')
```

### When to use Tuples

You may be wondering, "Why bother using tuples when they have fewer available methods?" To be honest, tuples are not used as often as lists in programming, but are used when immutability is necessary. If in your program you are passing around an object and need to make sure it does not get changed, then a tuple becomes your solution. It provides a convenient source of data integrity.

You should now be able to create and use tuples in your programming as well as have an understanding of their immutability.

### 3.0 Now Try This

Create a tuple.

```
answer1 = #INSERT CODE HERE
print(type(answer1))
```

## Dictionaries

We've been learning about *sequences* in Python, but now we're going to switch gears and learn about *mappings* in Python. If you're familiar with other languages you can think of dictionaries as hash tables.

So what are mappings? Mappings are a collection of objects that are stored by a *key*, unlike a sequence, which stores objects by their relative position. This is an important distinction, since mappings don't retain order, as there is no inherent *order* to the keys.
A Python dictionary consists of a key and then an associated value. That value can be almost any Python object. ### Constructing a Dictionary Let's see how we can build dictionaries and better understand how they work. ``` # Make a dictionary with {} and : to signify a key and a value my_dict = {'key1':'value1','key2':'value2'} # Call values by their key my_dict['key2'] ``` Its important to note that dictionaries are very flexible in the data types they can hold. For example: ``` my_dict = {'key1':123,'key2':[12,23,33],'key3':['item0','item1','item2']} # Let's call items from the dictionary my_dict['key3'] # Can call an index on that value my_dict['key3'][0] # Can then even call methods on that value my_dict['key3'][0].upper() ``` We can affect the values of a key as well. For instance: ``` my_dict['key1'] # Subtract 123 from the value my_dict['key1'] = my_dict['key1'] - 123 #Check my_dict['key1'] ``` A quick note, Python has a built-in method of doing a self subtraction or addition (or multiplication or division). We could have also used += or -= for the above statement. For example: ``` # Set the object equal to itself minus 123 my_dict['key1'] -= 123 my_dict['key1'] ``` We can also create keys by assignment. For instance if we started off with an empty dictionary, we could continually add to it: ``` # Create a new dictionary d = {} # Create a new key through assignment d['animal'] = 'Dog' # Can do this with any object d['answer'] = 42 #Show d ``` ### Nesting with Dictionaries Hopefully you're starting to see how powerful Python is with its flexibility of nesting objects and calling methods on them. Let's see a dictionary nested inside a dictionary: ``` # Dictionary nested inside a dictionary nested inside a dictionary d = {'key1':{'nestkey':{'subnestkey':'value'}}} ``` Seems complicated, but let's see how we can grab that value: ``` # Keep calling the keys d['key1']['nestkey']['subnestkey'] ``` ### Dictionary Methods There are a few methods we can call on a dictionary. Let's get a quick introduction to a few of them: ``` # Create a typical dictionary d = {'key1':1,'key2':2,'key3':3} # Method to return a list of all keys d.keys() # Method to grab all values d.values() # Method to return tuples of all items (we'll learn about tuples soon) d.items() ``` ### 4.0 Now Try This Using keys and indexing, grab the 'hello' from the following dictionaries: ``` d = {'simple_key':'hello'} # Grab 'hello' answer1 = #INSERT CODE HERE print(answer1) d = {'k1':{'k2':'hello'}} # Grab 'hello' answer2 = #INSERT CODE HERE print(answer2) # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} #Grab hello answer3 = #INSERT CODE HERE print(answer3) # This will be hard and annoying! d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} # Grab hello answer4 = #INSERT CODE HERE print(answer4) ``` ## Comparison Operators As stated previously, comparison operators allow us to compare variables and output a Boolean value (True or False). These operators are the exact same as what you've seen in Math, so there's nothing new here. 
First we'll present a table of the comparison operators and then work through some examples: <h2> Table of Comparison Operators </h2><p> In the table below, a=9 and b=11.</p> <table class="table table-bordered"> <tr> <th style="width:10%">Operator</th><th style="width:45%">Description</th><th>Example</th> </tr> <tr> <td>==</td> <td>If the values of two operands are equal, then the condition becomes true.</td> <td> (a == b) is not true.</td> </tr> <tr> <td>!=</td> <td>If the values of two operands are not equal, then the condition becomes true.</td> <td>(a != b) is true</td> </tr> <tr> <td>&gt;</td> <td>If the value of the left operand is greater than the value of the right operand, then the condition becomes true.</td> <td> (a &gt; b) is not true.</td> </tr> <tr> <td>&lt;</td> <td>If the value of the left operand is less than the value of the right operand, then the condition becomes true.</td> <td> (a &lt; b) is true.</td> </tr> <tr> <td>&gt;=</td> <td>If the value of the left operand is greater than or equal to the value of the right operand, then the condition becomes true.</td> <td> (a &gt;= b) is not true. </td> </tr> <tr> <td>&lt;=</td> <td>If the value of the left operand is less than or equal to the value of the right operand, then the condition becomes true.</td> <td> (a &lt;= b) is true. </td> </tr> </table> Let's now work through quick examples of each of these. #### Equal ``` 4 == 4 1 == 0 ``` Note that <code>==</code> is a <em>comparison</em> operator, while <code>=</code> is an <em>assignment</em> operator. #### Not Equal ``` 4 != 5 1 != 1 ``` #### Greater Than ``` 8 > 3 1 > 9 ``` #### Less Than ``` 3 < 8 7 < 0 ``` #### Greater Than or Equal to ``` 7 >= 7 9 >= 4 ``` #### Less than or Equal to ``` 4 <= 4 1 <= 3 ``` Hopefully this was more of a review than anything new! Next, we move on to one of the most important aspects of building programs: functions and how to use them. ## Functions ### Introduction to Functions Here, we will explain what a function is in Python and how to create one. Functions will be one of our main building blocks when we construct larger and larger amounts of code to solve problems. **So what is a function?** Formally, a function is a useful device that groups together a set of statements so they can be run more than once. They can also let us specify parameters that can serve as inputs to the functions. On a more fundamental level, functions allow us to not have to repeatedly write the same code again and again. If you remember back to the lessons on strings and lists, remember that we used a function len() to get the length of a string. Since checking the length of a sequence is a common task you would want to write a function that can do this repeatedly at command. Functions will be one of most basic levels of reusing code in Python, and it will also allow us to start thinking of program design. ### def Statements Let's see how to build out a function's syntax in Python. It has the following form: ``` def name_of_function(arg1,arg2): ''' This is where the function's Document String (docstring) goes ''' # Do stuff here # Return desired result ``` We begin with <code>def</code> then a space followed by the name of the function. Try to keep names relevant, for example len() is a good name for a length() function. Also be careful with names, you wouldn't want to call a function the same name as a [built-in function in Python](https://docs.python.org/2/library/functions.html) (such as len). 
Next come a pair of parentheses with a number of arguments separated by a comma. These arguments are the inputs for your function. You'll be able to use these inputs in your function and reference them. After this you put a colon. Now here is the important step, you must indent to begin the code inside your function correctly. Python makes use of *whitespace* to organize code. Lots of other programing languages do not do this, so keep that in mind. Next you'll see the docstring, this is where you write a basic description of the function. Docstrings are not necessary for simple functions, but it's good practice to put them in so you or other people can easily understand the code you write. After all this you begin writing the code you wish to execute. The best way to learn functions is by going through examples. So let's try to go through examples that relate back to the various objects and data structures we learned about before. ### A simple print 'hello' function ``` def say_hello(): print('hello') ``` Call the function: ``` say_hello() ``` ### A simple greeting function Let's write a function that greets people with their name. ``` def greeting(name): print('Hello %s' %(name)) greeting('Bob') ``` ### Using return Let's see some example that use a <code>return</code> statement. <code>return</code> allows a function to *return* a result that can then be stored as a variable, or used in whatever manner a user wants. ### Example 3: Addition function ``` def add_num(num1,num2): return num1+num2 add_num(4,5) # Can also save as variable due to return result = add_num(4,5) print(result) ``` What happens if we input two strings? ``` add_num('one','two') ``` Note that because we don't declare variable types in Python, this function could be used to add numbers or sequences together! We'll later learn about adding in checks to make sure a user puts in the correct arguments into a function. Let's also start using <code>break</code>, <code>continue</code>, and <code>pass</code> statements in our code. We introduced these during the <code>while</code> lecture. Finally let's go over a full example of creating a function to check if a number is prime (a common interview exercise). We know a number is prime if that number is only evenly divisible by 1 and itself. Let's write our first version of the function to check all the numbers from 1 to N and perform modulo checks. ``` def is_prime(num): ''' Naive method of checking for primes. ''' for n in range(2,num): #'range()' is a function that returns an array based on the range you provide. Here, it is from 2 to 'num' inclusive. if num % n == 0: print(num,'is not prime') break # 'break' statements signify that we exit the loop if the above condition holds true else: # If never mod zero, then prime print(num,'is prime!') is_prime(16) is_prime(17) ``` Note how the <code>else</code> lines up under <code>for</code> and not <code>if</code>. This is because we want the <code>for</code> loop to exhaust all possibilities in the range before printing our number is prime. Also note how we break the code after the first print statement. As soon as we determine that a number is not prime we break out of the <code>for</code> loop. We can actually improve this function by only checking to the square root of the target number, and by disregarding all even numbers after checking for 2. We'll also switch to returning a boolean value to get an example of using return statements: ``` import math def is_prime2(num): ''' Better method of checking for primes. 
''' if num % 2 == 0 and num > 2: return False for i in range(3, int(math.sqrt(num)) + 1, 2): if num % i == 0: return False return True is_prime2(27) ``` Why don't we have any <code>break</code> statements? It should be noted that as soon as a function *returns* something, it shuts down. A function can deliver multiple print statements, but it will only obey one <code>return</code>. ### 5.0 Now Try This Write a function that capitalizes the first and fourth letters of a name. For this, you might want to make use of a string's `.upper()` method. cap_four('macdonald') --> MacDonald Note: `'macdonald'.capitalize()` returns `'Macdonald'` ``` def cap_four(name): return new_name # Check answer1 = cap_four('macdonald') print(answer1) ``` ## Modules and Packages ### Understanding modules Modules in Python are simply Python files with the .py extension, which implement a set of functions. Modules are imported from other modules using the import command. To import a module, we use the import command. Check out the full list of built-in modules in the Python standard library here. The first time a module is loaded into a running Python script, it is initialized by executing the code in the module once. If another module in your code imports the same module again, it will not be loaded twice. If we want to import the math module, we simply import the name of the module: ``` # import the library import math # use it (ceiling rounding) math.ceil(3.2) ``` ## Why, Where, and How we use NumPy NumPy is a library for Python that allows you to create matrices and multidimensional arrays, as well as perform many sophisticated mathematical operations on them. Previously, dealing with anything more than a single-dimensional array was very difficult in base Python. Additionally, there weren't a lot of built-in functionality to perform many standard mathematical operations that data scientists typically do with data, such as transposing, dot products, cumulative sums, etc. All of this makes NumPy very useful in statistical analyses and analyzing datasets to produce insights. ### Creating Arrays NumPy allows you to work with arrays very efficiently. The array object in NumPy is called *ndarray*. This is short for 'n-dimensional array'. We can create a NumPy ndarray object by using the array() function. ``` import numpy as np arr = np.array([1,2,3,4,5,6,7,8,9,10]) print(arr) print(type(arr)) ``` ### Indexing Indexing is the same thing as accessing an element of a list or string. In this case, we will be accessing an array element. You can access an array element by referring to its **index number**. The indexes in NumPy arrays start with 0, also like in base Python. The following example shows how you can access multiple elements of an array and perform operations on them. ``` import numpy as np arr = np.array([1,2,3,4,5,6,7,8,9,10]) print(arr[4] + arr[8]) ``` ### Slicing Slicing in NumPy behaves much like in base Python, a quick recap from above: We slice using this syntax: [start:end]. We can also define the step, like this: [start:end:step]. ``` # Reverse an array through backwards/negative stepping import numpy as np arr = np.array([3,7,9,0]) print(arr[::-1]) # Slice elements from the beginning to index 8 import numpy as np arr = np.array([1, 2, 3, 4, 5, 6, 7,8,9,10]) print(arr[:8]) ``` You'll notice we only got to index 7. That's because the end is always *non-inclusive*. We slice up to but not including the end value. The start index on the other hand, **is** inclusive. 
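The `[start:end:step]` form described above is only demonstrated with default or reversed steps so far; here is a small added example (not part of the original lab) that uses all three parts explicitly.

```
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# Start at index 1, stop before index 8, take every 2nd element
print(arr[1:8:2])   # [2 4 6 8]
```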
### 6.0 Now Try This: Create an array of at least size 10 and populate it with random numbers. Then, use slicing to split it into two and create two new arrays. Then, find the sum of the third digits in each array. ``` # Answer here ``` ### Data Types Just like base Python, NumPy has many data types available. They are all differentiated by a single character, and here are the most common few: * i - int / integer (whole numbers) * b - boolean (true/false) * f - float (decimal numbers) * S - string * There are many more too! ``` # Checking the data type of an array import numpy as np arr = np.array([5, 7, 3, 1]) print(arr.dtype) # How to convert between types import numpy as np arr = np.array([4.4, 24.1, 3.7]) print(arr) print(arr.dtype) # Converts decimal numbers by rounding them all down to whole numbers newarr = arr.astype('i') print(newarr) print(newarr.dtype) ``` ### 7.0 Now Try This: Modify the code below to fix the error and make the addition work: ``` import numpy as np arr = np.array([1,3,5,7],dtype='S') arr2 = np.array([2,4,6,8],dtype='i') print(arr + arr2) ``` ### Copy vs. View In NumPy, you can work with either a copy of the data or the data itself, and it's very important that you know the difference. Namely, modifying a copy of the data will not change the original dataset but modifying the view **will**. Here are some examples: ``` # A Copy import numpy as np arr = np.array([6, 2, 1, 5, 3]) x = arr.copy() arr[0] = 8 print(arr) print(x) # A View import numpy as np arr = np.array([6, 2, 1, 5, 3]) x = arr.view() arr[0] = 8 print(arr) print(x) ``` ### 8.0 Now Try This: A student wants to create a copy of an array and modify the first element. The following is the code they wrote for it: arr = np.array([1,2,3,4,5]) x = arr x[0] = 0 Is this correct? ### Shape All NumPy arrays have an attribute called *shape*. This is helpful for 2d or n-dimensional arrays, but for simple lists, it is just the number of elements that it has. ``` # Print the shape of an array import numpy as np arr = np.array([2,7,3,7]) print(arr.shape) ``` ### 9.0 Now Try This: Without using Python, what is the shape of this array? Answer in the same format as the `shape` method. arr = np.array([[0,1,2].[3,4,5]) ### Iterating Through Arrays Iterating simply means to traverse or travel through an object. In the case of arrays, we can iterate through them by using simple for loops. ``` import numpy as np arr = np.array([1, 5, 7]) for x in arr: print(x) ``` ### Joining Arrays Joining combining the elements of multiple arrays into one. The basic way to do it is like this: ``` import numpy as np arr1 = np.array([7, 1, 0]) arr2 = np.array([2, 8, 1]) arr = np.concatenate((arr1, arr2)) print(arr) ``` ### Splitting Arrays Splitting is the opposite of joining arrays. It takes one array and creates multiple from it. ``` # Split array into 4 import numpy as np arr = np.array([1, 2, 3, 4, 5, 6,7,8]) newarr = np.array_split(arr, 4) print(newarr) ``` ### Searching Arrays Searching an array to find a certain element is a very important and basic operation. We can do this using the *where()* method. ``` import numpy as np arr = np.array([1, 2, 5, 9, 5, 3, 4]) x = np.where(arr == 4) # Returns the index of the array element(s) that matches this condition print(x) # Find all the odd numbers in an array import numpy as np arr = np.array([10, 20, 30, 40, 50, 60, 70, 80,99]) x = np.where(arr%2 == 1) print(x) ``` ### Sorting Arrays Sorting an array is another very important and commonly used operation. 
NumPy has a function called sort() for this task. ``` import numpy as np arr = np.array([4, 1, 0, 3]) print(np.sort(arr)) # Sorting a string array alphabetically import numpy as np arr = np.array(['zephyr', 'gate', 'match']) print(np.sort(arr)) ``` ### Filtering Arrays Sometimes you would want to create a new array from an existing array where you select elements out based on a certain condition. Let's say you have an array with all integers from 1 to 10. You would like to create a new array with only the odd numbers from that list. You can do this very efficiently with **filtering**. When you filter something, you only take out what you want, and the same principle applies to objects in NumPy. NumPy uses what's called a **boolean index list** to filter. This is an array of True and False values that correspond directly to the target array and what values you would like to filter. For example, using the example above, the target array would look like this: [1,2,3,4,5,6,7,8,9,10] And if you wanted to filter out the odd values, you would use this particular boolean index list: [True,False,True,False,True,False,True,False,True,False] Applying this list onto the target array will get you what you want: [1,3,5,7,9] A working code example is shown below: ``` import numpy as np arr = np.array([51, 52, 53, 54]) x = [False, False, True, True] newarr = arr[x] print(newarr) ``` We don't need to hard-code the True and False values. Like stated previously, we can filter based on conditions. ``` arr = np.array([51, 52, 53, 54]) # Create an empty list filter_arr = [] # go through each element in arr for element in arr: # if the element is higher than 52, set the value to True, otherwise False: if element > 52: filter_arr.append(True) else: filter_arr.append(False) newarr = arr[filter_arr] print(filter_arr) print(newarr) ``` Filtering is a very common task when working with data and as such, NumPy has an even more efficient way to perform it. It is possible to create a boolean index list directly from the target array and then apply it to obtain the filtered array. See the example below: ``` import numpy as np arr = np.array([10,20,30,40,50,60,70,80,90,100]) filter = arr > 50 filter_arr = arr[filter] print(filter) print(filter_arr) ``` ### 10.0 Now Try This: Create an array with the first 10 numbers of the Fibonacci sequence. Split this array into two. On each half, search for any multiples of 4. Next, filter both arrays for multiples of 5. Finally, take the two filtered arrays, join them, and sort them. ``` # Answer here ``` ## Resources - [Python Documentation](https://docs.python.org/3/) - [Official Python Tutorial](https://docs.python.org/3/tutorial/) - [W3Schools Python Tutorial](https://www.w3schools.com/python/)
Move current working directory, in case for developing the machine learning program by remote machine or it is fine not to use below single line. ``` %cd /tmp/pycharm_project_881 import numpy as np import pandas as pd def sigmoid(x): return 1/(1+np.exp(-x)) def softmax(x): x = x - x.max(axis=1, keepdims=True) return np.exp(x)/np.sum(np.exp(x),axis=1, keepdims=True) df = pd.read_csv("adult.data.txt", names=["age","workclass","fnlwgt","education","education-num","marital-status" \ ,"occupation","relationship","race","sex","capital-gain","capital-loss","hours-per-week","native-country","class"]) dx = pd.read_csv("adult.test.txt", names=["age","workclass","fnlwgt","education","education-num","marital-status" \ ,"occupation","relationship","race","sex","capital-gain","capital-loss","hours-per-week","native-country","class"]) df.head() for lf in df: if df[lf].dtype == "object": df[lf] = df[lf].astype("category").cat.codes dx[lf] = dx[lf].astype("category").cat.codes else : df[lf] = (df[lf] - df[lf].mean())/(df[lf].max() - df[lf].min()) dx[lf] = (dx[lf] - dx[lf].mean()) / (dx[lf].max() - dx[lf].min()) df.head() ``` Set initial hyperparameters.. ``` x = df.drop(columns=["class"]) y = df["class"].values x_test = dx.drop(columns=["class"]) y_test = dx["class"].values multi_y = np.zeros((y.size, y.max()+1)) multi_y[np.arange(y.size), y] = 1 multi_y_test = np.zeros((y_test.size, y_test.max()+1)) multi_y_test[np.arange(y_test.size), y_test] = 1 inputSize = len(x.columns) numberOfNodes = 150 numberOfClass = y.max() + 1 numberOfExamples = x.shape[0] w1 = np.random.random_sample(size=(inputSize, numberOfNodes)) b1 = np.random.random_sample(numberOfNodes) w2 = np.random.random_sample(size=(numberOfNodes, numberOfClass)) b2 = np.random.random_sample(numberOfClass) batchSize = 32 trainNum = 150 learningRate = 0.01 # Start Training for k in range(trainNum + 1): cost = 0 accuracy = 0 for i in range(int(numberOfExamples/batchSize)): # Forward-Propagation z = x[i * batchSize : (i+1) * batchSize] z_y = multi_y[i * batchSize : (i+1) * batchSize] layer1 = np.matmul(z, w1) + b1 sig_layer1 = sigmoid(layer1) layer2 = np.matmul(sig_layer1, w2) + b2 soft_layer2 = softmax(layer2) pred = np.argmax(soft_layer2, axis=1) # Cost Function: Cross-Entropy loss cost += -(z_y * np.log(soft_layer2 + 1e-9) + (1-z_y) * np.log(1 - soft_layer2 + 1e-9)).sum() accuracy += (pred == y[i * batchSize : (i + 1) * batchSize]).sum() # Back-Propagation dlayer2 = soft_layer2 - multi_y[i * batchSize : (i+1) * batchSize] dw2 = np.matmul(sig_layer1.T, dlayer2) / batchSize db2 = dlayer2.mean(axis=0) dsig_layer1 = (dlayer2.dot(w2.T)) dlayer1 = sigmoid(layer1) * (1 - sigmoid(layer1)) * dsig_layer1 dw1 = np.matmul(z.T, dlayer1) / batchSize db1 = dlayer1.mean(axis=0) w2 -= learningRate * dw2 w1 -= learningRate * dw1 b2 -= learningRate * db2 b1 -= learningRate * db1 if k % 10 == 0 : print("-------- # : {} ---------".format(k)) print("cost: {}".format(cost/numberOfExamples)) print("accuracy: {} %".format(accuracy/numberOfExamples * 100)) # Test the trained model test_cost = 0 test_accuracy = 0 # Forward-Propagation layer1 = np.matmul(x_test, w1) + b1 sig_layer1 = sigmoid(layer1) layer2 = np.matmul(sig_layer1, w2) + b2 soft_layer2 = softmax(layer2) pred = np.argmax(soft_layer2, axis=1) # Cost Function: Cross-Entropy loss test_cost += -(multi_y_test * np.log(soft_layer2 + 1e-9) + (1-multi_y_test) * np.log(1 - soft_layer2 + 1e-9)).sum() test_accuracy += (pred == y_test).sum() print("---- Result of applying test data to the trained model") print("cost: 
{}".format(test_cost/numberOfExamples)) print("accuracy: {} %".format(test_accuracy/numberOfExamples * 100)) ```
## Fashion Item Recognition with CNN > Antonopoulos Ilias (p3352004) <br /> > Ndoja Silva (p3352017) <br /> > MSc Data Science AUEB ## Table of Contents - [Data Loading](#Data-Loading) - [Hyperparameter Tuning](#Hyperparameter-Tuning) - [Model Selection](#Model-Selection) - [Evaluation](#Evaluation) ``` import gc import itertools import numpy as np import keras_tuner as kt import matplotlib.pyplot as plt import tensorflow as tf from sklearn.metrics import confusion_matrix print(tf.__version__) print("Num GPUs Available: ", len(tf.config.list_physical_devices("GPU"))) ``` ### Data Loading ``` fashion_mnist = tf.keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() train_images.shape train_labels set(train_labels) test_images.shape ``` This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories, along with a test set of 10,000 images. The classes are: | Label | Description | |:-----:|-------------| | 0 | T-shirt/top | | 1 | Trouser | | 2 | Pullover | | 3 | Dress | | 4 | Coat | | 5 | Sandal | | 6 | Shirt | | 7 | Sneaker | | 8 | Bag | | 9 | Ankle boot | ``` class_names = [ "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot", ] ``` ### Hyperparameter Tuning ``` SEED = 123456 np.random.seed(SEED) tf.random.set_seed(SEED) def clean_up(model_): tf.keras.backend.clear_session() del model_ gc.collect() def cnn_model_builder(hp): """Creates a HyperModel instance (or callable that takes hyperparameters and returns a Model instance).""" model = tf.keras.Sequential( [ tf.keras.layers.Conv2D( filters=hp.Int("1st-filter", min_value=32, max_value=128, step=16), kernel_size=(3, 3), strides=(1, 1), padding="same", kernel_regularizer="l2", dilation_rate=(1, 1), activation="relu", input_shape=(28, 28, 1), name="1st-convolution", ), tf.keras.layers.MaxPool2D( pool_size=(2, 2), strides=(2, 2), padding="same", name="1st-max-pooling" ), tf.keras.layers.Dropout( rate=hp.Float("1st-dropout", min_value=0.0, max_value=0.4, step=0.1), name="1st-dropout", ), tf.keras.layers.Conv2D( filters=hp.Int("2nd-filter", min_value=32, max_value=64, step=16), kernel_size=(3, 3), strides=(1, 1), padding="same", kernel_regularizer="l2", dilation_rate=(1, 1), activation="relu", name="2nd-convolution", ), tf.keras.layers.MaxPool2D( pool_size=(2, 2), strides=(2, 2), padding="same", name="2nd-max-pooling" ), tf.keras.layers.Dropout( rate=hp.Float("2nd-dropout", min_value=0.0, max_value=0.4, step=0.1), name="2nd-dropout", ), tf.keras.layers.Flatten(name="flatten-layer"), tf.keras.layers.Dense( units=hp.Int("dense-layer-units", min_value=32, max_value=128, step=16), kernel_regularizer="l2", activation="relu", name="dense-layer", ), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(units=10, activation="softmax", name="output-layer"), ] ) model.compile( optimizer=tf.keras.optimizers.Adam( learning_rate=hp.Choice( "learning-rate", values=[1e-3, 1e-4, 2 * 1e-4, 4 * 1e-4] ) ), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=["accuracy"], ) return model # BayesianOptimization tuning with Gaussian process # THERE IS A BUG HERE: https://github.com/keras-team/keras-tuner/pull/655 # tuner = kt.BayesianOptimization( # cnn_model_builder, # objective="val_accuracy", # max_trials=5, # the total number of trials (model configurations) to test at most # allow_new_entries=True, # tune_new_entries=True, # seed=SEED, # directory="hparam-tuning", # project_name="cnn", # ) # Li, Lisha, and Kevin Jamieson. 
# "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization." # Journal of Machine Learning Research 18 (2018): 1-52. # https://jmlr.org/papers/v18/16-558.html tuner = kt.Hyperband( cnn_model_builder, objective="val_accuracy", max_epochs=50, # the maximum number of epochs to train one model seed=SEED, directory="hparam-tuning", project_name="cnn", ) tuner.search_space_summary() stop_early = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5) tuner.search( train_images, train_labels, epochs=40, validation_split=0.2, callbacks=[stop_early] ) # get the optimal hyperparameters best_hps = tuner.get_best_hyperparameters(num_trials=1)[0] print( f""" The hyperparameter search is complete. \n Results ======= | ---- optimal number of output filters in the 1st convolution : {best_hps.get('1st-filter')} | ---- optimal first dropout rate : {best_hps.get('1st-dropout')} | ---- optimal number of output filters in the 2nd convolution : {best_hps.get('2nd-filter')} | ---- optimal second dropout rate : {best_hps.get('2nd-dropout')} | ---- optimal number of units in the densely-connected layer : {best_hps.get('dense-layer-units')} | ---- optimal learning rate for the optimizer : {best_hps.get('learning-rate')} """ ) ``` ### Model Selection ``` model = tuner.get_best_models(num_models=1)[0] model.summary() tf.keras.utils.plot_model( model, to_file="static/cnn_model.png", show_shapes=True, show_layer_names=True ) clean_up(model) # build the model with the optimal hyperparameters and train it on the data for 50 epochs model = tuner.hypermodel.build(best_hps) history = model.fit(train_images, train_labels, epochs=50, validation_split=0.2) # keep best epoch val_acc_per_epoch = history.history["val_accuracy"] best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1 print("Best epoch: %d" % (best_epoch,)) clean_up(model) hypermodel = tuner.hypermodel.build(best_hps) # retrain the model history = hypermodel.fit( train_images, train_labels, epochs=best_epoch, validation_split=0.2 ) ``` ### Evaluation ``` eval_result = hypermodel.evaluate(test_images, test_labels, verbose=3) print("[test loss, test accuracy]:", eval_result) def plot_history(hs, epochs, metric): print() plt.style.use("dark_background") plt.rcParams["figure.figsize"] = [15, 8] plt.rcParams["font.size"] = 16 plt.clf() for label in hs: plt.plot( hs[label].history[metric], label="{0:s} train {1:s}".format(label, metric), linewidth=2, ) plt.plot( hs[label].history["val_{0:s}".format(metric)], label="{0:s} validation {1:s}".format(label, metric), linewidth=2, ) x_ticks = np.arange(0, epochs + 1, epochs / 10) x_ticks[0] += 1 plt.xticks(x_ticks) plt.ylim((0, 1)) plt.xlabel("Epochs") plt.ylabel("Loss" if metric == "loss" else "Accuracy") plt.legend() plt.show() print("Train Loss : {0:.5f}".format(history.history["loss"][-1])) print("Validation Loss : {0:.5f}".format(history.history["val_loss"][-1])) print("Test Loss : {0:.5f}".format(eval_result[0])) print("-------------------") print("Train Accuracy : {0:.5f}".format(history.history["accuracy"][-1])) print("Validation Accuracy : {0:.5f}".format(history.history["val_accuracy"][-1])) print("Test Accuracy : {0:.5f}".format(eval_result[1])) # Plot train and validation error per epoch. 
plot_history(hs={"CNN": history}, epochs=best_epoch, metric="loss") plot_history(hs={"CNN": history}, epochs=best_epoch, metric="accuracy") def plot_confusion_matrix( cm, classes, normalize=False, title="Confusion matrix", cmap=plt.cm.PuBuGn ): plt.style.use("default") plt.rcParams["figure.figsize"] = [11, 9] plt.imshow(cm, interpolation="nearest", cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=90) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis] thresh = cm.max() / 2.0 for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text( j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black", ) plt.tight_layout() plt.ylabel("True label") plt.xlabel("Predicted label") # Predict the values from the validation dataset Y_pred = hypermodel.predict(test_images) # Convert predictions classes to one hot vectors Y_pred_classes = np.argmax(Y_pred, axis=1) # compute the confusion matrix confusion_mtx = confusion_matrix(test_labels, Y_pred_classes) # plot the confusion matrix plot_confusion_matrix( confusion_mtx, classes=class_names, ) incorrect = [] for i in range(len(test_labels)): if not Y_pred_classes[i] == test_labels[i]: incorrect.append(i) if len(incorrect) == 4: break fig, ax = plt.subplots(2, 2, figsize=(12, 6)) fig.set_size_inches(10, 10) ax[0, 0].imshow(test_images[incorrect[0]].reshape(28, 28), cmap="gray") ax[0, 0].set_title( "Predicted Label : " + class_names[Y_pred_classes[incorrect[0]]] + "\n" + "Actual Label : " + class_names[test_labels[incorrect[0]]] ) ax[0, 1].imshow(test_images[incorrect[1]].reshape(28, 28), cmap="gray") ax[0, 1].set_title( "Predicted Label : " + class_names[Y_pred_classes[incorrect[1]]] + "\n" + "Actual Label : " + class_names[test_labels[incorrect[1]]] ) ax[1, 0].imshow(test_images[incorrect[2]].reshape(28, 28), cmap="gray") ax[1, 0].set_title( "Predicted Label : " + class_names[Y_pred_classes[incorrect[2]]] + "\n" + "Actual Label : " + class_names[test_labels[incorrect[2]]] ) ax[1, 1].imshow(test_images[incorrect[3]].reshape(28, 28), cmap="gray") ax[1, 1].set_title( "Predicted Label : " + class_names[Y_pred_classes[incorrect[3]]] + "\n" + "Actual Label : " + class_names[test_labels[incorrect[3]]] ) ```
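As a complement to the confusion matrix and the misclassified examples above, per-class precision, recall and F1 scores can be printed with scikit-learn, reusing the predictions already computed (this cell is an addition to the notebook):

```
from sklearn.metrics import classification_report

# Per-class precision, recall and F1 on the test set
print(classification_report(test_labels, Y_pred_classes, target_names=class_names))
```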
``` import sys import keras import tensorflow as tf print('python version:', sys.version) print('keras version:', keras.__version__) print('tensorflow version:', tf.__version__) ``` # 6.3 Advanced use of recurrent neural networks --- ## A temperature-forecasting problem ### Inspecting the data of the Jena weather dataset ``` import matplotlib.pyplot as plt import numpy as np import os %matplotlib inline data_dir = 'jena_climate' fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv') f = open(fname) data = f.read() f.close() lines = data.split('\n') header = lines[0].split(',') lines = lines[1:] print(header) print(len(lines)) ``` ### Parsing the data ``` float_data = np.zeros((len(lines), len(header) - 1)) for i, line in enumerate(lines): values = [float(x) for x in line.split(',')[1:]] float_data[i, :] = values ``` ### Plotting the temperature timeseries ``` temp = float_data[:, 1] plt.plot(range(len(temp)), temp) plt.show() ``` ### Plotting the first 10 days of the temperature timeseries ``` plt.plot(range(1440), temp[:1440]) plt.show() ``` ### Normalizing the data ``` mean = float_data[:200000].mean(axis = 0) float_data -= mean std = float_data[:200000].std(axis = 0) float_data /= std ``` ### Generator yielding timeseries samples and their targets ``` def generator(data, lookback, delay, min_index, max_index, shuffle = False, batch_size = 128, step = 6, revert = False): if max_index is None: max_index = len(data) - delay - 1 i = min_index + lookback while 1: if shuffle: rows = np.random.randint(min_index + lookback, max_index, size = batch_size) else: if i + batch_size >= max_index: i = min_index + lookback rows = np.arange(i, min(i + batch_size, max_index)) i += len(rows) samples = np.zeros((len(rows), lookback//step, data.shape[-1])) targets = np.zeros((len(rows),)) for j, row in enumerate(rows): indices = range(rows[j] - lookback, rows[j], step) samples[j] = data[indices] targets[j] = data[rows[j] + delay][1] if revert: yield samples[:, ::-1, :], targets else: yield samples, targets ``` ### Preparing the training, validation and test generators ``` lookback = 1440 step = 6 delay = 144 batch_size = 128 train_gen = generator(float_data, lookback = lookback, delay = delay, min_index = 0, max_index = 200000, shuffle = True, step = step, batch_size = batch_size) val_gen = generator(float_data, lookback = lookback, delay = delay, min_index = 200001, max_index = 300000, step = step, batch_size = batch_size) test_gen = generator(float_data, lookback = lookback, delay = delay, min_index = 300001, max_index = None, step = step, batch_size = batch_size) train_gen_r = generator(float_data, lookback = lookback, delay = delay, min_index = 0, max_index = 200000, shuffle = True, step = step, batch_size = batch_size, revert = True) val_gen_r = generator(float_data, lookback = lookback, delay = delay, min_index = 200001, max_index = 300000, step = step, batch_size = batch_size, revert = True) test_gen_r = generator(float_data, lookback = lookback, delay = delay, min_index = 300001, max_index = None, step = step, batch_size = batch_size, revert = True) # How many steps to draw from val_gen in order to see the entire validation set val_steps = (300000 - 200001 - lookback) // batch_size # How many steps to draw from test_gen in order to see the entire test set test_steps = (len(float_data) - 300001 - lookback) // batch_size ``` ### Computing the common-sense baseline MAE ``` def evaluate_naive_method(): batch_maes = [] for step in range(val_steps): samples, targets = next(val_gen) preds = 
samples[:, -1, 1] mae = np.mean(np.abs(preds - targets)) batch_maes.append(mae) print(np.mean(batch_maes)) evaluate_naive_method() ``` ### Training and evaluating a densely connected model ``` from keras import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.Flatten(input_shape = (lookback // step, float_data.shape[-1]))) model.add(layers.Dense(32, activation = 'relu')) model.add(layers.Dense(1)) model.compile(optimizer = RMSprop(), loss = 'mae') history = model.fit_generator(train_gen, steps_per_epoch = 500, epochs = 20, validation_data = val_gen, validation_steps = val_steps) ``` ### Plotting results ``` loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.figure() plt.plot(epochs, loss, 'bo', label = 'Training loss') plt.plot(epochs, val_loss, 'b', label = 'Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` ### Training and evaluating a GRU-based model ``` from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.GRU(32, implementation = 1, input_shape = (None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer = RMSprop(), loss = 'mae') history = model.fit_generator(train_gen, steps_per_epoch = 500, epochs = 20, validation_data = val_gen, validation_steps = val_steps) ``` ### Plotting results ``` loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.figure() plt.plot(epochs, loss, 'bo', label = 'Training loss') plt.plot(epochs, val_loss, 'b', label = 'Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` ### Training and evaluating a dropout-regularized GRU-based model ``` from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.GRU(32, implementation = 1, dropout = 0.2, recurrent_dropout = 0.2, input_shape = (None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer = RMSprop(), loss = 'mae') history = model.fit_generator(train_gen, steps_per_epoch = 500, epochs = 40, validation_data = val_gen, validation_steps = val_steps) ``` ### Plotting results ``` loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.figure() plt.plot(epochs, loss, 'bo', label = 'Training loss') plt.plot(epochs, val_loss, 'b', label = 'Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` ### Training and evaluating a dropout-regularized, stacked GRU model ``` from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.GRU(32, implementation = 1, dropout = 0.1, recurrent_dropout = 0.5, return_sequences = True, input_shape = (None, float_data.shape[-1]))) model.add(layers.GRU(64, implementation = 1, activation = 'relu', dropout = 0.1, recurrent_dropout = 0.5)) model.add(layers.Dense(1)) model.compile(optimizer = RMSprop(), loss = 'mae') history = model.fit_generator(train_gen, steps_per_epoch = 500, epochs = 40, validation_data = val_gen, validation_steps = val_steps) ``` ### Plotting results ``` loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.figure() plt.plot(epochs, loss, 'bo', label = 'Training loss') plt.plot(epochs, val_loss, 'b', label = 'Validation 
loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` ### Training and evaluating an GRU-based model using reversed sequences ``` from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.GRU(32, implementation = 1, input_shape = (None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer = RMSprop(), loss = 'mae') history = model.fit_generator(train_gen_r, steps_per_epoch = 500, epochs = 20, validation_data = val_gen_r, validation_steps = val_steps) ``` ### Plotting results ``` loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.figure() plt.plot(epochs, loss, 'bo', label = 'Training loss') plt.plot(epochs, val_loss, 'b', label = 'Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` ### Training and evaluating an LSTM using reversed sequences ``` from keras.datasets import imdb from keras.preprocessing import sequence from keras import layers from keras.models import Sequential max_features = 10000 # Number of words to consider as features maxlen = 500 # Cuts off texts after this number of words (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words = max_features) # Reverses sequences x_train = [x[::-1] for x in x_train] x_test = [x[::-1] for x in x_test] # Pads sequences x_train = sequence.pad_sequences(x_train, maxlen = maxlen) x_test = sequence.pad_sequences(x_test, maxlen = maxlen) model = Sequential() model.add(layers.Embedding(max_features, 128)) model.add(layers.LSTM(32)) model.add(layers.Dense(1, activation = 'sigmoid')) model.compile(optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics = ['acc']) history = model.fit(x_train, y_train, epochs = 10, batch_size = 128, validation_split = 0.2) ``` ### Training and evaluating a bidirectional LSTM ``` model = Sequential() model.add(layers.Embedding(max_features, 32)) model.add(layers.Bidirectional(layers.LSTM(32))) model.add(layers.Dense(1, activation = 'sigmoid')) model.compile(optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics = ['acc']) history = model.fit(x_train, y_train, epochs = 10, batch_size = 128, validation_split = 0.2) ``` ### Training a bidirectional GRU ``` from keras.models import Sequential from keras import layers from keras.optimizers import RMSprop model = Sequential() model.add(layers.Bidirectional(layers.GRU(32, implementation = 1), input_shape = (None, float_data.shape[-1]))) model.add(layers.Dense(1)) model.compile(optimizer = RMSprop(), loss = 'mae') history = model.fit_generator(train_gen, steps_per_epoch = 500, epochs = 40, validation_data = val_gen, validation_steps = val_steps) ``` ### Plotting results ``` loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.figure() plt.plot(epochs, loss, 'bo', label = 'Training loss') plt.plot(epochs, val_loss, 'b', label = 'Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() ```
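Note that `test_gen` and `test_steps` were prepared earlier but never used. Once a final model has been selected, its error on the held-out test period can be checked with the same generator-style API used by `fit_generator`; a minimal sketch (the reported value is the MAE on normalized data, so multiplying by the temperature standard deviation converts it back to degrees Celsius):

```
# Evaluate the most recently trained model on the test generator
test_mae = model.evaluate_generator(test_gen, steps=test_steps)
print('Test MAE (normalized):', test_mae)
print('Test MAE (degrees Celsius):', test_mae * std[1])
```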
# Improving Data Quality **Learning Objectives** 1. Resolve missing values 2. Convert the Date feature column to a datetime format 3. Rename a feature column, remove a value from a feature column 4. Create one-hot encoding features 5. Understand temporal feature conversions ## Introduction Recall that machine learning models can only consume numeric data, and that numeric data should be "1"s or "0"s. Data is said to be "messy" or "untidy" if it is missing attribute values, contains noise or outliers, has duplicates, wrong data, upper/lower case column names, and is essentially not ready for ingestion by a machine learning algorithm. This notebook presents and solves some of the most common issues of "untidy" data. Note that different problems will require different methods, and they are beyond the scope of this notebook. Each learning objective will correspond to a __#TODO__ in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/launching_into_ml/labs/improve_data_quality.ipynb) -- try to complete that notebook first before reviewing this solution notebook. ``` # Use the chown command to change the ownership of the repository to user !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst ``` ### Import Libraries ``` import os # Here we'll import Pandas and Numpy data processing libraries import pandas as pd import numpy as np from datetime import datetime # Use matplotlib for visualizing the model import matplotlib.pyplot as plt # Use seaborn for data visualization import seaborn as sns %matplotlib inline ``` ### Load the Dataset The dataset is based on California's [Vehicle Fuel Type Count by Zip Code](https://data.ca.gov/dataset/vehicle-fuel-type-count-by-zip-codeSynthetic) report. The dataset has been modified to make the data "untidy" and is thus a synthetic representation that can be used for learning purposes. ``` # Creating directory to store dataset if not os.path.isdir("../data/transport"): os.makedirs("../data/transport") # Download the raw .csv data by copying the data from a cloud storage bucket. !gsutil cp gs://cloud-training-demos/feat_eng/transport/untidy_vehicle_data.csv ../data/transport # ls shows the working directory's contents. # Using the -l parameter will lists files with assigned permissions !ls -l ../data/transport ``` ### Read Dataset into a Pandas DataFrame Next, let's read in the dataset just copied from the cloud storage bucket and create a Pandas DataFrame. We also add a Pandas .head() function to show you the top 5 rows of data in the DataFrame. Head() and Tail() are "best-practice" functions used to investigate datasets. ``` # Reading "untidy_vehicle_data.csv" file using the read_csv() function included in the pandas library. df_transport = pd.read_csv('../data/transport/untidy_vehicle_data.csv') # Output the first five rows. df_transport.head() ``` ### DataFrame Column Data Types DataFrames may have heterogenous or "mixed" data types, that is, some columns are numbers, some are strings, and some are dates etc. Because CSV files do not contain information on what data types are contained in each column, Pandas infers the data types when loading the data, e.g. if a column contains only numbers, Pandas will set that column’s data type to numeric: integer or float. Run the next cell to see information on the DataFrame. ``` # The .info() function will display the concise summary of an dataframe. 
df_transport.info() ``` From what the .info() function shows us, we have six string objects and one float object. We can definitely see more of the "string" object values now! ``` # Let's print out the first and last five rows of each column. print(df_transport,5) ``` ### Summary Statistics At this point, we have only one column which contains a numerical value (e.g. Vehicles). For features which contain numerical values, we are often interested in various statistical measures relating to those values. Note, that because we only have one numeric feature, we see only one summary stastic - for now. ``` # We can use .describe() to see some summary statistics for the numeric fields in our dataframe. df_transport.describe() ``` Let's investigate a bit more of our data by using the .groupby() function. ``` # The .groupby() function is used for spliting the data into groups based on some criteria. grouped_data = df_transport.groupby(['Zip Code','Model Year','Fuel','Make','Light_Duty','Vehicles']) # Get the first entry for each month. df_transport.groupby('Fuel').first() ``` ### Checking for Missing Values Missing values adversely impact data quality, as they can lead the machine learning model to make inaccurate inferences about the data. Missing values can be the result of numerous factors, e.g. "bits" lost during streaming transmission, data entry, or perhaps a user forgot to fill in a field. Note that Pandas recognizes both empty cells and “NaN” types as missing values. #### Let's show the null values for all features in the DataFrame. ``` df_transport.isnull().sum() ``` To see a sampling of which values are missing, enter the feature column name. You'll notice that "False" and "True" correpond to the presence or abscence of a value by index number. ``` print (df_transport['Date']) print (df_transport['Date'].isnull()) print (df_transport['Make']) print (df_transport['Make'].isnull()) print (df_transport['Model Year']) print (df_transport['Model Year'].isnull()) ``` ### What can we deduce about the data at this point? # Let's summarize our data by row, column, features, unique, and missing values. ``` # In Python shape() is used in pandas to give the number of rows/columns. # The number of rows is given by .shape[0]. The number of columns is given by .shape[1]. # Thus, shape() consists of an array having two arguments -- rows and columns print ("Rows : " ,df_transport.shape[0]) print ("Columns : " ,df_transport.shape[1]) print ("\nFeatures : \n" ,df_transport.columns.tolist()) print ("\nUnique values : \n",df_transport.nunique()) print ("\nMissing values : ", df_transport.isnull().sum().values.sum()) ``` Let's see the data again -- this time the last five rows in the dataset. ``` # Output the last five rows in the dataset. df_transport.tail() ``` ### What Are Our Data Quality Issues? 1. **Data Quality Issue #1**: > **Missing Values**: Each feature column has multiple missing values. In fact, we have a total of 18 missing values. 2. **Data Quality Issue #2**: > **Date DataType**: Date is shown as an "object" datatype and should be a datetime. In addition, Date is in one column. Our business requirement is to see the Date parsed out to year, month, and day. 3. **Data Quality Issue #3**: > **Model Year**: We are only interested in years greater than 2006, not "<2006". 4. **Data Quality Issue #4**: > **Categorical Columns**: The feature column "Light_Duty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. 
In addition, we need to "one-hot encode the remaining "string"/"object" columns. 5. **Data Quality Issue #5**: > **Temporal Features**: How do we handle year, month, and day? #### Data Quality Issue #1: ##### Resolving Missing Values Most algorithms do not accept missing values. Yet, when we see missing values in our dataset, there is always a tendency to just "drop all the rows" with missing values. Although Pandas will fill in the blank space with “NaN", we should "handle" them in some way. While all the methods to handle missing values is beyond the scope of this lab, there are a few methods you should consider. For numeric columns, use the "mean" values to fill in the missing numeric values. For categorical columns, use the "mode" (or most frequent values) to fill in missing categorical values. In this lab, we use the .apply and Lambda functions to fill every column with its own most frequent value. You'll learn more about Lambda functions later in the lab. Let's check again for missing values by showing how many rows contain NaN values for each feature column. ``` # The isnull() method is used to check and manage NULL values in a data frame. # TODO 1a df_transport.isnull().sum() ``` Run the cell to apply the lambda function. ``` # Here we are using the apply function with lambda. # We can use the apply() function to apply the lambda function to both rows and columns of a dataframe. # TODO 1b df_transport = df_transport.apply(lambda x:x.fillna(x.value_counts().index[0])) ``` Let's check again for missing values. ``` # The isnull() method is used to check and manage NULL values in a data frame. # TODO 1c df_transport.isnull().sum() ``` #### Data Quality Issue #2: ##### Convert the Date Feature Column to a Datetime Format ``` # The date column is indeed shown as a string object. We can convert it to the datetime datatype with the to_datetime() function in Pandas. # TODO 2a df_transport['Date'] = pd.to_datetime(df_transport['Date'], format='%m/%d/%Y') # Date is now converted and will display the concise summary of an dataframe. # TODO 2b df_transport.info() # Now we will parse Date into three columns that is year, month, and day. df_transport['year'] = df_transport['Date'].dt.year df_transport['month'] = df_transport['Date'].dt.month df_transport['day'] = df_transport['Date'].dt.day #df['hour'] = df['date'].dt.hour - you could use this if your date format included hour. #df['minute'] = df['date'].dt.minute - you could use this if your date format included minute. # The .info() function will display the concise summary of an dataframe. df_transport.info() ``` # Let's confirm the Date parsing. This will also give us a another visualization of the data. ``` # Here, we are creating a new dataframe called "grouped_data" and grouping by on the column "Make" grouped_data = df_transport.groupby(['Make']) # Get the first entry for each month. df_transport.groupby('Fuel').first() ``` Now that we have Dates as a integers, let's do some additional plotting. ``` # Here we will visualize our data using the figure() function in the pyplot module of matplotlib's library -- which is used to create a new figure. plt.figure(figsize=(10,6)) # Seaborn's .jointplot() displays a relationship between 2 variables (bivariate) as well as 1D profiles (univariate) in the margins. This plot is a convenience class that wraps JointGrid. 
sns.jointplot(x='month',y='Vehicles',data=df_transport) # The title() method in matplotlib module is used to specify title of the visualization depicted and displays the title using various attributes. plt.title('Vehicles by Month') ``` #### Data Quality Issue #3: ##### Rename a Feature Column and Remove a Value. Our feature columns have different "capitalizations" in their names, e.g. both upper and lower "case". In addition, there are "spaces" in some of the column names. In addition, we are only interested in years greater than 2006, not "<2006". We can also resolve the "case" problem too by making all the feature column names lower case. ``` # Let's remove all the spaces for feature columns by renaming them. # TODO 3a df_transport.rename(columns = { 'Date': 'date', 'Zip Code':'zipcode', 'Model Year': 'modelyear', 'Fuel': 'fuel', 'Make': 'make', 'Light_Duty': 'lightduty', 'Vehicles': 'vehicles'}, inplace = True) # Output the first two rows. df_transport.head(2) ``` **Note:** Next we create a copy of the dataframe to avoid the "SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame" warning. Run the cell to remove the value '<2006' from the modelyear feature column. ``` # Here, we create a copy of the dataframe to avoid copy warning issues. # TODO 3b df = df_transport.loc[df_transport.modelyear != '<2006'].copy() # Here we will confirm that the modelyear value '<2006' has been removed by doing a value count. df['modelyear'].value_counts(0) ``` #### Data Quality Issue #4: ##### Handling Categorical Columns The feature column "lightduty" is categorical and has a "Yes/No" choice. We cannot feed values like this into a machine learning model. We need to convert the binary answers from strings of yes/no to integers of 1/0. There are various methods to achieve this. We will use the "apply" method with a lambda expression. Pandas. apply() takes a function and applies it to all values of a Pandas series. ##### What is a Lambda Function? Typically, Python requires that you define a function using the def keyword. However, lambda functions are anonymous -- which means there is no need to name them. The most common use case for lambda functions is in code that requires a simple one-line function (e.g. lambdas only have a single expression). As you progress through the Course Specialization, you will see many examples where lambda functions are being used. Now is a good time to become familiar with them. ``` # Lets count the number of "Yes" and"No's" in the 'lightduty' feature column. df['lightduty'].value_counts(0) # Let's convert the Yes to 1 and No to 0. # The .apply takes a function and applies it to all values of a Pandas series (e.g. lightduty). df.loc[:,'lightduty'] = df['lightduty'].apply(lambda x: 0 if x=='No' else 1) df['lightduty'].value_counts(0) # Confirm that "lightduty" has been converted. df.head() ``` #### One-Hot Encoding Categorical Feature Columns Machine learning algorithms expect input vectors and not categorical features. Specifically, they cannot handle text or string values. Thus, it is often useful to transform categorical features into vectors. One transformation method is to create dummy variables for our categorical features. Dummy variables are a set of binary (0 or 1) variables that each represent a single class from a categorical feature. We simply encode the categorical variable as a one-hot vector, i.e. a vector where only one element is non-zero, or hot. 
With one-hot encoding, a categorical feature becomes an array whose size is the number of possible choices for that feature. Panda provides a function called "get_dummies" to convert a categorical variable into dummy/indicator variables. ``` # Making dummy variables for categorical data with more inputs. data_dummy = pd.get_dummies(df[['zipcode','modelyear', 'fuel', 'make']], drop_first=True) # Output the first five rows. data_dummy.head() # Merging (concatenate) original data frame with 'dummy' dataframe. # TODO 4a df = pd.concat([df,data_dummy], axis=1) df.head() # Dropping attributes for which we made dummy variables. Let's also drop the Date column. # TODO 4b df = df.drop(['date','zipcode','modelyear', 'fuel', 'make'], axis=1) # Confirm that 'zipcode','modelyear', 'fuel', and 'make' have been dropped. df.head() ``` #### Data Quality Issue #5: ##### Temporal Feature Columns Our dataset now contains year, month, and day feature columns. Let's convert the month and day feature columns to meaningful representations as a way to get us thinking about changing temporal features -- as they are sometimes overlooked. Note that the Feature Engineering course in this Specialization will provide more depth on methods to handle year, month, day, and hour feature columns. ``` # Let's print the unique values for "month", "day" and "year" in our dataset. print ('Unique values of month:',df.month.unique()) print ('Unique values of day:',df.day.unique()) print ('Unique values of year:',df.year.unique()) ``` Don't worry, this is the last time we will use this code, as you can develop an input pipeline to address these temporal feature columns in TensorFlow and Keras - and it is much easier! But, sometimes you need to appreciate what you're not going to encounter as you move through the course! Run the cell to view the output. ``` # Here we map each temporal variable onto a circle such that the lowest value for that variable appears right next to the largest value. We compute the x- and y- component of that point using the sin and cos trigonometric functions. df['day_sin'] = np.sin(df.day*(2.*np.pi/31)) df['day_cos'] = np.cos(df.day*(2.*np.pi/31)) df['month_sin'] = np.sin((df.month-1)*(2.*np.pi/12)) df['month_cos'] = np.cos((df.month-1)*(2.*np.pi/12)) # Let's drop month, and day # TODO 5 df = df.drop(['month','day','year'], axis=1) # scroll left to see the converted month and day coluumns. df.tail(4) ``` ### Conclusion This notebook introduced a few concepts to improve data quality. We resolved missing values, converted the Date feature column to a datetime format, renamed feature columns, removed a value from a feature column, created one-hot encoding features, and converted temporal features to meaningful representations. By the end of our lab, we gained an understanding as to why data should be "cleaned" and "pre-processed" before input into a machine learning model. Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Books Recommender System ![](http://labs.criteo.com/wp-content/uploads/2017/08/CustomersWhoBought3.jpg) This is the second part of my project on Book Data Analysis and Recommendation Systems. In my first notebook ([The Story of Book](https://www.kaggle.com/omarzaghlol/goodreads-1-the-story-of-book/)), I attempted at narrating the story of book by performing an extensive exploratory data analysis on Books Metadata collected from Goodreads. In this notebook, I will attempt at implementing a few recommendation algorithms (Basic Recommender, Content-based and Collaborative Filtering) and try to build an ensemble of these models to come up with our final recommendation system. # What's in this kernel? - [Importing Libraries and Loading Our Data](#1) - [Clean the dataset](#2) - [Simple Recommender](#3) - [Top Books](#4) - [Top "Genres" Books](#5) - [Content Based Recommender](#6) - [Cosine Similarity](#7) - [Popularity and Ratings](#8) - [Collaborative Filtering](#9) - [User Based](#10) - [Item Based](#11) - [Hybrid Recommender](#12) - [Conclusion](#13) - [Save Model](#14) # Importing Libraries and Loading Our Data <a id="1"></a> <br> ``` import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import datetime import warnings warnings.filterwarnings('ignore') books = pd.read_csv('../input/goodbooks-10k//books.csv') ratings = pd.read_csv('../input/goodbooks-10k//ratings.csv') book_tags = pd.read_csv('../input/goodbooks-10k//book_tags.csv') tags = pd.read_csv('../input/goodbooks-10k//tags.csv') ``` # Clean the dataset <a id="2"></a> <br> As with nearly any real-life dataset, we need to do some cleaning first. When exploring the data I noticed that for some combinations of user and book there are multiple ratings, while in theory there should only be one (unless users can rate a book several times). Furthermore, for the collaborative filtering it is better to have more ratings per user. So I decided to remove users who have rated fewer than 3 books. ``` books['original_publication_year'] = books['original_publication_year'].fillna(-1).apply(lambda x: int(x) if x != -1 else -1) ratings_rmv_duplicates = ratings.drop_duplicates() unwanted_users = ratings_rmv_duplicates.groupby('user_id')['user_id'].count() unwanted_users = unwanted_users[unwanted_users < 3] unwanted_ratings = ratings_rmv_duplicates[ratings_rmv_duplicates.user_id.isin(unwanted_users.index)] new_ratings = ratings_rmv_duplicates.drop(unwanted_ratings.index) new_ratings['title'] = books.set_index('id').title.loc[new_ratings.book_id].values new_ratings.head(10) ``` # Simple Recommender <a id="3"></a> <br> The Simple Recommender offers generalized recommnendations to every user based on book popularity and (sometimes) genre. The basic idea behind this recommender is that books that are more popular and more critically acclaimed will have a higher probability of being liked by the average audience. This model does not give personalized recommendations based on the user. The implementation of this model is extremely trivial. All we have to do is sort our books based on ratings and popularity and display the top books of our list. As an added step, we can pass in a genre argument to get the top books of a particular genre. I will use IMDB's *weighted rating* formula to construct my chart. Mathematically, it is represented as follows: Weighted Rating (WR) = $(\frac{v}{v + m} . R) + (\frac{m}{v + m} . 
C)$

where,
* *v* is the number of ratings for the book
* *m* is the minimum ratings required to be listed in the chart
* *R* is the average rating of the book
* *C* is the mean rating across the whole report

The next step is to determine an appropriate value for *m*, the minimum ratings required to be listed in the chart. We will use the **95th percentile** as our cutoff. In other words, for a book to feature in the charts, it must have more ratings than at least 95% of the books in the list.

I will build our overall Top 250 Chart and will define a function to build charts for a particular genre. Let's begin!

```
v = books['ratings_count']
m = books['ratings_count'].quantile(0.95)
R = books['average_rating']
C = books['average_rating'].mean()
W = (R*v + C*m) / (v + m)

books['weighted_rating'] = W

qualified = books.sort_values('weighted_rating', ascending=False).head(250)
```

## Top Books <a id="4"></a> <br>

```
qualified[['title', 'authors', 'average_rating', 'weighted_rating']].head(15)
```

We see that J.K. Rowling's **Harry Potter** books occur at the very top of our chart. The chart also indicates a strong bias of Goodreads users towards particular genres and authors.

Let us now construct our function that builds charts for particular genres. For this, we will relax our default cutoff to the **85th** percentile instead of the 95th.

## Top "Genres" Books <a id="5"></a> <br>

```
book_tags.head()

tags.head()

genres = ["Art", "Biography", "Business", "Chick Lit", "Children's", "Christian", "Classics", "Comics", "Contemporary",
          "Cookbooks", "Crime", "Ebooks", "Fantasy", "Fiction", "Gay and Lesbian", "Graphic Novels", "Historical Fiction",
          "History", "Horror", "Humor and Comedy", "Manga", "Memoir", "Music", "Mystery", "Nonfiction", "Paranormal",
          "Philosophy", "Poetry", "Psychology", "Religion", "Romance", "Science", "Science Fiction", "Self Help",
          "Suspense", "Spirituality", "Sports", "Thriller", "Travel", "Young Adult"]
genres = list(map(str.lower, genres))
genres[:4]

available_genres = tags.loc[tags.tag_name.str.lower().isin(genres)]
available_genres.head()

available_genres_books = book_tags[book_tags.tag_id.isin(available_genres.tag_id)]
print('There are {} books that are tagged with above genres'.format(available_genres_books.shape[0]))
available_genres_books.head()

available_genres_books['genre'] = available_genres.tag_name.loc[available_genres_books.tag_id].values
available_genres_books.head()

def build_chart(genre, percentile=0.85):
    df = available_genres_books[available_genres_books['genre'] == genre.lower()]
    qualified = books.set_index('book_id').loc[df.goodreads_book_id]

    v = qualified['ratings_count']
    m = qualified['ratings_count'].quantile(percentile)
    R = qualified['average_rating']
    C = qualified['average_rating'].mean()
    qualified['weighted_rating'] = (R*v + C*m) / (v + m)

    qualified.sort_values('weighted_rating', ascending=False, inplace=True)

    return qualified
```

Let us see our method in action by displaying the Top 15 Fiction Books (Fiction almost didn't feature at all in our generic Top Chart despite being one of the most popular book genres).

```
cols = ['title','authors','original_publication_year','average_rating','ratings_count','work_text_reviews_count','weighted_rating']

genre = 'Fiction'
build_chart(genre)[cols].head(15)
```

For simplicity, you can just pass the index of the wanted genre from below.
```
list(enumerate(available_genres.tag_name))

idx = 24 # romance
build_chart(list(available_genres.tag_name)[idx])[cols].head(15)
```

# Content Based Recommender <a id="6"></a> <br>

![](https://miro.medium.com/max/828/1*1b-yMSGZ1HfxvHiJCiPV7Q.png)

The recommender we built in the previous section suffers from some severe limitations. For one, it gives the same recommendation to everyone, regardless of the user's personal taste. If a person who loves business books (and hates fiction) were to look at our Top 15 Chart, s/he probably wouldn't like most of the books. If s/he were to go one step further and look at our charts by genre, s/he still wouldn't be getting the best recommendations.

For instance, consider a person who loves *The Fault in Our Stars* and *Twilight*. One inference we can draw is that the person loves romantic books. Even if s/he were to access the romance chart, s/he wouldn't find these as the top recommendations.

To personalise our recommendations more, I am going to build an engine that computes similarity between books based on certain metrics and suggests books that are most similar to a particular book that a user liked. Since we will be using book metadata (or content) to build this engine, this is also known as **Content Based Filtering.**

I will build this recommender based on a book's *Title*, *Authors* and *Genres*.

```
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.metrics.pairwise import linear_kernel, cosine_similarity
```

My approach to building the recommender is going to be extremely *hacky*. These are the steps I plan to take:

1. **Strip spaces and convert to lowercase** in the authors column. This way, our engine will not confuse **Stephen Covey** with **Stephen King**.
2. Combine each book with its corresponding **genres**.
3. Use a **Count Vectorizer** to create our count matrix. Finally, we calculate the cosine similarities and return the books that are most similar.

```
books['authors'] = books['authors'].apply(lambda x: [str.lower(i.replace(" ", "")) for i in x.split(', ')])

def get_genres(x):
    t = book_tags[book_tags.goodreads_book_id==x]
    return [i.lower().replace(" ", "") for i in tags.tag_name.loc[t.tag_id].values]

books['genres'] = books.book_id.apply(get_genres)

books['soup'] = books.apply(lambda x: ' '.join([x['title']] + x['authors'] + x['genres']), axis=1)
books.soup.head()

count = CountVectorizer(analyzer='word',ngram_range=(1, 2),min_df=0, stop_words='english')
count_matrix = count.fit_transform(books['soup'])
```

## Cosine Similarity <a id="7"></a> <br>

I will be using the Cosine Similarity to calculate a numeric quantity that denotes the similarity between two books. Mathematically, it is defined as follows:

$cosine(x,y) = \frac{x \cdot y^\intercal}{||x|| \cdot ||y||}$

```
cosine_sim = cosine_similarity(count_matrix, count_matrix)

indices = pd.Series(books.index, index=books['title'])
titles = books['title']

def get_recommendations(title, n=10):
    idx = indices[title]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:31]
    book_indices = [i[0] for i in sim_scores]
    return list(titles.iloc[book_indices].values)[:n]

get_recommendations("The One Minute Manager")
```

What if I want a specific book but can't remember its full name? For that, I created the following *method* to get book titles from a **partial** title.
```
def get_name_from_partial(title):
    return list(books.title[books.title.str.lower().str.contains(title) == True].values)

title = "business"
l = get_name_from_partial(title)
list(enumerate(l))

get_recommendations(l[1])
```

## Popularity and Ratings <a id="8"></a> <br>

One thing that we notice about our recommendation system is that it recommends books regardless of ratings and popularity. It is true that ***Across the River and Into the Trees*** and ***The Old Man and the Sea*** were both written by **Ernest Hemingway**, but the former was considered a bad (though not the worst) book that shouldn't be recommended to anyone, since most people hated it for its static plot and overwrought emotion.

Therefore, we will add a mechanism to remove bad books and return books which are popular and have had a good critical response. I will take the top 30 books based on similarity scores and calculate the ratings count of the 60th percentile book. Then, using this as the value of $m$, we will calculate the weighted rating of each book using IMDB's formula like we did in the Simple Recommender section.

```
def improved_recommendations(title, n=10):
    idx = indices[title]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:31]
    book_indices = [i[0] for i in sim_scores]

    df = books.iloc[book_indices][['title', 'ratings_count', 'average_rating', 'weighted_rating']]

    v = df['ratings_count']
    m = df['ratings_count'].quantile(0.60)
    R = df['average_rating']
    C = df['average_rating'].mean()
    df['weighted_rating'] = (R*v + C*m) / (v + m)

    qualified = df[df['ratings_count'] >= m]
    qualified = qualified.sort_values('weighted_rating', ascending=False)

    return qualified.head(n)

improved_recommendations("The One Minute Manager")

improved_recommendations(l[1])
```

I think the ranking of similar books is better now than before. Therefore, we will conclude our Content Based Recommender section here and come back to it when we build a hybrid engine.

# Collaborative Filtering <a id="9"></a> <br>

![](https://miro.medium.com/max/706/1*DYJ-HQnOVvmm5suNtqV3Jw.png)

Our content based engine suffers from some severe limitations. It is only capable of suggesting books which are *close* to a certain book. That is, it is not capable of capturing tastes and providing recommendations across genres.

Also, the engine that we built is not really personal in that it doesn't capture the personal tastes and biases of a user. Anyone querying our engine for recommendations based on a book will receive the same recommendations for that book, regardless of who s/he is.

Therefore, in this section, we will use a technique called **Collaborative Filtering** to make recommendations to book readers. Collaborative Filtering is based on the idea that users similar to me can be used to predict how much I will like a particular product or service that those users have used/experienced but I have not.

I will not be implementing Collaborative Filtering from scratch. Instead, I will use the **Surprise** library, which provides extremely powerful algorithms like **Singular Value Decomposition (SVD)** to minimise RMSE (Root Mean Square Error) and give great recommendations.

There are two classes of Collaborative Filtering:

![](https://miro.medium.com/max/1280/1*QvhetbRjCr1vryTch_2HZQ.jpeg)

- **User-based**, which measures the similarity between target users and other users.
- **Item-based**, which measures the similarity between the items that target users rate or interact with and other items.
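As a toy illustration of the user-based idea (a hedged sketch on a made-up 4x3 rating matrix, entirely separate from the Surprise-based pipeline implemented below), user similarity can be computed directly from the rating vectors and used to fill in an unknown rating:

```
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical ratings: 4 users x 3 books, 0 means "not rated yet"
toy_ratings = np.array([[5, 4, 0],
                        [4, 5, 1],
                        [1, 2, 5],
                        [0, 1, 4]])

# User-user similarity from the rating vectors
user_sim = cosine_similarity(toy_ratings)

# Predict user 0's rating for book 2 as a similarity-weighted
# average of the other users' ratings for that book
others = [1, 2, 3]
weights = user_sim[0, others]
predicted = np.dot(weights, toy_ratings[others, 2]) / weights.sum()
print(round(predicted, 2))
```

Item-based filtering applies the same idea to the columns (books) instead of the rows (users), which is what the correlation-based approach in the Item Based section below does.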
## - User Based <a id="10"></a> <br>

```
# ! pip install surprise

from surprise import Reader, Dataset, SVD
from surprise.model_selection import cross_validate

reader = Reader()
data = Dataset.load_from_df(new_ratings[['user_id', 'book_id', 'rating']], reader)

svd = SVD()
cross_validate(svd, data, measures=['RMSE', 'MAE'])
```

We get a mean **Root Mean Square Error** of about 0.8419, which is more than good enough for our case. Let us now train on our dataset and arrive at predictions.

```
trainset = data.build_full_trainset()
svd.fit(trainset);
```

Let us pick user 10 and check the ratings s/he has given.

```
new_ratings[new_ratings['user_id'] == 10]

svd.predict(10, 1506)
```

For the book with ID 1506, we get an estimated prediction of **3.393**. One startling feature of this recommender system is that it doesn't care what the book is (or what it contains). It works purely on the basis of an assigned book ID and tries to predict the rating based on how other users have rated the book.

## - Item Based <a id="11"></a> <br>

Here we will build a table of users with their corresponding ratings for each book.

```
# bookmat = new_ratings.groupby(['user_id', 'title'])['rating'].mean().unstack()
bookmat = new_ratings.pivot_table(index='user_id', columns='title', values='rating')
bookmat.head()

def get_similar(title, mat):
    title_user_ratings = mat[title]
    similar_to_title = mat.corrwith(title_user_ratings)
    corr_title = pd.DataFrame(similar_to_title, columns=['correlation'])
    corr_title.dropna(inplace=True)
    corr_title.sort_values('correlation', ascending=False, inplace=True)
    return corr_title

title = "Twilight (Twilight, #1)"
smlr = get_similar(title, bookmat)
smlr.head(10)
```

Ok, we got similar books, but we need to filter them by their *ratings_count*.

```
smlr = smlr.join(books.set_index('title')['ratings_count'])
smlr.head()
```

Get similar books with at least 500k ratings.

```
smlr[smlr.ratings_count > 5e5].sort_values('correlation', ascending=False).head(10)
```

That's a more interesting and reasonable result, since we now get the *Twilight* book series in our top results.

# Hybrid Recommender <a id="12"></a> <br>

![](https://www.toonpool.com/user/250/files/hybrid_20095.jpg)

In this section, I will try to build a simple hybrid recommender that brings together techniques we have implemented in the content based and collaborative filtering based engines.

This is how it will work:

* **Input:** User ID and the Title of a Book
* **Output:** Similar books sorted on the basis of expected ratings by that particular user.

```
def hybrid(user_id, title, n=10):
    idx = indices[title]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:51]
    book_indices = [i[0] for i in sim_scores]

    df = books.iloc[book_indices][['book_id', 'title', 'original_publication_year', 'ratings_count', 'average_rating']]
    df['est'] = df['book_id'].apply(lambda x: svd.predict(user_id, x).est)
    df = df.sort_values('est', ascending=False)
    return df.head(n)

hybrid(4, 'Eat, Pray, Love')

hybrid(10, 'Eat, Pray, Love')
```

We see that for our hybrid recommender, we get (almost entirely) different recommendations for different users although the book is the same. But maybe we can make it better through the following steps:

1. Use our *improved_recommendations* technique that we used in the **Content Based** section above
2. Combine it with the user *estimations* by dividing their sum by 2
3. Finally, put the result into a new feature ***score***
```
def improved_hybrid(user_id, title, n=10):
    idx = indices[title]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:51]
    book_indices = [i[0] for i in sim_scores]

    df = books.iloc[book_indices][['book_id', 'title', 'ratings_count', 'average_rating', 'original_publication_year']]

    v = df['ratings_count']
    m = df['ratings_count'].quantile(0.60)
    R = df['average_rating']
    C = df['average_rating'].mean()
    df['weighted_rating'] = (R*v + C*m) / (v + m)

    df['est'] = df['book_id'].apply(lambda x: svd.predict(user_id, x).est)

    df['score'] = (df['est'] + df['weighted_rating']) / 2
    df = df.sort_values('score', ascending=False)

    return df[['book_id', 'title', 'original_publication_year', 'ratings_count', 'average_rating', 'score']].head(n)

improved_hybrid(4, 'Eat, Pray, Love')

improved_hybrid(10, 'Eat, Pray, Love')
```

OK, we see that the new results make more sense; in addition, the recommendations are more personalized and tailored towards particular users.

# Conclusion <a id="13"></a> <br>

In this notebook, I have built 4 different recommendation engines based on different ideas and algorithms. They are as follows:

1. **Simple Recommender:** This system used overall Goodreads ratings counts and rating averages to build Top Books Charts, in general and for a specific genre. The IMDB Weighted Rating System was used to calculate ratings on which the sorting was finally performed.
2. **Content Based Recommender:** We built content based engines that took book title, authors and genres as input to come up with predictions. We also devised a simple filter to give greater preference to books with more votes and higher ratings.
3. **Collaborative Filtering:** We built two Collaborative Filters;
    - one that uses the powerful Surprise library to build a **user-based** filter based on singular value decomposition; the RMSE obtained was less than 1, and the engine gave estimated ratings for a given user and book.
    - and the other (**item-based**), which built a pivot table of user ratings for each book, and the engine gave similar books for a given book.
4. **Hybrid Engine:** We brought together ideas from content and collaborative filtering to build an engine that gave book suggestions to a particular user based on the estimated ratings that it had internally calculated for that user.

Previous -> [The Story of Book](https://www.kaggle.com/omarzaghlol/goodreads-1-the-story-of-book/)
# WGAN

Original paper: Wasserstein GAN https://arxiv.org/abs/1701.07875 (2017)

WGAN changes the GAN loss so that, mathematically, training for image generation behaves better.

A standard GAN uses the KL divergence to pull the probability distribution produced by the Generator toward the distribution of the images we want to generate. However, the KL divergence does not guarantee continuity, so WGAN instead uses the Wasserstein distance for this approximation.

To realize a loss based on the Wasserstein distance, the WGAN Discriminator does not apply a sigmoid at its output. In other words, the loss is not a sigmoid cross-entropy; the raw Discriminator output is used as-is.

In each iteration, the WGAN algorithm alternates between the following Discriminator and Generator updates.

- Optimizer: RMSProp (learning rate: 0.0005)

#### Discriminator update (repeat the following `critic` times)

1. Sample real images, and sample z from a uniform distribution
2. Compute the loss $L_D = \frac{1}{|Minibatch|} \{ \sum_{i} D(x^{(i)}) - \sum_i D (G(z^{(i)})) \}$ and take an SGD step
3. Clip all Discriminator parameters to [-clip, clip]

#### Generator update

1. Sample z from a uniform distribution
2. Compute the loss $L_G = \frac{1}{|Minibatch|} \sum_i D (G(z^{(i)})) $ and take an SGD step

(Note: WGAN converges very slowly and needs a huge number of training iterations, so be patient!)

## Import and Config

```
import torch
import torch.nn.functional as F
import torchvision
import numpy as np
from collections import OrderedDict
from easydict import EasyDict
import argparse
import os
import matplotlib.pyplot as plt
import pandas as pd

from _main_base import *

#---
# config
#---
cfg = EasyDict()

# class
cfg.CLASS_LABEL = ['akahara', 'madara'] # list, dict('label' : '[B, G, R]')
cfg.CLASS_NUM = len(cfg.CLASS_LABEL)

# model
cfg.INPUT_Z_DIM = 128
cfg.INPUT_MODE = None
cfg.OUTPUT_HEIGHT = 32
cfg.OUTPUT_WIDTH = 32
cfg.OUTPUT_CHANNEL = 3
cfg.OUTPUT_MODE = 'RGB' # RGB, GRAY, EDGE, CLASS_LABEL
cfg.G_DIM = 64
cfg.D_DIM = 64
cfg.CHANNEL_AXIS = 1 # 1 ... [mb, c, h, w], 3 ... [mb, h, w, c]
cfg.GPU = False
cfg.DEVICE = torch.device('cuda' if cfg.GPU and torch.cuda.is_available() else 'cpu')

# train
cfg.TRAIN = EasyDict()
cfg.TRAIN.DISPAY_ITERATION_INTERVAL = 50
cfg.PREFIX = 'WGAN'
cfg.TRAIN.MODEL_G_SAVE_PATH = 'models/' + cfg.PREFIX + '_G_{}.pt'
cfg.TRAIN.MODEL_D_SAVE_PATH = 'models/' + cfg.PREFIX + '_D_{}.pt'
cfg.TRAIN.MODEL_SAVE_INTERVAL = 200
cfg.TRAIN.ITERATION = 5000
cfg.TRAIN.MINIBATCH = 32
cfg.TRAIN.OPTIMIZER_G = torch.optim.Adam
cfg.TRAIN.LEARNING_PARAMS_G = {'lr' : 0.0002, 'betas' : (0.5, 0.9)}
cfg.TRAIN.OPTIMIZER_D = torch.optim.Adam
cfg.TRAIN.LEARNING_PARAMS_D = {'lr' : 0.0002, 'betas' : (0.5, 0.9)}
cfg.TRAIN.LOSS_FUNCTION = None
cfg.TRAIN.DATA_PATH = './data/'
cfg.TRAIN.DATA_HORIZONTAL_FLIP = False # data augmentation : horizontal flip
cfg.TRAIN.DATA_VERTICAL_FLIP = False # data augmentation : vertical flip
cfg.TRAIN.DATA_ROTATION = False # data augmentation : rotation False, or integer
cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE = True
cfg.TRAIN.LEARNING_PROCESS_RESULT_INTERVAL = 500
cfg.TRAIN.LEARNING_PROCESS_RESULT_IMAGE_PATH = 'result/' + cfg.PREFIX + '_result_{}.jpg'
cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH = 'result/' + cfg.PREFIX + '_loss.txt'

#---
# WGAN config
#---
cfg.TRAIN.WGAN_CLIPS_VALUE = 0.01
cfg.TRAIN.WGAN_CRITIC_N = 5

# test
cfg.TEST = EasyDict()
cfg.TEST.MODEL_G_PATH = cfg.TRAIN.MODEL_G_SAVE_PATH.format('final')
cfg.TEST.DATA_PATH = './data'
cfg.TEST.MINIBATCH = 10
cfg.TEST.ITERATION = 2
cfg.TEST.RESULT_SAVE = False
cfg.TEST.RESULT_IMAGE_PATH = 'result/' + cfg.PREFIX + '_result_{}.jpg'

# random seed
torch.manual_seed(0)

# make model save directory
def make_dir(path):
    if '/' in path:
        model_save_dir = '/'.join(path.split('/')[:-1])
        os.makedirs(model_save_dir, exist_ok=True)

make_dir(cfg.TRAIN.MODEL_G_SAVE_PATH)
make_dir(cfg.TRAIN.MODEL_D_SAVE_PATH)
make_dir(cfg.TRAIN.LEARNING_PROCESS_RESULT_IMAGE_PATH)
make_dir(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH)
```

## Define Model

```
class Generator(torch.nn.Module):
    def __init__(self):
super(Generator, self).__init__() self.module = torch.nn.Sequential(OrderedDict({ 'G_layer_1' : torch.nn.ConvTranspose2d(cfg.INPUT_Z_DIM, cfg.G_DIM * 4, kernel_size=[cfg.OUTPUT_HEIGHT // 8, cfg.OUTPUT_WIDTH // 8], stride=1, bias=False), 'G_layer_1_bn' : torch.nn.BatchNorm2d(cfg.G_DIM * 4), 'G_layer_1_ReLU' : torch.nn.ReLU(), 'G_layer_2' : torch.nn.ConvTranspose2d(cfg.G_DIM * 4, cfg.G_DIM * 2, kernel_size=4, stride=2, padding=1, bias=False), 'G_layer_2_bn' : torch.nn.BatchNorm2d(cfg.G_DIM * 2), 'G_layer_2_ReLU' : torch.nn.ReLU(), 'G_layer_3' : torch.nn.ConvTranspose2d(cfg.G_DIM * 2, cfg.G_DIM, kernel_size=4, stride=2, padding=1, bias=False), 'G_layer_3_bn' : torch.nn.BatchNorm2d(cfg.G_DIM), 'G_layer_3_ReLU' : torch.nn.ReLU(), 'G_layer_out' : torch.nn.ConvTranspose2d(cfg.G_DIM, cfg.OUTPUT_CHANNEL, kernel_size=4, stride=2, padding=1, bias=False), 'G_layer_out_tanh' : torch.nn.Tanh() })) def forward(self, x): x = self.module(x) return x class Discriminator(torch.nn.Module): def __init__(self): super(Discriminator, self).__init__() self.module = torch.nn.Sequential(OrderedDict({ 'D_layer_1' : torch.nn.Conv2d(cfg.OUTPUT_CHANNEL, cfg.D_DIM, kernel_size=4, padding=1, stride=2, bias=False), 'D_layer_1_leakyReLU' : torch.nn.LeakyReLU(0.2, inplace=True), 'D_layer_2' : torch.nn.Conv2d(cfg.D_DIM, cfg.D_DIM * 2, kernel_size=4, padding=1, stride=2, bias=False), 'D_layer_2_bn' : torch.nn.BatchNorm2d(cfg.D_DIM * 2), 'D_layer_2_leakyReLU' : torch.nn.LeakyReLU(0.2, inplace=True), 'D_layer_3' : torch.nn.Conv2d(cfg.D_DIM * 2, cfg.D_DIM * 4, kernel_size=4, padding=1, stride=2, bias=False), 'G_layer_3_bn' : torch.nn.BatchNorm2d(cfg.D_DIM * 4), 'D_layer_3_leakyReLU' : torch.nn.LeakyReLU(0.2, inplace=True), 'D_layer_out' : torch.nn.Conv2d(cfg.D_DIM * 4, 1, kernel_size=[cfg.OUTPUT_HEIGHT // 8, cfg.OUTPUT_WIDTH // 8], padding=0, stride=1, bias=False), })) def forward(self, x): x = self.module(x) return x ``` ## Train ``` def result_show(G, z, path=None, save=False, show=False): if (save or show) is False: print('argument save >> {} and show >> {}, so skip') return Gz = G(z) Gz = Gz.detach().cpu().numpy() Gz = (Gz * 127.5 + 127.5).astype(np.uint8) Gz = Gz.reshape([-1, cfg.OUTPUT_CHANNEL, cfg.OUTPUT_HEIGHT, cfg.OUTPUT_WIDTH]) Gz = Gz.transpose(0, 2, 3, 1) for i in range(cfg.TEST.MINIBATCH): _G = Gz[i] plt.subplot(1, cfg.TEST.MINIBATCH, i + 1) plt.imshow(_G) plt.axis('off') if path is not None: plt.savefig(path) print('result was saved to >> {}'.format(path)) if show: plt.show() # train def train(): # model G = Generator().to(cfg.DEVICE) D = Discriminator().to(cfg.DEVICE) opt_G = cfg.TRAIN.OPTIMIZER_G(G.parameters(), **cfg.TRAIN.LEARNING_PARAMS_G) opt_D = cfg.TRAIN.OPTIMIZER_D(D.parameters(), **cfg.TRAIN.LEARNING_PARAMS_D) #path_dict = data_load(cfg) #paths = path_dict['paths'] #paths_gt = path_dict['paths_gt'] trainset = torchvision.datasets.CIFAR10(root=cfg.TRAIN.DATA_PATH , train=True, download=True, transform=None) train_Xs = trainset.data train_ys = trainset.targets # training mbi = 0 train_N = len(train_Xs) train_ind = np.arange(train_N) np.random.seed(0) np.random.shuffle(train_ind) list_iter = [] list_loss_G = [] list_loss_D = [] list_loss_D_real = [] list_loss_D_fake = [] list_loss_WDistance = [] one = torch.FloatTensor([1]) minus_one = one * -1 print('training start') progres_bar = '' for i in range(cfg.TRAIN.ITERATION): if mbi + cfg.TRAIN.MINIBATCH > train_N: mb_ind = train_ind[mbi:] np.random.shuffle(train_ind) mb_ind = np.hstack((mb_ind, train_ind[ : (cfg.TRAIN.MINIBATCH - (train_N - mbi))])) mbi = 
cfg.TRAIN.MINIBATCH - (train_N - mbi) else: mb_ind = train_ind[mbi : mbi + cfg.TRAIN.MINIBATCH] mbi += cfg.TRAIN.MINIBATCH # update D for _ in range(cfg.TRAIN.WGAN_CRITIC_N): opt_D.zero_grad() # parameter clipping > [-clip_value, clip_value] for param in D.parameters(): param.data.clamp_(- cfg.TRAIN.WGAN_CLIPS_VALUE, cfg.TRAIN.WGAN_CLIPS_VALUE) # sample X Xs = torch.tensor(preprocess(train_Xs[mb_ind], cfg, cfg.OUTPUT_MODE), dtype=torch.float).to(cfg.DEVICE) # sample x z = np.random.uniform(-1, 1, size=(cfg.TRAIN.MINIBATCH, cfg.INPUT_Z_DIM, 1, 1)) z = torch.tensor(z, dtype=torch.float).to(cfg.DEVICE) # forward Gz = G(z) loss_D_fake = D(Gz).mean(0).view(1) loss_D_real = D(Xs).mean(0).view(1) loss_D = loss_D_fake - loss_D_real loss_D_real.backward(one) loss_D_fake.backward(minus_one) opt_D.step() Wasserstein_distance = loss_D_real - loss_D_fake # update G opt_G.zero_grad() z = np.random.uniform(-1, 1, size=(cfg.TRAIN.MINIBATCH, cfg.INPUT_Z_DIM, 1, 1)) z = torch.tensor(z, dtype=torch.float).to(cfg.DEVICE) loss_G = D(G(z)).mean(0).view(1) loss_G.backward(one) opt_G.step() progres_bar += '|' print('\r' + progres_bar, end='') _loss_G = loss_G.item() _loss_D = loss_D.item() _loss_D_real = loss_D_real.item() _loss_D_fake = loss_D_fake.item() _Wasserstein_distance = Wasserstein_distance.item() if (i + 1) % 10 == 0: progres_bar += str(i + 1) print('\r' + progres_bar, end='') # save process result if cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE: list_iter.append(i + 1) list_loss_G.append(_loss_G) list_loss_D.append(_loss_D) list_loss_D_real.append(_loss_D_real) list_loss_D_fake.append(_loss_D_fake) list_loss_WDistance.append(_Wasserstein_distance) # display training state if (i + 1) % cfg.TRAIN.DISPAY_ITERATION_INTERVAL == 0: print('\r' + ' ' * len(progres_bar), end='') print('\rIter:{}, LossG (fake:{:.4f}), LossD:{:.4f} (real:{:.4f}, fake:{:.4f}), WDistance:{:.4f}'.format( i + 1, _loss_G, _loss_D, _loss_D_real, _loss_D_fake, _Wasserstein_distance)) progres_bar = '' # save parameters if (cfg.TRAIN.MODEL_SAVE_INTERVAL != False) and ((i + 1) % cfg.TRAIN.MODEL_SAVE_INTERVAL == 0): G_save_path = cfg.TRAIN.MODEL_G_SAVE_PATH.format('iter{}'.format(i + 1)) D_save_path = cfg.TRAIN.MODEL_D_SAVE_PATH.format('iter{}'.format(i + 1)) torch.save(G.state_dict(), G_save_path) torch.save(D.state_dict(), D_save_path) print('save G >> {}, D >> {}'.format(G_save_path, D_save_path)) # save process result if cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE and ((i + 1) % cfg.TRAIN.LEARNING_PROCESS_RESULT_INTERVAL == 0): result_show( G, z, cfg.TRAIN.LEARNING_PROCESS_RESULT_IMAGE_PATH.format('iter' + str(i + 1)), save=cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE, show=True) G_save_path = cfg.TRAIN.MODEL_G_SAVE_PATH.format('final') D_save_path = cfg.TRAIN.MODEL_D_SAVE_PATH.format('final') torch.save(G.state_dict(), G_save_path) torch.save(D.state_dict(), D_save_path) print('final paramters were saved to G >> {}, D >> {}'.format(G_save_path, D_save_path)) if cfg.TRAIN.LEARNING_PROCESS_RESULT_SAVE: f = open(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH, 'w') df = pd.DataFrame({'iteration' : list_iter, 'loss_G' : list_loss_G, 'loss_D' : list_loss_D, 'loss_D_real' : list_loss_D_real, 'loss_D_fake' : list_loss_D_fake, 'Wasserstein_Distance' : list_loss_WDistance}) df.to_csv(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH, index=False) print('loss was saved to >> {}'.format(cfg.TRAIN.LEARNING_PROCESS_RESULT_LOSS_PATH)) train() ``` ## Test ``` # test def test(): print('-' * 20) print('test function') print('-' * 20) G = Generator().to(cfg.DEVICE) 
G.load_state_dict(torch.load(cfg.TEST.MODEL_G_PATH, map_location=torch.device(cfg.DEVICE))) G.eval() np.random.seed(0) for i in range(cfg.TEST.ITERATION): z = np.random.uniform(-1, 1, size=(cfg.TEST.MINIBATCH, cfg.INPUT_Z_DIM, 1, 1)) z = torch.tensor(z, dtype=torch.float).to(cfg.DEVICE) result_show(G, z, cfg.TEST.RESULT_IMAGE_PATH.format(i + 1), save=cfg.TEST.RESULT_SAVE, show=True) test() def arg_parse(): parser = argparse.ArgumentParser(description='CNN implemented with Keras') parser.add_argument('--train', dest='train', action='store_true') parser.add_argument('--test', dest='test', action='store_true') args = parser.parse_args() return args # main if __name__ == '__main__': args = arg_parse() if args.train: train() if args.test: test() if not (args.train or args.test): print("please select train or test flag") print("train: python main.py --train") print("test: python main.py --test") print("both: python main.py --train --test") ```
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/contrib/fairness/fairlearn-azureml-mitigation.png) # Unfairness Mitigation with Fairlearn and Azure Machine Learning **This notebook shows how to upload results from Fairlearn's GridSearch mitigation algorithm into a dashboard in Azure Machine Learning Studio** ## Table of Contents 1. [Introduction](#Introduction) 1. [Loading the Data](#LoadingData) 1. [Training an Unmitigated Model](#UnmitigatedModel) 1. [Mitigation with GridSearch](#Mitigation) 1. [Uploading a Fairness Dashboard to Azure](#AzureUpload) 1. Registering models 1. Computing Fairness Metrics 1. Uploading to Azure 1. [Conclusion](#Conclusion) <a id="Introduction"></a> ## Introduction This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/). We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio. ### Setup To use this notebook, an Azure Machine Learning workspace is required. Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required. This notebook also requires the following packages: * `azureml-contrib-fairness` * `fairlearn==0.4.6` (v0.5.0 will work with minor modifications) * `joblib` * `shap` Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell: ``` # !pip install --upgrade scikit-learn>=0.22.1 ``` Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook. <a id="LoadingData"></a> ## Loading the Data We use the well-known `adult` census dataset, which we will fetch from the OpenML website. 
We start with a fairly unremarkable set of imports: ``` from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate from fairlearn.widget import FairlearnDashboard from sklearn.compose import ColumnTransformer from sklearn.datasets import fetch_openml from sklearn.impute import SimpleImputer from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import make_column_selector as selector from sklearn.pipeline import Pipeline import pandas as pd ``` We can now load and inspect the data: ``` from fairness_nb_utils import fetch_openml_with_retries data = fetch_openml_with_retries(data_id=1590) # Extract the items we want X_raw = data.data y = (data.target == '>50K') * 1 X_raw["race"].value_counts().to_dict() ``` We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here: ``` A = X_raw[['sex','race']] X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1) ``` We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset. ``` (X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split( X_raw, y, A, test_size=0.3, random_state=12345, stratify=y ) # Ensure indices are aligned between X, y and A, # after all the slicing and splitting of DataFrames # and Series X_train = X_train.reset_index(drop=True) X_test = X_test.reset_index(drop=True) y_train = y_train.reset_index(drop=True) y_test = y_test.reset_index(drop=True) A_train = A_train.reset_index(drop=True) A_test = A_test.reset_index(drop=True) ``` We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values). 
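Before imputing, it can be worth checking whether missingness itself is distributed evenly across the sensitive groups, since group-dependent missingness is one way imputation can introduce bias. A minimal sketch (the variable name `missing_rate_by_sex` is illustrative and not part of the original notebook):

```
# Fraction of missing values per feature, split by sex.
# X_train and A_train share an aligned, reset index at this point.
missing_rate_by_sex = X_train.isna().groupby(A_train['sex']).mean()
print(missing_rate_by_sex.T)
```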
For this preprocessing, we make use of `Pipeline` objects from `sklearn`: ``` numeric_transformer = Pipeline( steps=[ ("impute", SimpleImputer()), ("scaler", StandardScaler()), ] ) categorical_transformer = Pipeline( [ ("impute", SimpleImputer(strategy="most_frequent")), ("ohe", OneHotEncoder(handle_unknown="ignore", sparse=False)), ] ) preprocessor = ColumnTransformer( transformers=[ ("num", numeric_transformer, selector(dtype_exclude="category")), ("cat", categorical_transformer, selector(dtype_include="category")), ] ) ``` Now, the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data: ``` X_train = preprocessor.fit_transform(X_train) X_test = preprocessor.transform(X_test) ``` <a id="UnmitigatedModel"></a> ## Training an Unmitigated Model So we have a point of comparison, we first train a model (specifically, logistic regression from scikit-learn) on the raw data, without applying any mitigation algorithm: ``` unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True) unmitigated_predictor.fit(X_train, y_train) ``` We can view this model in the fairness dashboard, and see the disparities which appear: ``` FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'], y_true=y_test, y_pred={"unmitigated": unmitigated_predictor.predict(X_test)}) ``` Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunitiy - males are offered loans at three times the rate of females. Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. <a id="Mitigation"></a> ## Mitigation with GridSearch The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox - for this simple example, we shall use the logistic regression estimator from scikit-learn. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each. For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). *We are using this metric for the sake of simplicity* in this example; the appropriate fairness metric can only be selected after *careful examination of the broader context* in which the model is to be used. ``` sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True), constraints=DemographicParity(), grid_size=71) ``` With our estimator created, we can fit it to the data. After `fit()` completes, we extract the full set of predictors from the `GridSearch` object. 
The following cell trains a many copies of the underlying estimator, and may take a minute or two to run: ``` sweep.fit(X_train, y_train, sensitive_features=A_train.sex) # For Fairlearn v0.5.0, need sweep.predictors_ predictors = sweep._predictors ``` We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the protected attribute; other potentially protected attributes will *not* be mitigated). In general, one might not want to do this, since there may be other considerations beyond the strict optimisation of error and disparity (of the given protected attribute). ``` errors, disparities = [], [] for m in predictors: classifier = lambda X: m.predict(X) error = ErrorRate() error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex) disparity = DemographicParity() disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex) errors.append(error.gamma(classifier)[0]) disparities.append(disparity.gamma(classifier).max()) all_results = pd.DataFrame( {"predictor": predictors, "error": errors, "disparity": disparities}) dominant_models_dict = dict() base_name_format = "census_gs_model_{0}" row_id = 0 for row in all_results.itertuples(): model_name = base_name_format.format(row_id) errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"]<=row.disparity] if row.error <= errors_for_lower_or_eq_disparity.min(): dominant_models_dict[model_name] = row.predictor row_id = row_id + 1 ``` We can construct predictions for the dominant models (we include the unmitigated predictor as well, for comparison): ``` predictions_dominant = {"census_unmitigated": unmitigated_predictor.predict(X_test)} models_dominant = {"census_unmitigated": unmitigated_predictor} for name, predictor in dominant_models_dict.items(): value = predictor.predict(X_test) predictions_dominant[name] = value models_dominant[name] = predictor ``` These predictions may then be viewed in the fairness dashboard. We include the race column from the dataset, as an alternative basis for assessing the models. However, since we have not based our mitigation on it, the variation in the models with respect to race can be large. ``` FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'], y_true=y_test.tolist(), y_pred=predictions_dominant) ``` When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute "sex"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity. By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. 
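To complement the dashboard view, the same accuracy/disparity trade-off can also be tabulated programmatically. The sketch below uses plain scikit-learn and pandas rather than Fairlearn's metrics API, and the variable names are illustrative only:

```
from sklearn.metrics import accuracy_score

rows = []
for name, y_pred in predictions_dominant.items():
    # Demographic parity difference: gap between the selection rates of the sexes
    selection_rates = pd.Series(y_pred, index=A_test.index).groupby(A_test.sex).mean()
    rows.append({'model': name,
                 'accuracy': accuracy_score(y_test, y_pred),
                 'dp_difference': selection_rates.max() - selection_rates.min()})

trade_off = pd.DataFrame(rows).sort_values('dp_difference')
print(trade_off)
```

Sorting by `dp_difference` makes it easy to scan the Pareto front numerically before picking a model.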
In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints. <a id="AzureUpload"></a> ## Uploading a Fairness Dashboard to Azure Uploading a fairness dashboard to Azure is a two stage process. The `FairlearnDashboard` invoked in the previous section relies on the underlying Python kernel to compute metrics on demand. This is obviously not available when the fairness dashboard is rendered in AzureML Studio. By default, the dashboard in Azure Machine Learning Studio also requires the models to be registered. The required stages are therefore: 1. Register the dominant models 1. Precompute all the required metrics 1. Upload to Azure Before that, we need to connect to Azure Machine Learning Studio: ``` from azureml.core import Workspace, Experiment, Model ws = Workspace.from_config() ws.get_details() ``` <a id="RegisterModels"></a> ### Registering Models The fairness dashboard is designed to integrate with registered models, so we need to do this for the models we want in the Studio portal. The assumption is that the names of the models specified in the dashboard dictionary correspond to the `id`s (i.e. `<name>:<version>` pairs) of registered models in the workspace. We register each of the models in the `models_dominant` dictionary into the workspace. For this, we have to save each model to a file, and then register that file: ``` import joblib import os os.makedirs('models', exist_ok=True) def register_model(name, model): print("Registering ", name) model_path = "models/{0}.pkl".format(name) joblib.dump(value=model, filename=model_path) registered_model = Model.register(model_path=model_path, model_name=name, workspace=ws) print("Registered ", registered_model.id) return registered_model.id model_name_id_mapping = dict() for name, model in models_dominant.items(): m_id = register_model(name, model) model_name_id_mapping[name] = m_id ``` Now, produce new predictions dictionaries, with the updated names: ``` predictions_dominant_ids = dict() for name, y_pred in predictions_dominant.items(): predictions_dominant_ids[model_name_id_mapping[name]] = y_pred ``` <a id="PrecomputeMetrics"></a> ### Precomputing Metrics We create a _dashboard dictionary_ using Fairlearn's `metrics` package. The `_create_group_metric_set` method has arguments similar to the Dashboard constructor, except that the sensitive features are passed as a dictionary (to ensure that names are available), and we must specify the type of prediction. 
Note that we use the `predictions_dominant_ids` dictionary we just created: ``` sf = { 'sex': A_test.sex, 'race': A_test.race } from fairlearn.metrics._group_metric_set import _create_group_metric_set dash_dict = _create_group_metric_set(y_true=y_test, predictions=predictions_dominant_ids, sensitive_features=sf, prediction_type='binary_classification') ``` <a id="DashboardUpload"></a> ### Uploading the Dashboard Now, we import our `contrib` package which contains the routine to perform the upload: ``` from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id ``` Now we can create an Experiment, then a Run, and upload our dashboard to it: ``` exp = Experiment(ws, "Test_Fairlearn_GridSearch_Census_Demo") print(exp) run = exp.start_logging() try: dashboard_title = "Dominant Models from GridSearch" upload_id = upload_dashboard_dictionary(run, dash_dict, dashboard_name=dashboard_title) print("\nUploaded to id: {0}\n".format(upload_id)) downloaded_dict = download_dashboard_by_upload_id(run, upload_id) finally: run.complete() ``` The dashboard can be viewed in the Run Details page. Finally, we can verify that the dashboard dictionary which we downloaded matches our upload: ``` print(dash_dict == downloaded_dict) ``` <a id="Conclusion"></a> ## Conclusion In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.github.io/) provides that discussion
``` %load_ext autoreload %autoreload 2 import os import datetime import numpy as np import scipy import pandas as pd import torch from torch import nn import criscas from criscas.utilities import create_directory, get_device, report_available_cuda_devices from criscas.predict_model import * base_dir = os.path.abspath('..') base_dir ``` ### Read sample data ``` seq_df = pd.read_csv(os.path.join(base_dir, 'sample_data', 'abemax_sampledata.csv'), header=0) seq_df ``` The models expect sequences (i.e. target sites) to be wrapped in a `pandas.DataFrame` with a header that includes `ID` of the sequence and `seq` columns. The sequences should be of length 20 (i.e. 20 bases) and represent the protospacer target site. ``` # create a directory where we dump the predictions of the models csv_dir = create_directory(os.path.join(base_dir, 'sample_data', 'predictions')) ``` ### Specify device (i.e. CPU or GPU) to run the models on Specify device to run the model on. The models can run on `GPU` or `CPU`. We can instantiate a device by running `get_device(to_gpu,gpu_index)` function. - To run on GPU we pass `to_gpu = True` and specify which card to use if we have multiple cards `gpu_index=int` (i.e. in case we have multiple GPU cards we specify the index counting from 0). - If there is no GPU installed, the function will return a `CPU` device. We can get a detailed information on the GPU cards installed on the compute node by calling `report_available_cuda_devices` function. ``` report_available_cuda_devices() # instantiate a device using the only one available :P device = get_device(True, 0) device ``` ### Create a BE-DICT model by sepcifying the target base editor We start `BE-DICT` model by calling `BEDICT_CriscasModel(base_editor, device)` where we specify which base editor to use (i.e. `ABEmax`, `BE4max`, `ABE8e`, `Target-AID`) and the `device` we create earlier to run on. ``` base_editor = 'ABEmax' bedict = BEDICT_CriscasModel(base_editor, device) ``` We generate predictions by calling `predict_from_dataframe(seq_df)` where we pass the data frame wrapping the target sequences. The function returns two objects: - `pred_w_attn_runs_df` which is a data frame that contains predictions per target base and the attentions scores across all positions. - `proc_df` which is a data frame that represents the processed sequence data frame we passed (i.e. `seq_df`) ``` pred_w_attn_runs_df, proc_df = bedict.predict_from_dataframe(seq_df) ``` `pred_w_attn_runs_df` contains predictions from 5 trained models for `ABEmax` base editor (we have 5 runs trained per base editor). For more info, see our [paper](https://www.biorxiv.org/content/10.1101/2020.07.05.186544v1) on biorxiv. Target positions in the sequence reported in `base_pos` column in `pred_w_attn_runs_df` uses 0-based indexing (i.e. 0-19) ``` pred_w_attn_runs_df proc_df ``` Given that we have 5 predictions per sequence, we can further reduce to one prediction by either `averaging` across all models, or taking the `median` or `max` prediction based on the probability of editing scores. For this we use `select_prediction(pred_w_attn_runs_df, pred_option)` where `pred_w_attn_runs_df` is the data frame containing predictions from 5 models for each sequence. `pred_option` can be assume one of {`mean`, `median`, `max`}. ``` pred_option = 'mean' pred_w_attn_df = bedict.select_prediction(pred_w_attn_runs_df, pred_option) pred_w_attn_df ``` We can dump the prediction results on a specified directory on disk. 
We will dump the predictions with all 5 runs `pred_w_attn_runs_df` and the one average across runs `pred_w_attn_df`. Under `sample_data` directory we will have the following tree: <pre> sample_data └── predictions ├── predictions_allruns.csv └── predictions_predoption_mean.csv </pre> ``` pred_w_attn_runs_df.to_csv(os.path.join(csv_dir, f'predictions_allruns.csv')) pred_w_attn_df.to_csv(os.path.join(csv_dir, f'predictions_predoption_{pred_option}.csv')) ``` ### Generate attention plots We can generate attention plots for the prediction of each target base in the sequence using `highlight_attn_per_seq` method that takes the following arguments: - `pred_w_attn_runs_df`: data frame that contains model's predictions (5 runs) for each target base of each sequence (see above). - `proc_df`: data frame that represents the processed sequence data frame we passed (i.e. seq_df) - `seqid_pos_map`: dictionary `{seq_id:list of positions}` where `seq_id` is the ID of the target sequence, and list of positions that we want to generate attention plots for. Users can specify a `position from 1 to 20` (i.e. length of protospacer sequence) - `pred_option`: selection option for aggregating across 5 models' predictions. That is we can average the predictions across 5 runs, or take `max`, `median`, `min` or `None` (i.e. keep all 5 runs) - `apply_attnscore_filter`: boolean (`True` or `False`) to further apply filtering on the generated attention scores. This filtering allow to plot only predictions where the associated attention scores have a maximum that is >= 3 times the base attention score value <=> (3 * 1/20) - `fig_dir`: directory where to dump the generated plots or `None` (to return the plots inline) ``` # create a dictionary to specify target sequence and the position we want attention plot for # we are targeting position 5 in the sequence seqid_pos_map = {'CTRL_HEKsiteNO1':[5], 'CTRL_HEKsiteNO2':[5]} pred_option = 'mean' apply_attn_filter = False bedict.highlight_attn_per_seq(pred_w_attn_runs_df, proc_df, seqid_pos_map=seqid_pos_map, pred_option=pred_option, apply_attnscore_filter=apply_attn_filter, fig_dir=None) ``` We can save the plots on disk without returning them by specifing `fig_dir` ``` # create a dictionary to specify target sequence and the position I want attention plot for # we are targeting position 5 in the sequence seqid_pos_map = {'CTRL_HEKsiteNO1':[5], 'CTRL_HEKsiteNO2':[5]} pred_option = 'mean' apply_attn_filter = False fig_dir = create_directory(os.path.join(base_dir, 'sample_data', 'fig_dir')) bedict.highlight_attn_per_seq(pred_w_attn_runs_df, proc_df, seqid_pos_map=seqid_pos_map, pred_option=pred_option, apply_attnscore_filter=apply_attn_filter, fig_dir=create_directory(os.path.join(fig_dir, pred_option))) ``` We will generate the following files: <pre> sample_data ├── abemax_sampledata.csv ├── fig_dir │   └── mean │   ├── ABEmax_seqattn_CTRL_HEKsiteNO1_basepos_5_predoption_mean.pdf │   └── ABEmax_seqattn_CTRL_HEKsiteNO2_basepos_5_predoption_mean.pdf └── predictions ├── predictions_allruns.csv └── predictions_predoption_mean.csv </pre> Similarly we can change the other arguments such as `pred_option` `apply_attnscore_filter` and so on to get different filtering options - We leave this as an exercise for the user/reader :D
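As a small sketch of that exercise (an addition, not from the original tutorial), the aggregated predictions for all three selection options can be written out in one loop, reusing the `select_prediction` call shown earlier:
```
# Aggregate the 5-run predictions with each supported option and dump to disk.
for option in ['mean', 'median', 'max']:
    agg_df = bedict.select_prediction(pred_w_attn_runs_df, option)
    out_path = os.path.join(csv_dir, f'predictions_predoption_{option}.csv')
    agg_df.to_csv(out_path)
    print('wrote', out_path)
```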
``` from extra import * import keras from keras.datasets import mnist from keras.models import Sequential, Model from keras import regularizers from keras.layers import Dense, Dropout, Conv2D, Input, GlobalAveragePooling2D, GlobalMaxPooling2D from keras.layers import Add, Concatenate, BatchNormalization import keras.backend as K from keras.optimizers import Adam import pandas as pd import numpy as np %matplotlib inline from matplotlib import pyplot as plt batch_size = 128 num_classes = 10 # input image dimensions HEIGHT, WIDTH = 28, 28 K.set_image_data_format('channels_first') keras.__version__ (x_train, y_train), (x_test, y_test) = mnist.load_data() print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') print('pixel range',x_train.min(), x_train.max()) ``` images are as pixel values, ranging from 0-255 ``` pd.DataFrame(y_train)[0].value_counts().plot(kind='bar') ## changes pixel range to 0 to 1 def normalize(images): images /= 255. return images x_train = normalize(x_train.astype(np.float32)) x_test = normalize(x_test.astype(np.float32)) x_train = x_train.reshape(x_train.shape[0], 1, WIDTH, HEIGHT) x_test = x_test.reshape(x_test.shape[0], 1, WIDTH, HEIGHT) y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) ``` now we have images that are normalized, and labels are one hot encoded ``` def show_images(rows, columns): fig, axes = plt.subplots(rows,columns) for rows in axes: for ax in rows: idx = np.random.randint(0, len(y_train)) ax.title.set_text(np.argmax(y_train[idx])) ax.imshow(x_train[idx][0], cmap='gray') ax.axis('off') plt.show() show_images(2,4) def build_model(): inp = Input((1, HEIGHT, WIDTH)) x = Conv2D(16, kernel_size=(7,7), strides=(2,2), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(inp) x = BatchNormalization()(x) y = Conv2D(16, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x) y = BatchNormalization()(y) y = Conv2D(16, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(y) y = BatchNormalization()(y) x = Add()([x,y]) x = Conv2D(32, kernel_size=(3,3), strides=(2,2), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x) x = BatchNormalization()(x) y = Conv2D(32, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x) y = BatchNormalization()(y) y = Conv2D(32, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(y) y = BatchNormalization()(y) x = Add()([x,y]) x = Conv2D(64, kernel_size=(3,3), strides=(2,2), padding='same', activation='relu', kernel_regularizer=regularizers.l2(0.002))(x) x = BatchNormalization()(x) x = Concatenate()([GlobalMaxPooling2D(data_format='channels_first')(x) , GlobalAveragePooling2D(data_format='channels_first')(x)]) x = Dropout(0.3)(x) out = Dense(10, activation='softmax')(x) return Model(inputs=inp, outputs=out) model = build_model() model.summary() model.compile(Adam(), loss='categorical_crossentropy', metrics=['acc']) K.get_value(model.optimizer.lr), K.get_value(model.optimizer.beta_1) lr_find(model, data=(x_train, y_train)) ## use generator if using generator insted of (x_train, y_train) and pass parameter, generator=True ``` selecting lr as 2e-3 ### high lr for demonstration of decay, from above graph anything b/w 0.002 to 
0.004 seems nice ``` recorder = RecorderCallback() clr = CyclicLRCallback(max_lr=0.4, cycles=4, decay=0.6, DEBUG_MODE=True, patience=1, auto_decay=True, pct_start=0.3, monitor='val_loss') K.get_value(model.optimizer.lr), K.get_value(model.optimizer.beta_1) model.fit(x_train, y_train, batch_size=128, epochs=4, callbacks=[recorder, clr], validation_data=(x_test, y_test)) K.get_value(model.optimizer.lr), K.get_value(model.optimizer.beta_1) recorder.plot_losses() recorder.plot_losses(log=True) #take log scale for loss recorder.plot_losses(clip=True) #clips loss between 2.5 and 97.5 precentile recorder.plot_losses(clip=True, log=True) recorder.plot_lr() recorder.plot_mom() ##plots momentum, beta_1 in adam family of optimizers ```
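`lr_find`, `RecorderCallback` and `CyclicLRCallback` come from the local `extra` module, which is not shown here. For readers without that helper, the sketch below is a minimal, self-contained triangular cyclical learning-rate schedule written against the plain Keras callback API; the bounds and step size are illustrative choices, not the values used in the run above:
```
import numpy as np
import keras.backend as K
from keras.callbacks import Callback

class SimpleCyclicLR(Callback):
    """Triangular cyclical learning rate, updated every batch."""
    def __init__(self, base_lr=1e-4, max_lr=2e-3, step_size=2000):
        super().__init__()
        self.base_lr, self.max_lr, self.step_size = base_lr, max_lr, step_size
        self.iteration = 0

    def _lr(self):
        # Triangular schedule: rise from base_lr to max_lr and back, every 2*step_size batches.
        cycle = np.floor(1 + self.iteration / (2 * self.step_size))
        x = np.abs(self.iteration / self.step_size - 2 * cycle + 1)
        return self.base_lr + (self.max_lr - self.base_lr) * max(0.0, 1.0 - x)

    def on_train_begin(self, logs=None):
        K.set_value(self.model.optimizer.lr, self.base_lr)

    def on_batch_end(self, batch, logs=None):
        self.iteration += 1
        K.set_value(self.model.optimizer.lr, self._lr())

# Usage (sketch):
# model.fit(x_train, y_train, batch_size=128, epochs=4,
#           callbacks=[SimpleCyclicLR()], validation_data=(x_test, y_test))
```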
``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline # a) import sse Lx, Ly = 8, 8 n_updates_measure = 10000 # b) spins, op_string, bonds = sse.init_SSE_square(Lx, Ly) for beta in [0.1, 1., 64.]: op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10) ns = sse.measure(spins, op_string, bonds, beta, n_updates_measure) plt.figure() plt.hist(ns, bins=np.arange(len(op_string)+1)) plt.axvline(len(op_string), color='r', ) # mark the length of the operator string plt.xlim(0, len(op_string)*1.1) plt.title("T=1./{beta:.1f}, len of op_string={l:d}".format(beta=beta, l=len(op_string))) plt.xlabel("number of operators $n$") ``` The red bar indicates the size of the operator string after thermalization. These histograms justify that we can fix the length of the operator string `M` (called $n*$ in the lecture notes). Since `M` is automatically chosen as large as needed, we effectively take into account *all* relevant terms of the full series $\sum_{n=0}^\infty$ in the expansion, even if our numerical simulations only use a finite `M`. ``` # c) Ts = np.linspace(2., 0., 20, endpoint=False) betas = 1./Ts Ls = [4, 8, 16] Es_Eerrs = [] for L in Ls: print("="*80) print("L =", L) E = sse.run_simulation(L, L, betas) Es_Eerrs.append(E) plt.figure() for E, L in zip(Es_Eerrs, Ls): plt.errorbar(Ts, E[:, 0], yerr=E[:, 1], label="L={L:d}".format(L=L)) plt.legend() plt.xlim(0, np.max(1./betas)) plt.xlabel("temperature $T$") plt.ylabel("energy $E$ per site") ``` # specific heat ``` # d) def run_simulation(Lx, Ly, betas=[1.], n_updates_measure=10000, n_bins=10): """A full simulation: initialize, thermalize and measure for various betas.""" spins, op_string, bonds = sse.init_SSE_square(Lx, Ly) n_sites = len(spins) n_bonds = len(bonds) Es_Eerrs = [] Cs_Cerrs = [] for beta in betas: print("beta = {beta:.3f}".format(beta=beta), flush=True) op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10) Es = [] Cs = [] for _ in range(n_bins): ns = sse.measure(spins, op_string, bonds, beta, n_updates_measure) # energy per site n_mean = np.mean(ns) E = (-n_mean/beta + 0.25*n_bonds) / n_sites Es.append(E) Cv = (np.mean(ns**2) - n_mean - n_mean**2)/ n_sites Cs.append(Cv) E, Eerr = np.mean(Es), np.std(Es)/np.sqrt(n_bins) Es_Eerrs.append((E, Eerr)) C, Cerr = np.mean(Cs), np.std(Cs)/np.sqrt(n_bins) Cs_Cerrs.append((C, Cerr)) return np.array(Es_Eerrs), np.array(Cs_Cerrs) Es_Errs, Cs_Cerrs = run_simulation(8, 8, betas) plt.figure() plt.errorbar(Ts, Cs_Cerrs[:, 0], yerr=Cs_Cerrs[:, 1], label="L={L:d}".format(L=L)) plt.xlim(0, np.max(1./betas)) plt.xlabel("temperature $T$") plt.ylabel("Specific heat $C_v$ per site") ``` ## Interpretation We see the behaviour expected from the previous plot considering $C_v= \partial_T <E> $. However, as $T \rightarrow 0$ or $\beta \rightarrow \infty$ the error of $C_v$ blows up! Looking at the formula $C_v = <n^2> - <n>^2 - <n>$, we see that it consist of larger terms which should cancel to zero. Statistical noise is of the order of the large terms $<n^2>$, hence the relative error in $C_v$ explodes. This is the essential problem of the infamous "sign problem" of quantum monte carlo (QMC): in many models (e.g. in our case of the SSE if we don't have a bipartite lattice) one encounters negative weights for some configurations in the partition function, and a cancelation of different terms. Similar as for the $C_v$ at low temperatures, this often leads to error bars which are often exponentially large in the system size. 
Obviously, phases from a "time evolution" lead to a similar problem. There is no generic solution to circumvent the sign problem (it's NP hard!), but for many specific models, there were actually sign-problem free solutions found. On the other hand, whenever QMC has no sign problem, it is for sure one of the most powerful numerical methods we have. For example, it allows beautiful finite size scaling collapses to extract critical exponents etc. for quantum phase transitions even in 2D or 3D. # Staggered Magnetization ``` # e) def get_staggering(Lx, Ly): stag = np.zeros(Lx*Ly, np.intp) for x in range(Lx): for y in range(Ly): s = sse.site(x, y, Lx, Ly) stag[s] = (-1)**(x+y) return stag def staggered_magnetization(spins, stag): return 0.5*np.sum(spins * stag) def measure(spins, op_string, bonds, stag, beta, n_updates_measure): """Perform a lot of updates with measurements.""" ns = [] ms = [] for _ in range(n_updates_measure): n = sse.diagonal_update(spins, op_string, bonds, beta) m = staggered_magnetization(spins, stag) sse.loop_update(spins, op_string, bonds) ns.append(n) ms.append(m) return np.array(ns), np.array(ms) def run_simulation(Lx, Ly, betas=[1.], n_updates_measure=10000, n_bins=10): """A full simulation: initialize, thermalize and measure for various betas.""" spins, op_string, bonds = sse.init_SSE_square(Lx, Ly) stag = get_staggering(Lx, Ly) n_sites = len(spins) n_bonds = len(bonds) Es_Eerrs = [] Cs_Cerrs = [] Ms_Merrs = [] for beta in betas: print("beta = {beta:.3f}".format(beta=beta), flush=True) op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10) Es = [] Cs = [] Ms = [] for _ in range(n_bins): ns, ms = measure(spins, op_string, bonds, stag, beta, n_updates_measure) # energy per site n_mean = np.mean(ns) E = (-n_mean/beta + 0.25*n_bonds) / n_sites Es.append(E) Cv = (np.mean(ns**2) - n_mean - n_mean**2)/ n_sites Cs.append(Cv) Ms.append(np.mean(np.abs(ms))/n_sites) # note that we need the absolute value here! 
# there is a symmetry of flipping all spins which ensures that <Ms> = 0 E, Eerr = np.mean(Es), np.std(Es)/np.sqrt(n_bins) Es_Eerrs.append((E, Eerr)) C, Cerr = np.mean(Cs), np.std(Cs)/np.sqrt(n_bins) Cs_Cerrs.append((C, Cerr)) M, Merr = np.mean(Ms), np.std(Ms)/np.sqrt(n_bins) Ms_Merrs.append((M, Merr)) return np.array(Es_Eerrs), np.array(Cs_Cerrs), np.array(Ms_Merrs) # f) Ls = [4, 8, 16] Ms_Merrs = [] for L in Ls: print("="*80) print("L =", L) E, C, M = run_simulation(L, L, betas) Ms_Merrs.append(M) plt.figure() for M, L in zip(Ms_Merrs, Ls): plt.errorbar(Ts, M[:, 0], yerr=M[:, 1], label="L={L:d}".format(L=L)) plt.legend() plt.xlim(0, np.max(1./betas)) plt.xlabel("temperature $T$") plt.ylabel("staggered magnetization $<|M_s|>$ per site") ``` # Honeycomb lattice ``` def site_honeycomb(x, y, u, Lx, Ly): """Defines a numbering of the sites, given positions x and y and u=0,1 within the unit cell""" return y * Lx * 2 + x*2 + u def init_SSE_honeycomb(Lx, Ly): """Initialize a starting configuration on a 2D square lattice.""" n_sites = Lx*Ly*2 # initialize spins randomly with numbers +1 or -1, but the average magnetization is 0 spins = 2*np.mod(np.random.permutation(n_sites), 2) - 1 op_string = -1 * np.ones(10, np.intp) # initialize with identities bonds = [] for x0 in range(Lx): for y0 in range(Ly): sA = site_honeycomb(x0, y0, 0, Lx, Ly) sB0 = site_honeycomb(x0, y0, 1, Lx, Ly) bonds.append([sA, sB0]) sB1 = site_honeycomb(np.mod(x0+1, Lx), np.mod(y0-1, Ly), 1, Lx, Ly) bonds.append([sA, sB1]) sB2 = site_honeycomb(x0, np.mod(y0-1, Ly), 1, Lx, Ly) bonds.append([sA, sB2]) bonds = np.array(bonds, dtype=np.intp) return spins, op_string, bonds def get_staggering_honeycomb(Lx, Ly): stag = np.zeros(Lx*Ly*2, np.intp) for x in range(Lx): for y in range(Ly): stag[site_honeycomb(x, y, 0, Lx, Ly)] = +1 stag[site_honeycomb(x, y, 1, Lx, Ly)] = -1 return stag def run_simulation_honeycomb(Lx, Ly, betas=[1.], n_updates_measure=10000, n_bins=10): """A full simulation: initialize, thermalize and measure for various betas.""" spins, op_string, bonds = init_SSE_honeycomb(Lx, Ly) stag = get_staggering_honeycomb(Lx, Ly) n_sites = len(spins) n_bonds = len(bonds) Es_Eerrs = [] Cs_Cerrs = [] Ms_Merrs = [] for beta in betas: print("beta = {beta:.3f}".format(beta=beta), flush=True) op_string = sse.thermalize(spins, op_string, bonds, beta, n_updates_measure//10) Es = [] Cs = [] Ms = [] for _ in range(n_bins): ns, ms = measure(spins, op_string, bonds, stag, beta, n_updates_measure) # energy per site n_mean = np.mean(ns) E = (-n_mean/beta + 0.25*n_bonds) / n_sites Es.append(E) Cv = (np.mean(ns**2) - n_mean - n_mean**2)/ n_sites Cs.append(Cv) Ms.append(np.mean(np.abs(ms))/n_sites) E, Eerr = np.mean(Es), np.std(Es)/np.sqrt(n_bins) Es_Eerrs.append((E, Eerr)) C, Cerr = np.mean(Cs), np.std(Cs)/np.sqrt(n_bins) Cs_Cerrs.append((C, Cerr)) M, Merr = np.mean(Ms), np.std(Ms)/np.sqrt(n_bins) Ms_Merrs.append((M, Merr)) return np.array(Es_Eerrs), np.array(Cs_Cerrs), np.array(Ms_Merrs) # just to check: plot the generated lattice L =4 spins, op_string, bonds = init_SSE_honeycomb(L, L) stag = get_staggering_honeycomb(L, L) n_sites = len(spins) n_bonds = len(bonds) # use non-trivial unit-vectors unit_vectors = np.array([[1, 0], [0.5, 0.5*np.sqrt(3)]]) dx = np.array([0., 0.5]) site_positions = np.zeros((n_sites, 2), np.float) for x in range(L): for y in range(L): pos = x* unit_vectors[0, :] + y*unit_vectors[1, :] s0 = site_honeycomb(x, y, 0, L, L) site_positions[s0, :] = pos s1 = site_honeycomb(x, y, 1, L, L) site_positions[s1, :] = pos + dx # plot 
the sites and bonds plt.figure() for bond in bonds: linestyle = '-' s0, s1 = bond if np.max(np.abs(site_positions[s0, :] - site_positions[s1, :])) > L/2: linestyle = ':' # plot bonds from the periodic boundary conditions dotted plt.plot(site_positions[bond, 0], site_positions[bond, 1], linestyle=linestyle, color='k') plt.plot(site_positions[:, 0], site_positions[:, 1], marker='o', linestyle='') plt.show() Ls = [4, 8, 16] result_honeycomb = [] for L in Ls: print("="*80) print("L =", L) res = run_simulation_honeycomb(L, L, betas) result_honeycomb.append(res) fig, axes = plt.subplots(nrows=3, figsize=(10, 15), sharex=True) for res, L in zip(result_honeycomb, Ls): for data, ax in zip(res, axes): ax.errorbar(Ts, data[:, 0], yerr=data[:, 1], label="L={L:d}".format(L=L)) for ax, ylabel in zip(axes, ["energy $E$", "specific heat $C_v$", "stag. magnetization $<|M_s|>$"]): ax.legend() ax.set_ylabel(ylabel) axes[0].set_xlim(0, np.max(1./betas)) axes[-1].set_xlabel("temperature $T$") ```
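As an optional follow-up (not part of the original exercise sheet), we can compare the low-temperature staggered magnetization of the square and honeycomb lattices as a function of system size. The sketch below assumes `Ms_Merrs` (square lattice) and `result_honeycomb` from the cells above are still available, and that the last entry of `betas` corresponds to the lowest simulated temperature:
```
import numpy as np
import matplotlib.pyplot as plt

Ls = [4, 8, 16]
inv_L = [1.0 / L for L in Ls]

# Staggered magnetization at the lowest simulated temperature (last beta entry).
ms_square = [M[-1, 0] for M in Ms_Merrs]
ms_square_err = [M[-1, 1] for M in Ms_Merrs]
ms_honey = [res[2][-1, 0] for res in result_honeycomb]
ms_honey_err = [res[2][-1, 1] for res in result_honeycomb]

plt.figure()
plt.errorbar(inv_L, ms_square, yerr=ms_square_err, marker='o', label='square lattice')
plt.errorbar(inv_L, ms_honey, yerr=ms_honey_err, marker='s', label='honeycomb lattice')
plt.xlabel('$1/L$')
plt.ylabel('$<|M_s|>$ per site at lowest $T$')
plt.legend()
plt.show()
```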
# Sklearn # Data Visualization ``` import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import scipy.stats as sts import seaborn as sns from contextlib import contextmanager sns.set() sns.set_style("whitegrid") color_palette = sns.color_palette('deep') + sns.color_palette('husl', 6) + sns.color_palette('bright') + sns.color_palette('pastel') %matplotlib inline sns.palplot(color_palette) def ndprint(a, precision=3): with np.printoptions(precision=precision, suppress=True): print(a) from sklearn import datasets, metrics, model_selection as mdsel ``` ### Loading the dataset ``` digits = datasets.load_digits() print(digits.DESCR) print('target:', digits.target[0]) print('features: \n', digits.data[0]) print('number of features:', len(digits.data[0])) ``` ## Visualizing the dataset samples ``` # won't work: Invalid dimensions for image data plt.imshow(digits.data[0]) digits.data[0].shape digits.data[0].reshape(8,8) digits.data[0].reshape(8,8).shape plt.imshow(digits.data[0].reshape(8,8)) digits.keys() digits.images[0] plt.imshow(digits.images[0]) plt.figure(figsize=(8, 8)) plt.subplot(2, 2, 1) plt.imshow(digits.images[0]) plt.subplot(2, 2, 2) plt.imshow(digits.images[0], cmap='hot') plt.subplot(2, 2, 3) plt.imshow(digits.images[0], cmap='gray') plt.subplot(2, 2, 4) plt.imshow(digits.images[0], cmap='gray', interpolation='sinc') plt.figure(figsize=(20, 8)) for plot_number, plot in enumerate(digits.images[:10]): plt.subplot(2, 5, plot_number + 1) plt.imshow(plot, cmap = 'gray') plt.title('digit: ' + str(digits.target[plot_number])) ``` ## Dimensionality reduction ``` from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import classification_report from collections import Counter data = digits.data[:1000] labels = digits.target[:1000] print(Counter(labels)) plt.figure(figsize = (10, 6)) plt.bar(Counter(labels).keys(), Counter(labels).values()) classifier = KNeighborsClassifier() classifier.fit(data, labels) print(classification_report(classifier.predict(data), labels)) ``` ### Random projection ``` from sklearn import random_projection projection = random_projection.SparseRandomProjection(n_components = 2, random_state = 0) data_2d_rp = projection.fit_transform(data) plt.figure(figsize=(10, 6)) plt.scatter(data_2d_rp[:, 0], data_2d_rp[:, 1], c = labels) classifier.fit(data_2d_rp, labels) print(classification_report(classifier.predict(data_2d_rp), labels)) ``` ### PCA ``` from sklearn.decomposition import PCA pca = PCA(n_components = 2, random_state = 0, svd_solver='randomized') data_2d_pca = pca.fit_transform(data) plt.figure(figsize = (10, 6)) plt.scatter(data_2d_pca[:, 0], data_2d_pca[:, 1], c = labels) classifier.fit(data_2d_pca, labels) print(classification_report(classifier.predict(data_2d_pca), labels)) ``` ### MDS ``` from sklearn import manifold mds = manifold.MDS(n_components = 2, n_init = 1, max_iter = 100) data_2d_mds = mds.fit_transform(data) plt.figure(figsize=(10, 6)) plt.scatter(data_2d_mds[:, 0], data_2d_mds[:, 1], c = labels) classifier.fit(data_2d_mds, labels) print(classification_report(classifier.predict(data_2d_mds), labels)) ``` ### t-SNE ``` tsne = manifold.TSNE(n_components = 2, init = 'pca', random_state = 0) data_2d_tsne = tsne.fit_transform(data) plt.figure(figsize = (10, 6)) plt.scatter(data_2d_tsne[:, 0], data_2d_tsne[:, 1], c = labels) classifier.fit(data_2d_tsne, labels) print(classification_report(classifier.predict(data_2d_tsne), labels)) ```
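Note that the classification reports above are computed on the same points the classifier was fitted on, which is optimistic. A small additional sketch (not in the original notebook) compares the embeddings with 5-fold cross-validated accuracy instead; keep in mind that MDS and t-SNE were themselves fitted on the full subset, so some optimism remains:
```
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

embeddings = {
    'random projection': data_2d_rp,
    'PCA': data_2d_pca,
    'MDS': data_2d_mds,
    't-SNE': data_2d_tsne,
    'original 64-d': data,
}
for name, X in embeddings.items():
    scores = cross_val_score(KNeighborsClassifier(), X, labels, cv=5)
    print(f'{name}: accuracy = {scores.mean():.3f} +/- {scores.std():.3f}')
```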
## Amazon SageMaker Feature Store: Client-side Encryption using AWS Encryption SDK This notebook demonstrates how client-side encryption with SageMaker Feature Store is done using the [AWS Encryption SDK library](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) to encrypt your data prior to ingesting it into your Online or Offline Feature Store. We first demonstrate how to encrypt your data using the AWS Encryption SDK library, and then show how to use [Amazon Athena](https://aws.amazon.com/athena/) to query for a subset of encrypted columns of features for model training. Currently, Feature Store supports encryption at rest and encryption in transit. With this notebook, we showcase an additional layer of security where your data is encrypted and then stored in your Feature Store. This notebook also covers the scenario where you want to query a subset of encrypted data using Amazon Athena for model training. This becomes particularly useful when you want to store encrypted data sets in a single Feature Store, and want to perform model training using only a subset of encrypted columns, forcing privacy over the remaining columns. If you are interested in server side encryption with Feature Store, see [Feature Store: Encrypt Data in your Online or Offline Feature Store using KMS key](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-featurestore/feature_store_kms_key_encryption.html). For more information on the AWS Encryption library, see [AWS Encryption SDK library](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html). For detailed information about Feature Store, see the [Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store.html). ### Overview 1. Set up 2. Load in and encrypt your data using AWS Encryption library (`aws-encryption-sdk`) 3. Create Feature Group and ingest your encrypted data into it 4. Query your encrypted data in your feature store using Amazon Athena 5. Decrypt the data you queried ### Prerequisites This notebook uses the Python SDK library for Feature Store, the AWS Encryption SDK library, `aws-encryption-sdk` and the `Python 3 (DataScience)` kernel. To use the`aws-encryption-sdk` library you will need to have an active KMS key that you created. If you do not have a KMS key, then you can create one by following the [KMS Policy Template](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-featurestore/feature_store_kms_key_encryption.html#KMS-Policy-Template) steps, or you can visit the [KMS section in the console](https://console.aws.amazon.com/kms/home) and follow the button prompts for creating a KMS key. This notebook works with SageMaker Studio, Jupyter, and JupyterLab. ### Library Dependencies: * `sagemaker>=2.0.0` * `numpy` * `pandas` * `aws-encryption-sdk` ### Data This notebook uses a synthetic data set that has the following features: `customer_id`, `ssn` (social security number), `credit_score`, `age`, and aims to simulate a relaxed data set that has some important features that would be needed during the credit card approval process. 
``` import sagemaker import pandas as pd import numpy as np pip install -q 'aws-encryption-sdk' ``` ### Set up ``` sagemaker_session = sagemaker.Session() s3_bucket_name = sagemaker_session.default_bucket() prefix = "sagemaker-featurestore-demo" role = sagemaker.get_execution_role() region = sagemaker_session.boto_region_name ``` Instantiate an encryption SDK client and provide your KMS ARN key to the `StrictAwsKmsMasterKeyProvider` object. This will be needed for data encryption and decryption by the AWS Encryption SDK library. You will need to substitute your KMS Key ARN for `kms_key`. ``` import aws_encryption_sdk from aws_encryption_sdk.identifiers import CommitmentPolicy client = aws_encryption_sdk.EncryptionSDKClient( commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT ) kms_key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider( key_ids=[kms_key] ## Add your KMS key here ) ``` Load in your data. ``` credit_card_data = pd.read_csv("data/credit_card_approval_synthetic.csv") credit_card_data.head() credit_card_data.dtypes ``` ### Client-Side Encryption Methods Below are some methods that use the Amazon Encryption SDK library for data encryption, and decryption. Note that the data type of the encryption is byte which we convert to an integer prior to storing it into Feature Store and do the reverse prior to decrypting. This is because Feature Store doesn't support byte format directly, thus why we convert the byte encryption to an integer. ``` def encrypt_data_frame(df, columns): """ Input: df: A pandas Dataframe columns: A list of column names. Encrypt the provided columns in df. This method assumes that column names provided in columns exist in df, and uses the AWS Encryption SDK library. """ for col in columns: buffer = [] for entry in np.array(df[col]): entry = str(entry) encrypted_entry, encryptor_header = client.encrypt( source=entry, key_provider=kms_key_provider ) buffer.append(encrypted_entry) df[col] = buffer def decrypt_data_frame(df, columns): """ Input: df: A pandas Dataframe columns: A list of column names. Decrypt the provided columns in df. This method assumes that column names provided in columns exist in df, and uses the AWS Encryption SDK library. """ for col in columns: buffer = [] for entry in np.array(df[col]): decrypted_entry, decryptor_header = client.decrypt( source=entry, key_provider=kms_key_provider ) buffer.append(float(decrypted_entry)) df[col] = np.array(buffer) def bytes_to_int(df, columns): """ Input: df: A pandas Dataframe columns: A list of column names. Convert the provided columns in df of type bytes to integers. This method assumes that column names provided in columns exist in df and that the columns passed in are of type bytes. """ for col in columns: for index, entry in enumerate(np.array(df[col])): df[col][index] = int.from_bytes(entry, "little") def int_to_bytes(df, columns): """ Input: df: A pandas Dataframe columns: A list of column names. Convert the provided columns in df of type integers to bytes. This method assumes that column names provided in columns exist in df and that the columns passed in are of type integers. """ for col in columns: buffer = [] for index, entry in enumerate(np.array(df[col])): current = int(df[col][index]) current_bit_length = current.bit_length() + 1 # include the sign bit, 1 current_byte_length = (current_bit_length + 7) // 8 buffer.append(current.to_bytes(current_byte_length, "little")) df[col] = pd.Series(buffer) ## Encrypt credit card data. 
Note that we treat `customer_id` as a primary key, and since it's encryption is unique we can encrypt it. encrypt_data_frame(credit_card_data, ["customer_id", "age", "SSN", "credit_score"]) credit_card_data print(credit_card_data.dtypes) ## Cast encryption of type bytes to an integer so it can be stored in Feature Store. bytes_to_int(credit_card_data, ["customer_id", "age", "SSN", "credit_score"]) print(credit_card_data.dtypes) credit_card_data def cast_object_to_string(data_frame): """ Input: data_frame: A pandas Dataframe Cast all columns of data_frame of type object to type string. """ for label in data_frame.columns: if data_frame.dtypes[label] == object: data_frame[label] = data_frame[label].astype("str").astype("string") return data_frame credit_card_data = cast_object_to_string(credit_card_data) print(credit_card_data.dtypes) credit_card_data ``` ### Create your Feature Group and Ingest your encrypted data into it Below we start by appending the `EventTime` feature to your data to timestamp entries, then we load the feature definition, and instantiate the Feature Group object. Then lastly we ingest the data into your feature store. ``` from time import gmtime, strftime, sleep credit_card_feature_group_name = "credit-card-feature-group-" + strftime("%d-%H-%M-%S", gmtime()) ``` Instantiate a FeatureGroup object for `credit_card_data`. ``` from sagemaker.feature_store.feature_group import FeatureGroup credit_card_feature_group = FeatureGroup( name=credit_card_feature_group_name, sagemaker_session=sagemaker_session ) import time current_time_sec = int(round(time.time())) ## Recall customer_id is encrypted therefore unique, and so it can be used as a record identifier. record_identifier_feature_name = "customer_id" ``` Append the `EventTime` feature to your data frame. This parameter is required, and time stamps each data point. ``` credit_card_data["EventTime"] = pd.Series( [current_time_sec] * len(credit_card_data), dtype="float64" ) credit_card_data.head() print(credit_card_data.dtypes) credit_card_feature_group.load_feature_definitions(data_frame=credit_card_data) credit_card_feature_group.create( s3_uri=f"s3://{s3_bucket_name}/{prefix}", record_identifier_name=record_identifier_feature_name, event_time_feature_name="EventTime", role_arn=role, enable_online_store=False, ) time.sleep(60) ``` Ingest your data into your feature group. ``` credit_card_feature_group.ingest(data_frame=credit_card_data, max_workers=3, wait=True) time.sleep(30) ``` Continually check your offline store until your data is available in it. ``` s3_client = sagemaker_session.boto_session.client("s3", region_name=region) credit_card_feature_group_s3_uri = ( credit_card_feature_group.describe() .get("OfflineStoreConfig") .get("S3StorageConfig") .get("ResolvedOutputS3Uri") ) credit_card_feature_group_s3_prefix = credit_card_feature_group_s3_uri.replace( f"s3://{s3_bucket_name}/", "" ) offline_store_contents = None while offline_store_contents is None: objects_in_bucket = s3_client.list_objects( Bucket=s3_bucket_name, Prefix=credit_card_feature_group_s3_prefix ) if "Contents" in objects_in_bucket and len(objects_in_bucket["Contents"]) > 1: offline_store_contents = objects_in_bucket["Contents"] else: print("Waiting for data in offline store...\n") time.sleep(60) print("Data available.") ``` ### Use Amazon Athena to Query your Encrypted Data in your Feature Store Using Amazon Athena, we query columns `customer_id`, `age`, and `credit_score` from your offline feature store where your encrypted data is. 
``` credit_card_query = credit_card_feature_group.athena_query() credit_card_table = credit_card_query.table_name query_credit_card_table = 'SELECT customer_id, age, credit_score FROM "' + credit_card_table + '"' print("Running " + query_credit_card_table) # Run the Athena query credit_card_query.run( query_string=query_credit_card_table, output_location="s3://" + s3_bucket_name + "/" + prefix + "/query_results/", ) time.sleep(60) credit_card_dataset = credit_card_query.as_dataframe() print(credit_card_dataset.dtypes) credit_card_dataset int_to_bytes(credit_card_dataset, ["customer_id", "age", "credit_score"]) credit_card_dataset decrypt_data_frame(credit_card_dataset, ["customer_id", "age", "credit_score"]) ``` In this notebook, we queried a subset of encrypted features. From here you can now train a model on this new dataset while remaining privacy over other columns e.g., `ssn`. ``` credit_card_dataset ``` ### Clean Up Resources Remove the Feature Group that was created. ``` credit_card_feature_group.delete() ``` ### Next Steps In this notebook we covered client-side encryption with Feature Store. If you are interested in understanding how server-side encryption is done with Feature Store, see [Feature Store: Encrypt Data in your Online or Offline Feature Store using KMS key](https://sagemaker-examples.readthedocs.io/en/latest/sagemaker-featurestore/feature_store_kms_key_encryption.html). For more information on the AWS Encryption library, see [AWS Encryption SDK library](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html). For detailed information about Feature Store, see the [Developer Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store.html).
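As an optional appendix (an addition to the original material), the encrypt → int → bytes → decrypt round trip performed by the helper functions above can be checked on a tiny synthetic frame before relying on it for real data. The column names and values below are purely illustrative, and the same `client` and KMS key provider configured earlier are assumed:
```
import numpy as np
import pandas as pd

# Tiny synthetic frame (hypothetical values, used only to exercise the helpers).
check_df = pd.DataFrame({"customer_id": [101, 102], "credit_score": [640, 720]})
original = check_df.copy()

# Mirror the notebook's own flow: encrypt -> int (as stored) -> bytes -> decrypt.
encrypt_data_frame(check_df, ["customer_id", "credit_score"])
bytes_to_int(check_df, ["customer_id", "credit_score"])
int_to_bytes(check_df, ["customer_id", "credit_score"])
decrypt_data_frame(check_df, ["customer_id", "credit_score"])

assert np.allclose(check_df.values.astype(float), original.values.astype(float))
print("Round trip OK")
```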
# Strings ### **Splitting strings** ``` 'a,b,c'.split(',') latitude = '37.24N' longitude = '-115.81W' 'Coordinates {0},{1}'.format(latitude,longitude) f'Coordinates {latitude},{longitude}' '{0},{1},{2}'.format(*('abc')) coord = {"latitude":latitude,"longitude":longitude} 'Coordinates {latitude},{longitude}'.format(**coord) ``` ### **Access argument' s attribute ** ``` class Point: def __init__(self,x,y): self.x,self.y = x,y def __str__(self): return 'Point({self.x},{self.y})'.format(self = self) def __repr__(self): return f'Point({self.x},{self.y})' test_point = Point(4,2) test_point str(Point(4,2)) ``` ### **Replace with %s , %r ** : ``` " repr() shows the quote {!r}, while str() doesn't:{!s} ".format('a1','a2') ``` ### **Aligning the text with width** : ``` '{:<30}'.format('left aligned') '{:>30}'.format('right aligned') '{:^30}'.format('centerd') '{:*^30}'.format('centerd') ``` ### **Replace with %x , %o and convert the value to different base ** : ``` "int:{0:d}, hex:{0:x}, oct:{0:o}, bin:{0:b}".format(42) '{:,}'.format(12345677) ``` ### **Percentage ** : ``` points = 19 total = 22 'Correct answers: {:.2%}'.format(points/total) import datetime as dt f"{dt.datetime.now():%Y-%m-%d}" f"{dt.datetime.now():%d_%m_%Y}" today = dt.datetime.today().strftime("%d_%m_%Y") today ``` ### **Splitting without parameters ** : ``` "this is a test".split() ``` ### **Concatenating and joining Strings ** : ``` 'do'*2 orig_string ='Hello' orig_string+',World' full_sentence = orig_string+',World' full_sentence ``` ### **Concatenating with join() , other basic funstions** : ``` strings = ['do','re','mi'] ', '.join(strings) 'z' not in 'abc' ord('a'), ord('#') chr(97) s = "foodbar" s[2:5] s[:4] + s[4:] s[:4] + s[4:] == s t=s[:] id(s) id(t) s is t s[0:6:2] s[5:0:-2] s = 'tomorrow is monday' reverse_s = s[::-1] reverse_s s.capitalize() s.upper() s.title() s.count('o') "foobar".startswith('foo') "foobar".endswith('ar') "foobar".endswith('oob',0,4) "foobar".endswith('oob',2,4) "My name is yaozeliang, I work at Societe Generale".find('yao') # If can't find the string, return -1 "My name is yaozeliang, I work at Societe Generale".find('gent') # Check a string if consists of alphanumeric characters "abc123".isalnum() "abc%123".isalnum() "abcABC".isalpha() "abcABC1".isalpha() '123'.isdigit() '123abc'.isdigit() 'abc'.islower() "This Is A Title".istitle() "This is a title".istitle() 'ABC'.isupper() 'ABC1%'.isupper() 'foo'.center(10) ' foo bar baz '.strip() ' foo bar baz '.lstrip() ' foo bar baz '.rstrip() "foo abc foo def fo ljk ".replace('foo','yao') 'www.realpython.com'.strip('w.moc') 'www.realpython.com'.strip('w.com') 'www.realpython.com'.strip('w.ncom') ``` ### **Convert between strings and lists** : ``` ', '.join(['foo','bar','baz','qux']) list('corge') ':'.join('corge') 'www.foo'.partition('.') 'foo@@bar@@baz'.partition('@@') 'foo@@bar@@baz'.rpartition('@@') 'foo.bar'.partition('@@') # By default , rsplit split a string with white space 'foo bar adf yao'.rsplit() 'foo.bar.adf.ert'.split('.') 'foo\nbar\nadfa\nlko'.splitlines() ```
# Extension Input Data Validation When using extensions in Fugue, you may add input data validation logic inside your code. However, there is standard way to add your validation logic. Here is a simple example: ``` from typing import List, Dict, Any # partitionby_has: a # schema: a:int,ct:int def get_count(df:List[Dict[str,Any]]) -> List[List[Any]]: return [[df[0]["a"],len(df)]] ``` The following commented-out code will fail, because of the hint `partitionby_has: a` requires the input dataframe to be prepartitioned by at least column `a`. ``` from fugue import FugueWorkflow with FugueWorkflow() as dag: df = dag.df([[0,1],[1,1],[0,2]], "a:int,b:int") # df.transform(get_count).show() # will fail because of no partition by df.partition(by=["a"]).transform(get_count).show() df.partition(by=["b","a"]).transform(get_count).show() # b,a is a super set of a ``` You can also have multiple rules, the following requires partition keys to contain `a`, and presort to be exactly `b asc` (`b == b asc`) ``` from typing import List, Dict, Any # partitionby_has: a # presort_is: b # schema: a:int,ct:int def get_count2(df:List[Dict[str,Any]]) -> List[List[Any]]: return [[df[0]["a"],len(df)]] from fugue import FugueWorkflow with FugueWorkflow() as dag: df = dag.df([[0,1],[1,1],[0,2]], "a:int,b:int") # df.partition(by=["a"]).transform(get_count).show() # will fail because of no presort df.partition(by=["a"], presort="b asc").transform(get_count).show() ``` ## Supported Validations The following are all supported validations. **Compile time validations** will happen when you construct the [FugueWorkflow](/dag.ipynb) while **runtime validations** happen during execution. Compile time validations are very useful to quickly identify logical issues. Runtime validations may take longer time to happen but they are still useful.On Fugue level, we are trying to move runtime validations to compile time as much as we can. Rule | Description | Compile Time | Order Matters | Examples :---|:---|:---|:---|:--- **partitionby_has** | assert the input dataframe is prepartitioned, and the partition keys contain these values | Yes | No | `partitionby_has: a,b` means the partition keys must contain `a` and `b` columns **partitionby_is** | assert the input dataframe is prepartitioned, and the partition keys are exactly these values | Yes | Yes | `partitionby_is: a,b` means the partition keys must contain and only contain `a` and `b` columns **presort_has** | assert the input dataframe is prepartitioned and [presorted](./partition.ipynb#Presort), and the presort keys contain these values | Yes | No | `presort_has: a,b desc` means the presort contains `a asc` and `b desc` (`a == a asc`) **presort_is** | assert the input dataframe is prepartitioned and [presorted](./partition.ipynb#Presort), and the presort keys are exactly these values | Yes | Yes | `presort_is: a,b desc` means the presort is exactly `a asc, b desc` **schema_has** | assert input dataframe schema has certain keys or key type pairs | No | No | `schema_has: a,b:str` means input dataframe schema contains column `a` regardless of type, and `b` of type string, order doesn't matter. 
So `b:str,a:int` is valid, `b:int,a:int` is invalid because of `b` type, and `b:str` is invalid because `a` is not in the schema **schema_is** | assert input dataframe schema is exactly this value (the value must be a [schema expression](./schema_dataframes.ipynb#Schema)) | No | Yes | `schema_is: a:int,b:str`, then `b:str,a:int` is invalid because of order, `a:str,b:str` is invalid because of `a` type ## Extensions Compatibility Extension Type | Supported | Not Supported :---|:---|:--- Transformer | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None CoTransformer | None | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` OutputTransformer | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None OutputCoTransformer | None | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` Creator | N/A | N/A Processor | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None Outputter | `partitionby_has`, `partitionby_is`, `presort_has`, `presort_is`, `schema_has`, `schema_is` | None ## How To Add Validations It depends on how you write your extension, by comment, by decorator or by interface, feature wise, they are equivalent. ## By Comment ``` from typing import List, Dict, Any # schema: a:int,ct:int def get_count2(df:List[Dict[str,Any]]) -> List[List[Any]]: return [[df[0]["a"],len(df)]] ``` ## By Decorator ``` import pandas as pd from typing import List, Dict, Any from fugue import processor, transformer @transformer(schema="*", partitionby_has=["a","d"], presort_is="b, c desc") def example1(df:pd.DataFrame) -> pd.DataFrame: return df @transformer(schema="*", partitionby_has="a,d", presort_is=["b",("c",False)]) def example2(df:pd.DataFrame) -> pd.DataFrame: return df # partitionby_has: a # presort_is: b @transformer(schema="*") def example3(df:pd.DataFrame) -> pd.DataFrame: return df @processor(partitionby_has=["a","d"], presort_is="b, c desc") def example4(df:pd.DataFrame) -> pd.DataFrame: return df ``` ## By Interface In every extension, you can override `validation_rules` ``` from fugue import Transformer class T(Transformer): @property def validation_rules(self): return { "partitionby_has": ["a"] } def get_output_schema(self, df): return df.schema def transform(self, df): return df ```
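A short usage sketch (not from the original tutorial) of the interface-based transformer `T` defined above, on the same toy data as the earlier examples; the un-partitioned call is left commented out because it would fail the `partitionby_has` check at compile time:
```
from fugue import FugueWorkflow

with FugueWorkflow() as dag:
    df = dag.df([[0, 1], [1, 1], [0, 2]], "a:int,b:int")
    # df.transform(T).show()   # would fail: input is not partitioned by "a"
    df.partition(by=["a"]).transform(T).show()
```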
``` # Copyright (c) 2020-2021 Adrian Georg Herrmann import os import matplotlib.pyplot as plt import pandas as pd import numpy as np from scipy import interpolate from sklearn.linear_model import LinearRegression from datetime import datetime data_root = "../../data" locations = { "berlin": ["52.4652025", "13.3412466"], "wijchen": ["51.8235504", "5.7329005"] } dfs = { "berlin": None, "wijchen": None } ``` ## Sunlight angles ``` def get_julian_day(time): if time.month > 2: y = time.year m = time.month else: y = time.year - 1 m = time.month + 12 d = time.day + time.hour / 24 + time.minute / 1440 + time.second / 86400 b = 2 - np.floor(y / 100) + np.floor(y / 400) jd = np.floor(365.25 * (y + 4716)) + np.floor(30.6001 * (m + 1)) + d + b - 1524.5 return jd def get_angle(time, latitude, longitude): # Source: # https://de.wikipedia.org/wiki/Sonnenstand#Genauere_Ermittlung_des_Sonnenstandes_f%C3%BCr_einen_Zeitpunkt # 1. Eclipctical coordinates of the sun # Julian day jd = get_julian_day(time) n = jd - 2451545 # Median ecliptic longitude of the sun< l = np.mod(280.46 + 0.9856474 * n, 360) # Median anomaly g = np.mod(357.528 + 0.9856003 * n, 360) # Ecliptic longitude of the sun lbd = l + 1.915 * np.sin(np.radians(g)) + 0.01997 * np.sin(np.radians(2*g)) # 2. Equatorial coordinates of the sun # Ecliptic eps = 23.439 - 0.0000004 * n # Right ascension alpha = np.degrees(np.arctan(np.cos(np.radians(eps)) * np.tan(np.radians(lbd)))) if np.cos(np.radians(lbd)) < 0: alpha += 180 # Declination delta = np.degrees(np.arcsin(np.sin(np.radians(eps)) * np.sin(np.radians(lbd)))) # 3. Horizontal coordinates of the sun t0 = (get_julian_day(time.replace(hour=0, minute=0, second=0)) - 2451545) / 36525 # Median sidereal time theta_hg = np.mod(6.697376 + 2400.05134 * t0 + 1.002738 * (time.hour + time.minute / 60), 24) theta_g = theta_hg * 15 theta = theta_g + longitude # Hour angle of the sun tau = theta - alpha # Elevation angle h = np.cos(np.radians(delta)) * np.cos(np.radians(tau)) * np.cos(np.radians(latitude)) h += np.sin(np.radians(delta)) * np.sin(np.radians(latitude)) h = np.degrees(np.arcsin(h)) return (h if h > 0 else 0) ``` ## Energy data ``` for location, _ in locations.items(): # This list contains all time points for which energy measurements exist, therefore delimiting # the time frame that is to our interest. 
energy = {} data_path = os.path.join(data_root, location) for filename in os.listdir(data_path): with open(os.path.join(data_path, filename), "r") as file: for line in file: key = datetime.strptime(line.split(";")[0], '%Y-%m-%d %H:%M:%S').timestamp() energy[key] = int(line.split(";")[1].strip()) df = pd.DataFrame( data={"time": energy.keys(), "energy": energy.values()}, columns=["time", "energy"] ) dfs[location] = df.sort_values(by="time", ascending=True) # Summarize energy data per hour instead of keeping it per 15 minutes for location, _ in locations.items(): times = [] energy = [] df = dfs[location] for i, row in dfs[location].iterrows(): if row["time"] % 3600 == 0: try: t4 = row["time"] e4 = row["energy"] e3 = df["energy"][df["time"] == t4 - 900].values[0] e2 = df["energy"][df["time"] == t4 - 1800].values[0] e1 = df["energy"][df["time"] == t4 - 2700].values[0] times += [t4] energy += [e1 + e2 + e3 + e4] except: pass df = pd.DataFrame(data={"time": times, "energy_h": energy}, columns=["time", "energy_h"]) df = df.sort_values(by="time", ascending=True) dfs[location] = dfs[location].join(df.set_index("time"), on="time", how="right").drop("energy", axis=1) dfs[location].rename(columns={"energy_h": "energy"}, inplace=True) # These lists contain the time tuples that delimit connected ranges without interruptions. time_delimiters = {} for location, _ in locations.items(): delimiters = [] df = dfs[location] next_couple = [df["time"].iloc[0], None] interval = df["time"].iloc[1] - df["time"].iloc[0] for i in range(len(df["time"].index) - 1): if df["time"].iloc[i+1] - df["time"].iloc[i] > interval: next_couple[1] = df["time"].iloc[i] delimiters += [next_couple] next_couple = [df["time"].iloc[i+1], None] next_couple[1] = df["time"].iloc[-1] delimiters += [next_couple] time_delimiters[location] = delimiters # This are lists of dataframes containing connected ranges without interruptions. 
dataframes_wijchen = [] for x in time_delimiters["wijchen"]: dataframes_wijchen += [dfs["wijchen"].loc[(dfs["wijchen"].time >= x[0]) & (dfs["wijchen"].time <= x[1])]] dataframes_berlin = [] for x in time_delimiters["berlin"]: dataframes_berlin += [dfs["berlin"].loc[(dfs["berlin"].time >= x[0]) & (dfs["berlin"].time <= x[1])]] for location, _ in locations.items(): print(location, ":") for delimiters in time_delimiters[location]: t0 = datetime.fromtimestamp(delimiters[0]) t1 = datetime.fromtimestamp(delimiters[1]) print(t0, "-", t1) print() ``` ### Wijchen dataset ``` for d in dataframes_wijchen: print(len(d)) plt.figure(figsize=(200, 25)) plt.plot(dfs["wijchen"]["time"], dfs["wijchen"]["energy"], drawstyle="steps-pre") energy_max_wijchen = dfs["wijchen"]["energy"].max() energy_max_wijchen_idx = dfs["wijchen"]["energy"].argmax() energy_max_wijchen_time = datetime.fromtimestamp(dfs["wijchen"]["time"].iloc[energy_max_wijchen_idx]) print(energy_max_wijchen_time, ":", energy_max_wijchen) energy_avg_wijchen = dfs["wijchen"]["energy"].mean() print(energy_avg_wijchen) ``` ### Berlin dataset ``` for d in dataframes_berlin: print(len(d)) plt.figure(figsize=(200, 25)) plt.plot(dfs["berlin"]["time"], dfs["berlin"]["energy"], drawstyle="steps-pre") energy_max_berlin = dfs["berlin"]["energy"].max() energy_max_berlin_idx = dfs["berlin"]["energy"].argmax() energy_max_berlin_time = datetime.fromtimestamp(dfs["berlin"]["time"].iloc[energy_max_berlin_idx]) print(energy_max_berlin_time, ":", energy_max_berlin) energy_avg_berlin = dfs["berlin"]["energy"].mean() print(energy_avg_berlin) ``` ## Sunlight angles ``` for location, lonlat in locations.items(): angles = [ get_angle( datetime.fromtimestamp(x - 3600), float(lonlat[0]), float(lonlat[1]) ) for x in dfs[location]["time"] ] dfs[location]["angles"] = angles ``` ## Weather data ``` # Contact the author for a sample of data, see doc/thesis.pdf, page 72. weather_data = np.load(os.path.join(data_root, "weather.npy"), allow_pickle=True).item() # There is no cloud cover data for berlin2, so use the data of berlin1. weather_data["berlin2"]["cloud"] = weather_data["berlin1"]["cloud"] # There is no radiation data for berlin1, so use the data of berlin2. 
weather_data["berlin1"]["rad"] = weather_data["berlin2"]["rad"] # Preprocess weather data weather_params = [ "temp", "humid", "press", "cloud", "rad" ] stations = [ "wijchen1", "wijchen2", "berlin1", "berlin2" ] for station in stations: for param in weather_params: to_del = [] for key, val in weather_data[station][param].items(): if val is None: to_del.append(key) for x in to_del: del weather_data[station][param][x] def interpolate_map(map, time_range): ret = { "time": [], "value": [] } keys = list(map.keys()) values = list(map.values()) f = interpolate.interp1d(keys, values) ret["time"] = time_range ret["value"] = f(ret["time"]) return ret def update_df(df, time_range, map1, map2, param1, param2): map1_ = interpolate_map(map1, time_range) df1 = pd.DataFrame( data={"time": map1_["time"], param1: map1_["value"]}, columns=["time", param1] ) map2_ = interpolate_map(map2, time_range) df2 = pd.DataFrame( data={"time": map2_["time"], param2: map2_["value"]}, columns=["time", param2] ) df_ = df.join(df1.set_index("time"), on="time").join(df2.set_index("time"), on="time") return df_ # Insert weather data into dataframes for location, _ in locations.items(): df = dfs[location] station1 = location + "1" station2 = location + "2" for param in weather_params: param1 = param + "1" param2 = param + "2" df = update_df( df, df["time"], weather_data[station1][param], weather_data[station2][param], param1, param2 ) dfs[location] = df.set_index(keys=["time"], drop=False) # These are lists of dataframes containing connected ranges without interruptions. dataframes_wijchen = [] for x in time_delimiters["wijchen"]: dataframes_wijchen += [dfs["wijchen"].loc[(dfs["wijchen"].time >= x[0]) & (dfs["wijchen"].time <= x[1])]] dataframes_berlin = [] for x in time_delimiters["berlin"]: dataframes_berlin += [dfs["berlin"].loc[(dfs["berlin"].time >= x[0]) & (dfs["berlin"].time <= x[1])]] ``` ### Linear regression model #### Wijchen ``` df_train = dataframes_wijchen[9].iloc[17:258] # df_train = dataframes_wijchen[9].iloc[17:234] # df_train = pd.concat([dataframes_wijchen[9].iloc[17:], dataframes_wijchen[10], dataframes_wijchen[11]]) df_val = dataframes_wijchen[-3].iloc[:241] # df_val = dataframes_wijchen[-2].iloc[:241] lr_x1 = df_train[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy() lr_y1 = df_train[["energy"]].to_numpy() lr_model1 = LinearRegression() lr_model1.fit(lr_x1, lr_y1) lr_model1.score(lr_x1, lr_y1) lr_x2 = df_train[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy() lr_y2 = df_train[["energy"]].to_numpy() lr_model2 = LinearRegression() lr_model2.fit(lr_x2, lr_y2) lr_model2.score(lr_x2, lr_y2) lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy() lr_y3 = df_train[["energy"]].to_numpy() lr_model3 = LinearRegression() lr_model3.fit(lr_x3, lr_y3) lr_model3.score(lr_x3, lr_y3) # filename = "lr_model.pkl" # with open(filename, 'wb') as file: # pickle.dump(lr_model3, file) xticks = df_train["time"].iloc[::24] lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy() fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5)) ax.set_xticks(ticks=xticks) ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks]) ax.tick_params(labelsize=18) ax.plot(df_train["time"], df_train["energy"], label="Actual energy production in Wh", drawstyle="steps-pre") ax.plot(df_train["time"], lr_model3.predict(lr_x3), 
label="Predicted energy production in Wh (Volkel + Deelen)", drawstyle="steps-pre") ax.legend(prop={'size': 18}) xticks = df_val["time"].iloc[::24] lr_x1 = df_val[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy() lr_x2 = df_val[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy() lr_x3 = df_val[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy() print(lr_model1.score(lr_x1, df_val[["energy"]].to_numpy())) print(lr_model2.score(lr_x2, df_val[["energy"]].to_numpy())) print(lr_model3.score(lr_x3, df_val[["energy"]].to_numpy())) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5)) ax.set_xticks(ticks=xticks) ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks]) ax.tick_params(labelsize=18) ax.plot(df_val["time"], df_val["energy"], label="Actual energy production in Wh", drawstyle="steps-pre") ax.plot(df_val["time"], lr_model3.predict(lr_x3), label="Predicted energy production in Wh (Volkel + Deelen)", drawstyle="steps-pre") ax.legend(prop={'size': 18}) print(df["angles"].min(), df_val["angles"].max()) print(df["angles"].min(), df_train["angles"].max()) ``` #### Berlin ``` df_train = dataframes_berlin[1].iloc[:241] # df_train = dataframes_berlin[1].iloc[:720] df_val = dataframes_berlin[1].iloc[312:553] # df_val = dataframes_berlin[1].iloc[720:961] lr_x1 = df_train[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy() lr_y1 = df_train[["energy"]].to_numpy() lr_model1 = LinearRegression() lr_model1.fit(lr_x1, lr_y1) lr_model1.score(lr_x1, lr_y1) lr_x2 = df_train[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy() lr_y2 = df_train[["energy"]].to_numpy() lr_model2 = LinearRegression() lr_model2.fit(lr_x2, lr_y2) lr_model2.score(lr_x2, lr_y2) lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy() lr_y3 = df_train[["energy"]].to_numpy() lr_model3 = LinearRegression() lr_model3.fit(lr_x3, lr_y3) lr_model3.score(lr_x3, lr_y3) # filename = "lr_model.pkl" # with open(filename, 'wb') as file: # pickle.dump(lr_model3, file) xticks = df_train["time"].iloc[::24] lr_x3 = df_train[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy() fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5)) ax.set_xticks(ticks=xticks) ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks]) ax.tick_params(labelsize=18) ax.plot(df_train["time"], df_train["energy"], label="Actual energy production in Wh", drawstyle="steps-pre") ax.plot(df_train["time"], lr_model3.predict(lr_x3), label="Predicted energy production in Wh", drawstyle="steps-pre") ax.legend(prop={'size': 18}) xticks = df_val["time"].iloc[::24] lr_x1 = df_val[["angles", "temp1", "humid1", "press1", "cloud1", "rad1"]].to_numpy() lr_x2 = df_val[["angles", "temp2", "humid2", "press2", "cloud2", "rad2"]].to_numpy() lr_x3 = df_val[["angles", "temp1", "temp2", "humid1", "humid2", "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]].to_numpy() print(lr_model1.score(lr_x1, df_val[["energy"]].to_numpy())) print(lr_model2.score(lr_x2, df_val[["energy"]].to_numpy())) print(lr_model3.score(lr_x3, df_val[["energy"]].to_numpy())) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 5)) ax.set_xticks(ticks=xticks) ax.set_xticklabels(labels=[datetime.fromtimestamp(x).strftime("%d-%m-%y") for x in xticks]) 
ax.tick_params(labelsize=18)
ax.plot(df_val["time"], df_val["energy"], label="Actual energy production in Wh", drawstyle="steps-pre")
ax.plot(df_val["time"], lr_model3.predict(lr_x3), label="Predicted energy production in Wh", drawstyle="steps-pre")
ax.legend(prop={'size': 18})
```
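The `.score()` calls above only report R². As a complementary check, the following sketch (not part of the original notebook) computes MAE and RMSE for the third model on the validation window; it assumes `df_val` and `lr_model3` from the cells above are still in scope.

```
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Same feature set used for lr_model3 above.
features = ["angles", "temp1", "temp2", "humid1", "humid2",
            "press1", "press2", "cloud1", "cloud2", "rad1", "rad2"]
X_val = df_val[features].to_numpy()
y_val = df_val[["energy"]].to_numpy()
y_hat = lr_model3.predict(X_val)

print("MAE  [Wh]:", mean_absolute_error(y_val, y_hat))
print("RMSE [Wh]:", np.sqrt(mean_squared_error(y_val, y_hat)))
```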
<a href="https://colab.research.google.com/github/krakowiakpawel9/machine-learning-bootcamp/blob/master/unsupervised/04_anomaly_detection/01_local_outlier_factor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

### scikit-learn

Library website: [https://scikit-learn.org](https://scikit-learn.org)

Documentation/User Guide: [https://scikit-learn.org/stable/user_guide.html](https://scikit-learn.org/stable/user_guide.html)

The core machine learning library for Python.

To install scikit-learn, use the command below:
```
!pip install scikit-learn
```
To upgrade scikit-learn to the latest version, use the command below:
```
!pip install --upgrade scikit-learn
```
The course was built against version `0.22.1`

### Table of contents:
1. [Importing the libraries](#0)
2. [Generating the data](#1)
3. [Visualizing the data](#2)
4. [Local Outlier Factor algorithm](#3)
5. [Visualizing the outliers](#4)

### <a name='0'></a> Importing the libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_blobs
import plotly.express as px
import plotly.graph_objects as go

sns.set(font_scale=1.2)
np.random.seed(10)
```
### <a name='1'></a> Generating the data
```
data = make_blobs(n_samples=300, cluster_std=2.0, random_state=10)[0]
data[:5]
```
### <a name='2'></a> Visualizing the data
```
tmp = pd.DataFrame(data=data, columns=['x1', 'x2'])
px.scatter(tmp, x='x1', y='x2', width=950, title='Local Outlier Factor', template='plotly_dark')

fig = go.Figure()
fig1 = px.density_heatmap(tmp, x='x1', y='x2', width=700, title='Outliers', nbinsx=20, nbinsy=20)
fig2 = px.scatter(tmp, x='x1', y='x2', width=700, title='Outliers', opacity=0.5)
fig.add_trace(fig1['data'][0])
fig.add_trace(fig2['data'][0])
fig.update_traces(marker=dict(size=4, line=dict(width=2, color='white')), selector=dict(mode='markers'))
fig.update_layout(template='plotly_dark', width=950)
fig.show()

plt.figure(figsize=(12, 7))
plt.scatter(data[:, 0], data[:, 1], label='data', cmap='tab10')
plt.title('Local Outlier Factor')
plt.legend()
plt.show()

from sklearn.neighbors import LocalOutlierFactor

lof = LocalOutlierFactor(n_neighbors=20)
y_pred = lof.fit_predict(data)
y_pred[:10]

all_data = np.c_[data, y_pred]
all_data[:5]

tmp['y_pred'] = y_pred
px.scatter(tmp, x='x1', y='x2', color='y_pred', width=950, title='Local Outlier Factor', template='plotly_dark')

plt.figure(figsize=(12, 7))
plt.scatter(all_data[:, 0], all_data[:, 1], c=all_data[:, 2], cmap='tab10', label='data')
plt.title('Local Outlier Factor')
plt.legend()
plt.show()

LOF_scores = lof.negative_outlier_factor_
radius = (LOF_scores.max() - LOF_scores) / (LOF_scores.max() - LOF_scores.min())
radius[:5]

plt.figure(figsize=(12, 7))
plt.scatter(all_data[:, 0], all_data[:, 1], label='data', cmap='tab10')
plt.scatter(all_data[:, 0], all_data[:, 1], s=2000 * radius, edgecolors='r', facecolors='none', label='outlier scores')
plt.title('Local Outlier Factor')
legend = plt.legend()
legend.legendHandles[1]._sizes = [40]
plt.show()

plt.figure(figsize=(12, 7))
plt.scatter(all_data[:, 0], all_data[:, 1], c=all_data[:, 2], cmap='tab10', label='data')
plt.scatter(all_data[:, 0], all_data[:, 1], s=2000 * radius, edgecolors='r', facecolors='none', label='outlier scores')
plt.title('Local Outlier Factor')
legend = plt.legend()
legend.legendHandles[1]._sizes = [40]
plt.show()
```
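The `fit_predict` call above only labels the points the detector was fitted on. As a small, hypothetical extension (not part of the original notebook), the sketch below refits the detector in novelty mode so that new, unseen observations can also be scored; it assumes the `data` array from the cells above is still in scope.

```
from sklearn.neighbors import LocalOutlierFactor
import numpy as np

# Fit in novelty mode so that unseen points can be scored afterwards.
# `data` is the blob dataset generated earlier in this notebook.
lof_novelty = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof_novelty.fit(data)

# Two made-up query points to score.
new_points = np.array([[0.0, 0.0], [20.0, 20.0]])
print(lof_novelty.predict(new_points))        # 1 = inlier, -1 = outlier
print(lof_novelty.score_samples(new_points))  # lower score = more anomalous
```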
# Initial data and problem exploration ``` import xarray as xr import pandas as pd import urllib.request import numpy as np from glob import glob import cartopy.crs as ccrs import matplotlib.pyplot as plt import os import cartopy.feature as cfeature states_provinces = cfeature.NaturalEarthFeature( category='cultural', name='admin_1_states_provinces_lines', scale='50m', facecolor='none') ``` # Data preprocessing ## TIGGE ECMWF ### Control run ``` tigge_ctrl = xr.open_mfdataset("/datadrive/tigge/16km/2m_temperature/2019-10.nc") tigge_ctrl tigge_ctrl.lat.min() tigge_2dslice = tigge_ctrl.t2m.isel(lead_time=4, init_time=0) p = tigge_2dslice.plot( subplot_kws=dict(projection=ccrs.Orthographic(-80, 35), facecolor="gray"), transform=ccrs.PlateCarree(),) #p.axes.set_global() p.axes.coastlines() ``` ### TIGGE CTRL precip ``` prec = xr.open_mfdataset("/datadrive/tigge/raw/total_precipitation/*.nc") prec # aggregated precipitation prec.tp.mean('init_time').diff('lead_time').plot(col='lead_time', col_wrap=3) # that takes a while! ``` ### Checking regridding ``` t2m_raw = xr.open_mfdataset("/datadrive/tigge/raw/2m_temperature/2019-10.nc") t2m_32 = xr.open_mfdataset("/datadrive/tigge/32km/2m_temperature/2019-10.nc") t2m_16 = xr.open_mfdataset("/datadrive/tigge/16km/2m_temperature/2019-10.nc") for ds in [t2m_raw, t2m_16, t2m_32]: tigge_2dslice = ds.t2m.isel(lead_time=4, init_time=-10) plt.figure() p = tigge_2dslice.plot(levels=np.arange(270,305), subplot_kws=dict(projection=ccrs.Orthographic(-80, 35), facecolor="gray"), transform=ccrs.PlateCarree(),) p.axes.coastlines() ``` ### Ensemble ``` !ls -lh ../data/tigge/2020-10-23_ens2.grib tigge = xr.open_mfdataset('../data/tigge/2020-10-23_ens2.grib', engine='pynio').isel() tigge = tigge.rename({ 'tp_P11_L1_GGA0_acc': 'tp', 'initial_time0_hours': 'init_time', 'forecast_time0': 'lead_time', 'lat_0': 'latitude', 'lon_0': 'longitude', 'ensemble0' : 'member' }).diff('lead_time').tp tigge = tigge.where(tigge >= 0, 0) # tigge = tigge * 1000 # m to mm tigge.coords['valid_time'] = xr.concat([i + tigge.lead_time for i in tigge.init_time], 'init_time') tigge tigge.to_netcdf('../data/tigge/2020-10-23_ens_preprocessed.nc') ``` ### Deterministic ``` tigge = xr.open_mfdataset('../data/tigge/2020-10-23.grib', engine='pynio') tigge = tigge.rename({ 'tp_P11_L1_GGA0_acc': 'tp', 'initial_time0_hours': 'init_time', 'forecast_time0': 'lead_time', 'lat_0': 'latitude', 'lon_0': 'longitude', }).diff('lead_time').tp tigge = tigge.where(tigge >= 0, 0) tigge.coords['valid_time'] = xr.concat([i + tigge.lead_time for i in tigge.init_time], 'init_time') tigge tigge.to_netcdf('../data/tigge/2020-10-23_preprocessed.nc') ``` ## YOPP ``` yopp = xr.open_dataset('../data/yopp/2020-10-23.grib', engine='pynio').TP_GDS4_SFC yopp2 = xr.open_dataset('../data/yopp/2020-10-23_12.grib', engine='pynio').TP_GDS4_SFC yopp = xr.merge([yopp, yopp2]).rename({ 'TP_GDS4_SFC': 'tp', 'initial_time0_hours': 'init_time', 'forecast_time1': 'lead_time', 'g4_lat_2': 'latitude', 'g4_lon_3': 'longitude' }) yopp = yopp.diff('lead_time').tp yopp = yopp.where(yopp >= 0, 0) yopp = yopp * 1000 # m to mm yopp.coords['valid_time'] = xr.concat([i + yopp.lead_time for i in yopp.init_time], 'init_time') yopp.to_netcdf('../data/yopp/2020-10-23_preprocessed.nc') ``` ## NRMS data ``` def time_from_fn(fn): s = fn.split('/')[-1].split('_')[-1] year = s[:4] month = s[4:6] day = s[6:8] hour = s[9:11] return np.datetime64(f'{year}-{month}-{day}T{hour}') def open_nrms(path): fns = sorted(glob(f'{path}/*')) dss = [xr.open_dataset(fn, 
engine='pynio') for fn in fns] times = [time_from_fn(fn) for fn in fns] times = xr.DataArray(times, name='time', dims=['time'], coords={'time': times}) ds = xr.concat(dss, times).rename({'lat_0': 'latitude', 'lon_0': 'longitude'}) da = ds[list(ds)[0]].rename('tp') return da def get_mrms_fn(path, source, year, month, day, hour): month, day, hour = [str(x).zfill(2) for x in [month, day, hour]] fn = f'{path}/{source}/MRMS_{source}_00.00_{year}{month}{day}-{hour}0000.grib2' # print(fn) return fn def load_mrms_data(path, start_time, stop_time, accum=3): times = pd.to_datetime(np.arange(start_time, stop_time, np.timedelta64(accum, 'h'), dtype='datetime64[h]')) das = [] for t in times: if os.path.exists(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass1', t.year, t.month, t.day, t.hour)): ds = xr.open_dataset(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass1', t.year, t.month, t.day, t.hour), engine='pynio') elif os.path.exists(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass2', t.year, t.month, t.day, t.hour)): ds = xr.open_dataset(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass2', t.year, t.month, t.day, t.hour), engine='pynio') elif os.path.exists(get_mrms_fn(path, f'RadarOnly_QPE_0{accum}H', t.year, t.month, t.day, t.hour)): ds = xr.open_dataset(get_mrms_fn(path, f'RadarOnly_QPE_0{accum}H', t.year, t.month, t.day, t.hour), engine='pynio') else: raise Exception(f'No data found for {t}') ds = ds.rename({'lat_0': 'latitude', 'lon_0': 'longitude'}) da = ds[list(ds)[0]].rename('tp') das.append(da) times = xr.DataArray(times, name='time', dims=['time'], coords={'time': times}) da = xr.concat(das, times) return da mrms = load_mrms_data('../data/', '2020-10-23', '2020-10-25') mrms6h = mrms.rolling(time=2).sum().isel(time=slice(0, None, 2)) mrms.to_netcdf('../data/mrms/mrms_preprocessed.nc') mrms6h.to_netcdf('../data/mrms/mrms6_preprocessed.nc') ``` # Analysis ``` !ls ../data tigge_det = xr.open_dataarray('../data/tigge/2020-10-23_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'}) tigge_ens = xr.open_dataarray('../data/tigge/2020-10-23_ens_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'}) yopp = xr.open_dataarray('../data/yopp/2020-10-23_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'}) mrms = xr.open_dataarray('../data/mrms/mrms_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'}) mrms6h = xr.open_dataarray('../data/mrms/mrms6_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'}) ``` ## Regrid ``` import xesmf as xe lons = slice(260, 280) lats = slice(45, 25) def regrid(ds, km, lats, lons): deg = km/100. 
grid = xr.Dataset( { 'lat': (['lat'], np.arange(lats.start, lats.stop, -deg)), 'lon': (['lon'], np.arange(lons.start, lons.stop, deg)) } ) regridder = xe.Regridder(ds.sel(lat=lats, lon=lons), grid, 'bilinear') return regridder(ds.sel(lat=lats, lon=lons), keep_attrs=True) mrms4km = regrid(mrms, 4, lats, lons) mrms2km = regrid(mrms, 2, lats, lons) mrms4km6h = regrid(mrms6h, 4, lats, lons) mrms2km6h = regrid(mrms6h, 2, lats, lons) mrms4km6h = mrms4km6h.rename('tp') mrms2km6h =mrms2km6h.rename('tp') yopp16km = regrid(yopp, 16, lats, lons) yopp32km = regrid(yopp, 32, lats, lons) tigge_det16km = regrid(tigge_det, 16, lats, lons) tigge_det32km = regrid(tigge_det, 32, lats, lons) tigge_ens16km = regrid(tigge_ens, 16, lats, lons) tigge_ens32km = regrid(tigge_ens, 32, lats, lons) !mkdir ../data/regridded mrms2km.to_netcdf('../data/regridded/mrms2km.nc') mrms4km.to_netcdf('../data/regridded/mrms4km.nc') mrms2km6h.to_netcdf('../data/regridded/mrms2km6h.nc') mrms4km6h.to_netcdf('../data/regridded/mrms4km6h.nc') yopp16km.to_netcdf('../data/regridded/yopp16km.nc') yopp32km.to_netcdf('../data/regridded/yopp32km.nc') tigge_det16km.to_netcdf('../data/regridded/tigge_det16km.nc') tigge_det32km.to_netcdf('../data/regridded/tigge_det32km.nc') tigge_ens16km.to_netcdf('../data/regridded/tigge_ens16km.nc') tigge_ens32km.to_netcdf('../data/regridded/tigge_ens32km.nc') mrms2km = xr.open_dataarray('../data/regridded/mrms2km.nc') mrms4km = xr.open_dataarray('../data/regridded/mrms4km.nc') mrms2km6h = xr.open_dataarray('../data/regridded/mrms2km6h.nc') mrms4km6h = xr.open_dataarray('../data/regridded/mrms4km6h.nc') yopp16km = xr.open_dataarray('../data/regridded/yopp16km.nc') yopp32km = xr.open_dataarray('../data/regridded/yopp32km.nc') tigge_det16km = xr.open_dataarray('../data/regridded/tigge_det16km.nc') tigge_det32km = xr.open_dataarray('../data/regridded/tigge_det32km.nc') tigge_ens16km = xr.open_dataarray('../data/regridded/tigge_ens16km.nc') tigge_ens32km = xr.open_dataarray('../data/regridded/tigge_ens32km.nc') ``` ### Matplotlib #### Compare different resolutions ``` mrms4km np.arange(lons.start, lons.stop, 512/100) def add_grid(axs): for ax in axs: ax.set_xticks(np.arange(lons.start, lons.stop, 512/100)) ax.set_yticks(np.arange(lats.start, lats.stop, -512/100)) ax.grid(True) ax.set_aspect('equal') yopp16km yopp16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time i = 3 valid_time = yopp16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time figsize = (16, 5) axs = mrms4km.sel(time=valid_time.values).plot(vmin=0, vmax=50, col='time', cmap='gist_ncar_r', figsize=figsize).axes[0] add_grid(axs) axs = yopp16km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0] add_grid(axs) axs = yopp32km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0] add_grid(axs) i = 2 valid_time = tigge_det16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time figsize = (16, 5) axs = mrms4km6h.sel(time=valid_time.values, method='nearest').assign_coords({'time': valid_time.values}).plot(vmin=0, vmax=50, col='time', cmap='gist_ncar_r', figsize=figsize).axes[0] add_grid(axs) axs = tigge_det16km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0] add_grid(axs) axs = tigge_det32km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0] add_grid(axs) 
tigge_ens16km.isel(init_time=i, lead_time=l) i = 3 l = 0 t = tigge_ens16km.isel(init_time=i, lead_time=slice(l, l+2)).valid_time.values axs = mrms4km6h.sel(time=t, method='nearest').assign_coords({'time': t}).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(10, 4), col='time').axes[0] add_grid(axs) axs = tigge_ens16km.isel(init_time=i, lead_time=l, member=slice(0, 6)).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(24, 4), col='member').axes[0] add_grid(axs) axs = tigge_ens16km.isel(init_time=i, lead_time=l+1, member=slice(0, 6)).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(24, 4), col='member').axes[0] add_grid(axs) ``` ### Holoviews ``` import holoviews as hv hv.extension('bokeh') hv.config.image_rtol = 1 # from holoviews import opts # opts.defaults(opts.Scatter3D(color='Value', cmap='viridis', edgecolor='black', s=50)) lons2 = slice(268, 273) lats2 = slice(40, 35) lons2 = lons lats2 = lats def to_hv(da, dynamic=False, opts={'clim': (1, 50)}): hv_ds = hv.Dataset(da) img = hv_ds.to(hv.Image, kdims=["lon", "lat"], dynamic=dynamic) return img.opts(**opts) valid_time = yopp16km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).valid_time valid_time2 = tigge_det16km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).valid_time mrms2km_hv = to_hv(mrms2km.sel(time=valid_time, method='nearest').sel(lat=lats2, lon=lons2)) mrms4km_hv = to_hv(mrms4km.sel(time=valid_time, method='nearest').sel(lat=lats2, lon=lons2)) mrms2km6h_hv = to_hv(mrms2km6h.sel(time=valid_time2, method='nearest').sel(lat=lats2, lon=lons2)) mrms4km6h_hv = to_hv(mrms4km6h.sel(time=valid_time2, method='nearest').sel(lat=lats2, lon=lons2)) yopp16km_hv = to_hv(yopp16km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2)) yopp32km_hv = to_hv(yopp32km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2)) tigge_det16km_hv = to_hv(tigge_det16km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2)) tigge_det32km_hv = to_hv(tigge_det32km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2)) ``` ### Which resolution for MRMS? 
``` %%output holomap='widgets' %%opts Image style(cmap='gist_ncar_r') [width=600, height=600] # mrms4km6h_hv + tigge_det16km_hv + tigge_det32km_hv # mrms4km_hv + yopp16km_hv + yopp32km_hv %%output holomap='widgets' %%opts Image style(cmap='gist_ncar_r') [width=600, height=600] mrms4km_hv + mrms4km6h_hv hv_yopp = yopp.isel(init_time=0).sel(latitude=lats, longitude=lons) hv_yopp.coords['time'] = hv_yopp.init_time + hv_yopp.lead_time hv_yopp = hv_yopp.swap_dims({'lead_time': 'time'}) # hv_yopp hv_mrms = hv.Dataset(mrms.sel(latitude=lats, longitude=lons)[1:]) hv_yopp = hv.Dataset(hv_yopp.sel(time=mrms.time[1:])) img1 = hv_mrms.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False) img2 = hv_yopp.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False) %%output holomap='widgets' %%opts Image style(cmap='gist_ncar_r') plot[colorbar=True] %%opts Image [width=500, height=400] img1 + img2 hv_yopp = yopp.sel(latitude=lats, longitude=lons) hv_yopp = hv.Dataset(hv_yopp) img1 = hv_yopp.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False) %%output holomap='widgets' %%opts Image style(cmap='gist_ncar_r') plot[colorbar=True] %%opts Image [width=500, height=400] img1 hv_ds = hv.Dataset(da.sel(latitude=lats, longitude=lons)) hv_ds a = hv_ds.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False) a.opts(colorbar=True, fig_size=200, cmap='viridis') ``` # Old ``` path = '../data/MultiSensor_QPE_01H_Pass1/' da1 = open_nrms('../data/MultiSensor_QPE_01H_Pass1/') da3 = open_nrms('../data/MultiSensor_QPE_03H_Pass1/') dar = open_nrms('../data/RadarOnly_QPE_03H/') da3p = open_nrms('../data/MultiSensor_QPE_03H_Pass2/') da1 da3 da13 = da1.rolling(time=3).sum() (da13 - da3).isel(time=3).sel(latitude=lats, longitude=lons).plot() da13.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50) plt.suptitle('1h accumulation with rolling(time=3).sum()', y=1.05) da3.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50) plt.suptitle('3h accumulation', y=1.05) dar.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50) plt.suptitle('3h accumulation radar', y=1.05) da3.isel(time=slice(0, 7)).sel(latitude=slice(44, 43), longitude=slice(269, 270)).plot(col='time', vmin=0, vmax=50) plt.suptitle('3h accumulation', y=1.05) dar.isel(time=slice(0, 7)).sel(latitude=slice(44, 43), longitude=slice(269, 270)).plot(col='time', vmin=0, vmax=50) plt.suptitle('3h accumulation radar', y=1.05) for t in np.arange('2020-10-23', '2020-10-25', np.timedelta64(3, 'h'), dtype='datetime64[h]'): print(t) print('Radar', (dar.time.values == t).sum() > 0) print('Pass1', (da3.time.values == t).sum() > 0) print('Pass2', (da3p.time.values == t).sum() > 0) t (dar.time.values == t).sum() > 0 da3.time.values def plot_facet(da, title='', **kwargs): p = da.plot( col='time', col_wrap=3, subplot_kws={'projection': ccrs.PlateCarree()}, transform=ccrs.PlateCarree(), figsize=(15, 15), **kwargs ) for ax in p.axes.flat: ax.coastlines() ax.add_feature(states_provinces, edgecolor='gray') # ax.set_extent([113, 154, -11, -44], crs=ccrs.PlateCarree()) plt.suptitle(title); plot_facet(da.isel(time=slice(0, 9)).sel(latitude=lats, longitude=lons), vmin=0, vmax=10, add_colorbar=False) import holoviews as hv hv.extension('matplotlib') from holoviews import opts opts.defaults(opts.Scatter3D(color='Value', cmap='fire', edgecolor='black', s=50)) hv_ds = hv.Dataset(da.sel(latitude=lats, 
longitude=lons)) hv_ds a = hv_ds.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False) a.opts(colorbar=True, fig_size=200, cmap='viridis') da.longitude.diff('longitude').min() !cp ../data/yopp/2020-10-23.nc ../data/yopp/2020-10-23.grib a = xr.open_dataset('../data/yopp/2020-10-23.grib', engine='pynio') a a.g4_lat_2.diff('g4_lat_2') a.g4_lon_3.diff('g4_lon_3') !cp ../data/tigge/2020-10-23.nc ../data/tigge/2020-10-23.grib b = xr.open_dataset('../data/tigge/2020-10-23.grib', engine='pynio') b ```
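Referring back to the regridding step in the Analysis section above: bilinear regridding is not strictly conservative, so a rough sanity check on the precipitation totals can be useful. The sketch below is a hypothetical addition, not part of the original notebook; it assumes `mrms6h`, `mrms4km6h`, `lats` and `lons` are still in scope and that the MRMS latitudes are ordered north to south so the descending slice selects a non-empty domain.

```
# Domain-mean 6-hourly precipitation before vs. after regridding (rough check only).
raw_mean = mrms6h.sel(lat=lats, lon=lons).mean(['lat', 'lon'])
regridded_mean = mrms4km6h.mean(['lat', 'lon'])
print((regridded_mean - raw_mean).values)
```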
# Descriptive analysis for the manuscript Summarize geotagged tweets of the multiple regions used for the experiment and the application. ``` %load_ext autoreload %autoreload 2 import os import numpy as np import pandas as pd import yaml import scipy.stats as stats from tqdm import tqdm def load_region_tweets(region=None): df = pd.read_csv(f'../../dbs/{region}/geotweets.csv') df['day'] = df['createdat'].apply(lambda x: x.split(' ')[0]) df['createdat'] = pd.to_datetime(df['createdat'], infer_datetime_format=True) t_max, t_min = df.createdat.max(), df.createdat.min() time_span = f'{t_min} - {t_max}' num_users = len(df.userid.unique()) num_geo = len(df) num_days = np.median(df.groupby(['userid'])['day'].nunique()) num_geo_freq = np.median(df.groupby(['userid']).size() / df.groupby(['userid'])['day'].nunique()) return region, time_span, num_users, num_geo, num_days, num_geo_freq def user_stats_cal(data): time_span = data.createdat.max() - data.createdat.min() time_span = time_span.days if time_span == 0: time_span += 1 num_days = data['day'].nunique() num_geo = len(data) geo_freq = num_geo / num_days share_active = num_days / time_span return pd.DataFrame.from_dict({'time_span': [time_span], 'num_days': [num_days], 'num_geo': [num_geo], 'geo_freq': [geo_freq], 'share_active': [share_active] }) def region_tweets_stats_per_user(region=None): df = pd.read_csv(f'../../dbs/{region}/geotweets.csv') df['day'] = df['createdat'].apply(lambda x: x.split(' ')[0]) df['createdat'] = pd.to_datetime(df['createdat'], infer_datetime_format=True) tqdm.pandas(desc=region) df_users = df.groupby('userid').progress_apply(user_stats_cal).reset_index() df_users.loc[:, 'region'] = region df_users.drop(columns=['level_1'], inplace=True) return df_users region_list = ['sweden', 'netherlands', 'saopaulo', 'australia', 'austria', 'barcelona', 'capetown', 'cebu', 'egypt', 'guadalajara', 'jakarta', 'johannesburg', 'kualalumpur', 'lagos', 'madrid', 'manila', 'mexicocity', 'moscow', 'nairobi', 'rio', 'saudiarabia', 'stpertersburg', 'surabaya'] with open('../../lib/regions.yaml', encoding='utf8') as f: region_manager = yaml.load(f, Loader=yaml.FullLoader) ``` ## 1 Summarize the geotagged tweets used as input to the model Geotagged tweets: Time span, No. of Twitter users, No. of geotagged tweets, Days covered/user, No. of geotagged tweets/day/user ``` df = pd.DataFrame([load_region_tweets(region=x) for x in region_list], columns=('region', 'time_span', 'num_users', 'num_geo', 'num_days', 'num_geo_freq')) df.loc[:, 'gdp_capita'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['gdp_capita']) df.loc[:, 'country'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['country']) df.loc[:, 'pop'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['pop']) df.loc[:, 'time_span'] = df.loc[:, 'time_span'].apply(lambda x: ' - '.join([x_t.split(' ')[0] for x_t in x.split(' - ')])) df.loc[:, 'region'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['name']) df df.to_clipboard(index=False) ``` ## 1-extra Summarize the geotagged tweets used as input to the model - by user This is for dissertation presentation - sparsity issue. Geotagged tweets: Time span, No. of Twitter users, No. of geotagged tweets, Days covered/user, No. 
of geotagged tweets/day/user ``` df = pd.concat([region_tweets_stats_per_user(region=x) for x in region_list]) df.loc[:, 'gdp_capita'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['gdp_capita']) df.loc[:, 'country'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['country']) df.loc[:, 'pop'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['pop']) df.loc[:, 'region'] = df.loc[:, 'region'].apply(lambda x: region_manager[x]['name']) df.to_csv(f'../../dbs/regional_stats.csv', index=False) ``` ## 2 Merge ODMs for visualisation This part applies to Sweden, The Netherlands, and Sao Paulo, Brazil. Separate files will be deleted. ``` for region in ['sweden', 'netherlands', 'saopaulo']: df = pd.read_csv(f'../../dbs/{region}/odm_gt.csv') df_c = pd.read_csv(f'../../dbs/{region}/odm_calibration.csv') df_v = pd.read_csv(f'../../dbs/{region}/odm_validation.csv') df_cb = pd.read_csv(f'../../dbs/{region}/odm_benchmark_c.csv') df_vb = pd.read_csv(f'../../dbs/{region}/odm_benchmark_v.csv') df = pd.merge(df, df_c, on=['ozone', 'dzone']) df = df.rename(columns={'model': 'model_c'}) df = pd.merge(df, df_v, on=['ozone', 'dzone']) df = df.rename(columns={'model': 'model_v'}) df = pd.merge(df, df_cb, on=['ozone', 'dzone']) df = df.rename(columns={'benchmark': 'benchmark_c'}) df = pd.merge(df, df_vb, on=['ozone', 'dzone']) df = df.rename(columns={'benchmark': 'benchmark_v'}) df.loc[:, ['ozone', 'dzone', 'gt', 'model_c', 'model_v', 'benchmark_c', 'benchmark_v']].to_csv(f'../../dbs/{region}/odms.csv', index=False) os.remove(f'../../dbs/{region}/odm_gt.csv') os.remove(f'../../dbs/{region}/odm_calibration.csv') os.remove(f'../../dbs/{region}/odm_validation.csv') os.remove(f'../../dbs/{region}/odm_benchmark_c.csv') os.remove(f'../../dbs/{region}/odm_benchmark_v.csv') ``` ## 3 Quantify the od-pair similarity This part applies to Sweden, The Netherlands, and Sao Paulo, Brazil. The overall similarity. ``` quant_list = [] for region in ['sweden', 'netherlands', 'saopaulo']: df = pd.read_csv(f'../../dbs/{region}/odms.csv') df_c = df.loc[(df.gt != 0) & (df.model_c != 0) & (df.benchmark_c != 0), :] mc = stats.kendalltau(df_c.loc[:, 'gt'], df_c.loc[:, 'model_c']) quant_list.append((region, 'model', 'c', mc.correlation, mc.pvalue)) bc = stats.kendalltau(df_c.loc[:, 'gt'], df_c.loc[:, 'benchmark_c']) quant_list.append((region, 'benchmark', 'c', bc.correlation, bc.pvalue)) df_v = df.loc[(df.gt != 0) & (df.model_v != 0) & (df.benchmark_v != 0), :] mv = stats.kendalltau(df_v.loc[:, 'gt'], df_v.loc[:, 'model_v']) quant_list.append((region, 'model', 'v', mv.correlation, mv.pvalue)) bv = stats.kendalltau(df_v.loc[:, 'gt'], df_v.loc[:, 'benchmark_v']) quant_list.append((region, 'benchmark', 'v', bv.correlation, bv.pvalue)) df_stats = pd.DataFrame(quant_list, columns=['region', 'type', 'data', 'cor', 'p']) df_stats df_stats.groupby(['region', 'type'])['cor'].mean() ```
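Note that `gt` clashes with the pandas `DataFrame.gt` comparison method, so the attribute form `df.gt` used in the filter above resolves to that method rather than the ground-truth column; `df['gt']` is the safe way to select it. The sketch below is an optional addition (not from the original analysis) that recomputes the calibration-set agreement with Spearman's rho as a robustness check, assuming the `odms.csv` files produced above exist and `pd`/`stats` are imported as before.

```
# Optional robustness check: Spearman's rho alongside Kendall's tau.
for region in ['sweden', 'netherlands', 'saopaulo']:
    df = pd.read_csv(f'../../dbs/{region}/odms.csv')
    df_c = df.loc[(df['gt'] != 0) & (df['model_c'] != 0) & (df['benchmark_c'] != 0), :]
    rho_m = stats.spearmanr(df_c['gt'], df_c['model_c'])
    rho_b = stats.spearmanr(df_c['gt'], df_c['benchmark_c'])
    print(region, round(rho_m.correlation, 3), round(rho_b.correlation, 3))
```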
You now know the following 1. Generate open-loop control from a given route 2. Simulate vehicular robot motion using bicycle/ unicycle model Imagine you want to make an utility for your co-workers to try and understand vehicle models. Dashboards are common way to do this. There are several options out there : Streamlit, Voila, Observable etc Follow this <a href="https://medium.com/plotly/introducing-jupyterdash-811f1f57c02e">Medium post</a> on Jupyter Dash and see how to package what you learnt today in an interactive manner Here is a <a href="https://stackoverflow.com/questions/53622518/launch-a-dash-app-in-a-google-colab-notebook">stackoverflow question </a> on how to run dash applications on Collab What can you assume? + Fix $v,\omega$ or $v,\delta$ depending on the model (users can still pick the actual value) + fixed wheelbase for bicycle model Users can choose + unicycle and bicycle models + A pre-configured route ("S", "inverted-S", "figure-of-eight" etc) + 1 of 3 values for $v, \omega$ (or $\delta$) ``` !pip install jupyter-dash import plotly.express as px from jupyter_dash import JupyterDash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output import pandas as pd import numpy as np # Load Data velocities = ['1','2','3'] omegas = ['15','30','45'] shapes = ["S", "Inverted-S", "Figure of 8"] models = ["Unicycle", "Bicycle"] def unicycle_model(curr_pose, v, w, dt=1.0): ''' >>> unicycle_model((0.0,0.0,0.0), 1.0, 0.0) (1.0, 0.0, 0.0) >>> unicycle_model((0.0,0.0,0.0), 0.0, 1.0) (0.0, 0.0, 1.0) >>> unicycle_model((0.0, 0.0, 0.0), 1.0, 1.0) (1.0, 0.0, 1.0) ''' ## write code to calculate next_pose # refer to the kinematic equations of a unicycle model x, y, theta = curr_pose x += v*np.cos(theta)*dt y += v*np.sin(theta)*dt theta += w*dt # Keep theta bounded between [-pi, pi] theta = np.arctan2(np.sin(theta), np.cos(theta)) # return calculated (x, y, theta) return x, y, theta def bicycle_model(curr_pose, v, delta, dt=1.0): ''' >>> bicycle_model((0.0,0.0,0.0), 1.0, 0.0) (1.0, 0.0, 0.0) >>> bicycle_model((0.0,0.0,0.0), 0.0, np.pi/4) (0.0, 0.0, 0.0) >>> bicycle_model((0.0, 0.0, 0.0), 1.0, np.pi/4) (1.0, 0.0, 1.11) ''' # write code to calculate next_pose # refer to the kinematic equations of a bicycle model #x, y, theta = #x = #y = #theta = L = 0.9 x, y, theta = curr_pose x += v*np.cos(theta)*dt y += v*np.sin(theta)*dt theta += (v/L)*np.tan(delta)*dt # Keep theta bounded between [-pi, pi] theta = np.arctan2(np.sin(theta), np.cos(theta)) # return calculated (x, y, theta) return x, y, theta def get_open_loop_commands(route, vc_fast=1, wc=np.pi/12, dt=1.0): all_w = [] omegas = {'straight': 0, 'left': wc, 'right': -wc} for manoeuvre, command in route: u = np.ceil(command/vc_fast).astype('int') v = np.ceil(np.deg2rad(command)/wc).astype('int') t_cmd = u if manoeuvre == 'straight' else v all_w += [omegas[manoeuvre]]*t_cmd all_v = vc_fast * np.ones_like(all_w) return all_v, all_w def get_commands(shape): if(shape == shapes[0]): return [("right", 180),("left", 180)] elif(shape == shapes[1]): return [("left", 180),("right", 180)] return [("right", 180),("left", 180),("left", 180),("right", 180)] def get_angle(omega): if(omega == omegas[0]): return np.pi/12 elif(omega == omegas[1]): return np.pi/6 return np.pi/4 # Build App app = JupyterDash(__name__) app.layout = html.Div([ html.H1("Unicycle/Bicycle"), html.Label([ "velocity", dcc.Dropdown( id='velocity', clearable=False, value='1', options=[ {'label': c, 'value': c} for c in velocities ]) ]), 
html.Label([ "omega/delta", dcc.Dropdown( id='omega', clearable=False, value='15', options=[ {'label': c, 'value': c} for c in omegas ]) ]), html.Label([ "shape", dcc.Dropdown( id='shape', clearable=False, value='S', options=[ {'label': c, 'value': c} for c in shapes ]) ]), html.Label([ "model", dcc.Dropdown( id='model', clearable=False, value='Unicycle', options=[ {'label': c, 'value': c} for c in models ]) ]), dcc.Graph(id='graph'), ]) # Define callback to update graph @app.callback( Output('graph', 'figure'), [Input("velocity", "value"), Input("omega", "value"), Input("shape", "value"), Input("model", "value")] ) def update_figure(velocity, omega, shape, model): robot_trajectory = [] all_v, all_w = get_open_loop_commands(get_commands(shape), int(velocity), get_angle(omega)) pose = (0, 0, np.pi/2) for v, w in zip(all_v, all_w): robot_trajectory.append(pose) if model == models[0]: pose = unicycle_model(pose, v, w) else: pose = bicycle_model(pose,v,w) robot_trajectory = np.array(robot_trajectory) dt = pd.DataFrame({'x-axis': robot_trajectory[:,0],'y-axis': robot_trajectory[:,1]}) return px.line(dt, x="x-axis", y="y-axis", title='Simulate vehicular robot motion using unicycle/bicycle model') # Run app and display result inline in the notebook app.run_server(mode='inline') ```
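As a quick way to convince yourself (or your co-workers) that the kinematics behave as expected, the sketch below drives the unicycle model defined above with a constant `v` and `omega` for one full revolution of 24 fifteen-degree steps, which should bring it back to its starting pose. This is an illustrative check, not part of the original dashboard.

```
import numpy as np

v, w, dt = 1.0, np.pi / 12, 1.0
pose = (0.0, 0.0, 0.0)
for _ in range(24):          # 24 steps of 15 degrees = one full turn
    pose = unicycle_model(pose, v, w, dt)

print(np.round(pose, 6))     # expected: approximately (0, 0, 0)
```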
## Regular Expressions

A regular expression is a formal way of specifying a text pattern. More precisely, it is a composition of symbols (characters with special functions) that, grouped together and combined with literal characters, form a sequence: an expression. This expression is interpreted as a rule that signals a match when an input string conforms to it, that is, when it satisfies all of its conditions.

```
# import the re (regular expression) module
# this module provides regular expression (RE) operations
import re

# list of search terms
lista_pesquisa = ['informações', 'Negócios']

# text to parse
texto = 'Existem muitos desafios para o Big Data. O primerio deles é a coleta dos dados, pois fala-se aquie de'\
'enormes quantidades sendo geradas em uma taxa maior do que um servidor comum seria capaz de processar e armazenar.'\
'O segundo desafio é justamente o de processar essas informações. Com elas então distribuídas, a aplicação deve ser'\
'capaz de consumir partes das informações e gerar pequenas quantidades de dados processados, que serão calculados em'\
'conjunto depois para criar o resultado final. Outro desafio é a exibição dos resultados, de forma que as informações'\
'estejam disponíveis de forma clara para os tomadores de decisão.'

# basic data mining example
for item in lista_pesquisa:
    print('Searching for "%s" in:\n\n"%s"' % (item, texto))

    # check whether the search term occurs in the text
    if re.search(item, texto):
        print('\n')
        print('Word found. \n')
        print('\n')
    else:
        print('\n')
        print('Word not found. \n')
        print('\n')

# term used to split a string
split_term = '@'

frase = 'Qual o domínio de alguém com o e-mail: [email protected]'

# split the sentence
re.split(split_term, frase)

def encontrar_padrao(lista, frase):
    for item in lista:
        print('Searching the phrase for: %r' % item)
        print(re.findall(item, frase))
        print('\n')

frase_padrao = 'zLzL..zzzLLL...zLLLzLLL...LzLz..dzzzzz...zLLLLL'

lista_padroes = [
    'zL*',     # z followed by zero or more L
    'zL+',     # z followed by one or more L
    'zL?',     # z followed by zero or one L
    'zL{3}',   # z followed by three L
    'zL{2,3}', # z followed by two to three L (no space inside the braces, otherwise the quantifier is taken literally)
]

encontrar_padrao(lista_padroes, frase_padrao)

frase = 'Esta é uma string com pontuação. Isso pode ser um problema quando fazemos mineração de dados em busca'\
'de padrões! Não seria melhor retirar os sinais ao fim de cada frase?'

# The expression [^!.? ] matches any character that is not punctuation
# (!, ., ?) or a space, and the plus sign (+) requires at least one such
# character. In other words: return only the words in the sentence.
re.findall('[^!.? ]+', frase)

frase = 'Está é uma frase do exemplo. Vamos verificar quais padrões serâo encontradas.'

lista_padroes = ['[a-z]+',      # sequence of lowercase letters
                 '[A-Z]+',      # sequence of uppercase letters
                 '[a-zA-Z]+',   # sequence of lowercase and uppercase letters
                 '[A-Z][a-z]+'] # an uppercase letter followed by lowercase letters

encontrar_padrao(lista_padroes, frase)
```

## Escape codes

It is possible to use specific escape codes to find patterns in the data, such as digits, non-digits, whitespace and so on.

<table border="1" class="docutils">
<colgroup>
<col width="14%" />
<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Code</th>
<th class="head">Meaning</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\d</span></tt></td>
<td>a digit</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\D</span></tt></td>
<td>a non-digit</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\s</span></tt></td>
<td>whitespace (tab, space, newline, etc.)</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\S</span></tt></td>
<td>non-whitespace</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\w</span></tt></td>
<td>alphanumeric</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\W</span></tt></td>
<td>non-alphanumeric</td>
</tr>
</tbody>
</table>

```
# The r prefix before a regular expression prevents the language from
# pre-processing the RE. We place the r modifier ('raw') immediately
# before the quotes.
r'\b'
'\b'

frase = 'Está é uma string com alguns números, como 1287 e um símbolo #hashtag'

lista_padroes = [r'\d+', # sequence of digits
                 r'\D+', # sequence of non-digits
                 r'\s+', # sequence of whitespace
                 r'\S+', # sequence of non-whitespace
                 r'\w+', # alphanumeric characters
                 r'\W+', # non-alphanumeric
                ]

encontrar_padrao(lista_padroes, frase)
```
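As a small illustrative extension (not part of the original notebook), the same punctuation problem discussed earlier can also be handled by compiling a pattern once and then substituting or splitting on it:

```
import re

# Compile the pattern once and reuse it.
pontuacao = re.compile(r'[!.?]')

frase = 'Esta é uma string com pontuação. Isso pode ser um problema!'
print(pontuacao.sub('', frase))   # remove the punctuation marks
print(pontuacao.split(frase))     # or split the text on them
```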
# Node classification with Graph ATtention Network (GAT) <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/gat-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/gat-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table> Import NetworkX and stellar: ``` # install StellarGraph if running on Google Colab import sys if 'google.colab' in sys.modules: %pip install -q stellargraph[demos]==1.3.0b # verify that we're using the correct version of StellarGraph for this notebook import stellargraph as sg try: sg.utils.validate_notebook_version("1.3.0b") except AttributeError: raise ValueError( f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>." ) from None import networkx as nx import pandas as pd import os import stellargraph as sg from stellargraph.mapper import FullBatchNodeGenerator from stellargraph.layer import GAT from tensorflow.keras import layers, optimizers, losses, metrics, Model from sklearn import preprocessing, feature_extraction, model_selection from stellargraph import datasets from IPython.display import display, HTML import matplotlib.pyplot as plt %matplotlib inline ``` ## Loading the CORA network (See [the "Loading from Pandas" demo](../basics/loading-pandas.ipynb) for details on how data can be loaded.) ``` dataset = datasets.Cora() display(HTML(dataset.description)) G, node_subjects = dataset.load() print(G.info()) ``` We aim to train a graph-ML model that will predict the "subject" attribute on the nodes. These subjects are one of 7 categories: ``` set(node_subjects) ``` ### Splitting the data For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to do this. Here we're taking 140 node labels for training, 500 for validation, and the rest for testing. ``` train_subjects, test_subjects = model_selection.train_test_split( node_subjects, train_size=140, test_size=None, stratify=node_subjects ) val_subjects, test_subjects = model_selection.train_test_split( test_subjects, train_size=500, test_size=None, stratify=test_subjects ) ``` Note using stratified sampling gives the following counts: ``` from collections import Counter Counter(train_subjects) ``` The training set has class imbalance that might need to be compensated, e.g., via using a weighted cross-entropy loss in model training, with class weights inversely proportional to class support. However, we will ignore the class imbalance in this example, for simplicity. ### Converting to numeric arrays For our categorical target, we will use one-hot vectors that will be fed into a soft-max Keras layer during training. To do this conversion ... ``` target_encoding = preprocessing.LabelBinarizer() train_targets = target_encoding.fit_transform(train_subjects) val_targets = target_encoding.transform(val_subjects) test_targets = target_encoding.transform(test_subjects) ``` We now do the same for the node attributes we want to use to predict the subject. 
These are the feature vectors that the Keras model will use as input. The CORA dataset contains attributes 'w_x' that correspond to words found in that publication. If a word occurs more than once in a publication the relevant attribute will be set to one, otherwise it will be zero. ## Creating the GAT model in Keras To feed data from the graph to the Keras model we need a generator. Since GAT is a full-batch model, we use the `FullBatchNodeGenerator` class to feed node features and graph adjacency matrix to the model. ``` generator = FullBatchNodeGenerator(G, method="gat") ``` For training we map only the training nodes returned from our splitter and the target values. ``` train_gen = generator.flow(train_subjects.index, train_targets) ``` Now we can specify our machine learning model, we need a few more parameters for this: * the `layer_sizes` is a list of hidden feature sizes of each layer in the model. In this example we use two GAT layers with 8-dimensional hidden node features for the first layer and the 7 class classification output for the second layer. * `attn_heads` is the number of attention heads in all but the last GAT layer in the model * `activations` is a list of activations applied to each layer's output * Arguments such as `bias`, `in_dropout`, `attn_dropout` are internal parameters of the model, execute `?GAT` for details. To follow the GAT model architecture used for Cora dataset in the original paper [Graph Attention Networks. P. Veličković et al. ICLR 2018 https://arxiv.org/abs/1710.10903], let's build a 2-layer GAT model, with the second layer being the classifier that predicts paper subject: it thus should have the output size of `train_targets.shape[1]` (7 subjects) and a softmax activation. ``` gat = GAT( layer_sizes=[8, train_targets.shape[1]], activations=["elu", "softmax"], attn_heads=8, generator=generator, in_dropout=0.5, attn_dropout=0.5, normalize=None, ) ``` Expose the input and output tensors of the GAT model for node prediction, via GAT.in_out_tensors() method: ``` x_inp, predictions = gat.in_out_tensors() ``` ### Training the model Now let's create the actual Keras model with the input tensors `x_inp` and output tensors being the predictions `predictions` from the final dense layer ``` model = Model(inputs=x_inp, outputs=predictions) model.compile( optimizer=optimizers.Adam(lr=0.005), loss=losses.categorical_crossentropy, metrics=["acc"], ) ``` Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the validation set (we need to create another generator over the validation data for this) ``` val_gen = generator.flow(val_subjects.index, val_targets) ``` Create callbacks for early stopping (if validation accuracy stops improving) and best model checkpoint saving: ``` from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint if not os.path.isdir("logs"): os.makedirs("logs") es_callback = EarlyStopping( monitor="val_acc", patience=20 ) # patience is the number of epochs to wait before early stopping in case of no further improvement mc_callback = ModelCheckpoint( "logs/best_model.h5", monitor="val_acc", save_best_only=True, save_weights_only=True ) ``` Train the model ``` history = model.fit( train_gen, epochs=50, validation_data=val_gen, verbose=2, shuffle=False, # this should be False, since shuffling data means shuffling the whole graph callbacks=[es_callback, mc_callback], ) ``` Plot the training history: ``` sg.utils.plot_history(history) ``` Reload the saved weights of the best 
model found during the training (according to validation accuracy) ``` model.load_weights("logs/best_model.h5") ``` Evaluate the best model on the test set ``` test_gen = generator.flow(test_subjects.index, test_targets) test_metrics = model.evaluate(test_gen) print("\nTest Set Metrics:") for name, val in zip(model.metrics_names, test_metrics): print("\t{}: {:0.4f}".format(name, val)) ``` ### Making predictions with the model Now let's get the predictions for all nodes: ``` all_nodes = node_subjects.index all_gen = generator.flow(all_nodes) all_predictions = model.predict(all_gen) ``` These predictions will be the output of the softmax layer, so to get final categories we'll use the `inverse_transform` method of our target attribute specification to turn these values back to the original categories Note that for full-batch methods the batch size is 1 and the predictions have shape $(1, N_{nodes}, N_{classes})$ so we we remove the batch dimension to obtain predictions of shape $(N_{nodes}, N_{classes})$. ``` node_predictions = target_encoding.inverse_transform(all_predictions.squeeze()) ``` Let's have a look at a few predictions after training the model: ``` df = pd.DataFrame({"Predicted": node_predictions, "True": node_subjects}) df.head(20) ``` ## Node embeddings Evaluate node embeddings as activations of the output of the 1st GraphAttention layer in GAT layer stack (the one before the top classification layer predicting paper subjects), and visualise them, coloring nodes by their true subject label. We expect to see nice clusters of papers in the node embedding space, with papers of the same subject belonging to the same cluster. Let's create a new model with the same inputs as we used previously `x_inp` but now the output is the embeddings rather than the predicted class. We find the embedding layer by taking the first graph attention layer in the stack of Keras layers. Additionally note that the weights trained previously are kept in the new model. ``` emb_layer = next(l for l in model.layers if l.name.startswith("graph_attention")) print( "Embedding layer: {}, output shape {}".format(emb_layer.name, emb_layer.output_shape) ) embedding_model = Model(inputs=x_inp, outputs=emb_layer.output) ``` The embeddings can now be calculated using the predict function. Note that the embeddings returned are 64 dimensional features (8 dimensions for each of the 8 attention heads) for all nodes. ``` emb = embedding_model.predict(all_gen) emb.shape ``` Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their true subject label ``` from sklearn.decomposition import PCA from sklearn.manifold import TSNE import pandas as pd import numpy as np ``` Note that the embeddings from the GAT model have a batch dimension of 1 so we `squeeze` this to get a matrix of $N_{nodes} \times N_{emb}$. 
``` X = emb.squeeze() y = np.argmax(target_encoding.transform(node_subjects), axis=1) if X.shape[1] > 2: transform = TSNE # PCA trans = transform(n_components=2) emb_transformed = pd.DataFrame(trans.fit_transform(X), index=list(G.nodes())) emb_transformed["label"] = y else: emb_transformed = pd.DataFrame(X, index=list(G.nodes())) emb_transformed = emb_transformed.rename(columns={"0": 0, "1": 1}) emb_transformed["label"] = y alpha = 0.7 fig, ax = plt.subplots(figsize=(7, 7)) ax.scatter( emb_transformed[0], emb_transformed[1], c=emb_transformed["label"].astype("category"), cmap="jet", alpha=alpha, ) ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$") plt.title( "{} visualization of GAT embeddings for cora dataset".format(transform.__name__) ) plt.show() ``` <table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/gat-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/gat-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
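As an optional follow-up to the GAT demo above (not part of the original notebook), per-class precision and recall on the test split give a more detailed picture than overall accuracy. The sketch below reuses `model`, `generator`, `target_encoding` and `test_subjects` from the cells above.

```
from sklearn.metrics import classification_report

test_probs = model.predict(generator.flow(test_subjects.index))
test_pred_labels = target_encoding.inverse_transform(test_probs.squeeze())
print(classification_report(test_subjects, test_pred_labels))
```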
# Deep Learning on JuiceFS Tutorial - 01. Getting Started JuiceFS is a shared POSIX file system for the cloud. You may replace existing solutions with JuiceFS with zero cost, turns any object store into a shared POSIX file system. Sign up for 1T free quota now at https://juicefs.com Source code of this tutorial can be found in https://github.com/juicedata/juicefs-dl-tutorial ## 0. Requirements It's very easy to setup JuiceFS in your remote HPC machine or Google Colab or CoCalc by insert just one line of command into your Jupyter Notebook: ``` !curl -sL https://juicefs.com/static/juicefs -o juicefs && chmod +x juicefs ``` Here we go, let's try the magic of JuiceFS! ## 1. Mounting your JuiceFS After create your JuiceFS volumn followed by [documentation here](https://juicefs.com/docs/en/getting_started.html), you have two ways to mount your JuiceFS here: ### 1.1 The security way Just run the mount command, and input your access key and secret key from the public cloud or storage provider. This scene is for people who want to collaborate with others and protecting credentials. It can also let your teammates using their JuiceFS volume or share notebook publicly. ``` !./juicefs mount {JFS_VOLUMN_NAME} /jfs ``` ### 1.2 The convenient way However, maybe you are working alone, no worries about leak credentials, and don't want to do annoying input credentials every time restart kernel. Surely, you can save your token and access secrets in your notebook, just change the corresponding fields in the following command to your own. ``` !./juicefs auth --token {JUICEFS_TOKEN} --accesskey {ACCESSKEY} --secretkey {SECRETKEY} JuiceFS !./juicefs mount -h ``` ## 2. Preparing dataset Okay, let's assume you have already mounted your JuiceFS volume. You can test by list your file here. ``` !ls /jfs ``` You have many ways to get data into your JuiceFS volume, like mounting in your local machine and directly drag and drop, or mounting in cloud servers and write data or crawling data and save. Here we took the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) (with a training set of 60,000 images, and a test set of 10,000 images) as an example. If you have not to get the MNIST dataset ready, you can execute the following block: ``` !curl -sL https://s3.amazonaws.com/img-datasets/mnist.npz -o /jfs/mnist.npz ``` ## 3. Training model Once we have got our dataset ready in JuiceFS, we can begin the training process. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D import warnings warnings.simplefilter(action='ignore') ``` Firstly, load our MNIST dataset from JuiceFS volume. ``` with np.load('/jfs/mnist.npz') as f: X_train, y_train = f['x_train'], f['y_train'] X_test, y_test = f['x_test'], f['y_test'] ``` Visualize some data to ensure we have successfully loaded data from JuiceFS. ``` sns.countplot(y_train) fig, ax = plt.subplots(6, 6, figsize = (12, 12)) fig.suptitle('First 36 images in MNIST') fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9]) for x, y in [(i, j) for i in range(6) for j in range(6)]: ax[x, y].imshow(X_train[x + y * 6].reshape((28, 28)), cmap = 'gray') ax[x, y].set_title(y_train[x + y * 6]) ``` Cool! We have successfully loaded the MNIST dataset from JuiceFS! Let's training a CNN model. 
``` batch_size = 128 num_classes = 10 epochs = 12 img_rows, img_cols = 28, 28 X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1) X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(X_test, y_test)) score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ## 4. Saving model Awesome! We have trained a simple CNN model, now let's try to write back the model into JuiceFS. Thanks to the POSIX-compatible feature of JuiceFS, we can easily save the model as usual. No additional effort need. ``` model.save('/jfs/mnist_model.h5') ``` ## 5. Loading model Assuming you want to debug the model in your local machine or want to sync with the production environment. You can load your model from JuiceFS in any machine in real time. JuiceFS's strong consistency feature will ensure all confirmed changes made to your data reflected in different machines immediately. ``` from keras.models import load_model model_from_jfs = load_model('/jfs/mnist_model.h5') ``` We have successfully load our previous model from JuiceFS here, let's randomly pick an image from test dataset and use loader model to make a prediction. ``` import random pick_idx = random.randint(0, X_test.shape[0]) ``` What image have we picked? ``` plt.imshow(X_test[pick_idx].reshape((28, 28)), cmap = 'gray') ``` Let's do prediction using the model loaded from JuiceFS. ``` y_pred = np.argmax(model_from_jfs.predict(np.expand_dims(X_test[pick_idx], axis=0))) print(f'Prediction: {y_pred}') ``` That's it. We will cover some advanced usages and public datasets in the next tutorials.
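One more optional pattern worth noting (an assumption-laden sketch, not from the original tutorial): because `/jfs` behaves like any local POSIX path, Keras callbacks can write checkpoints straight to the shared volume during training, so any machine mounting the same volume sees the latest weights immediately. The snippet reuses the `model` and data arrays defined above.

```
from keras.callbacks import ModelCheckpoint

# Hypothetical checkpoint path on the JuiceFS mount.
checkpoint = ModelCheckpoint('/jfs/mnist_ckpt_{epoch:02d}.h5', verbose=1)

model.fit(X_train, y_train,
          batch_size=batch_size,
          epochs=2,                      # short demo run
          validation_data=(X_test, y_test),
          callbacks=[checkpoint])
```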
[source](../../api/alibi_detect.cd.mmd_online.rst) # Online Maximum Mean Discrepancy ## Overview The online [Maximum Mean Discrepancy (MMD)](http://jmlr.csail.mit.edu/papers/v13/gretton12a.html) detector is a kernel-based method for online drift detection. The MMD is a distance-based measure between 2 distributions *p* and *q* based on the mean embeddings $\mu_{p}$ and $\mu_{q}$ in a reproducing kernel Hilbert space $F$: $$ MMD(F, p, q) = || \mu_{p} - \mu_{q} ||^2_{F} $$ Given reference samples $\{X_i\}_{i=1}^{N}$ and test samples $\{Y_i\}_{i=t}^{t+W}$ we may compute an unbiased estimate $\widehat{MMD}^2(F, \{X_i\}_{i=1}^N, \{Y_i\}_{i=t}^{t+W})$ of the squared MMD between the two underlying distributions. The estimate can be updated at low-cost as new data points enter into the test-window. We use by default a [radial basis function kernel](https://en.wikipedia.org/wiki/Radial_basis_function_kernel), but users are free to pass their own kernel of preference to the detector. Online detectors assume the reference data is large and fixed and operate on single data points at a time (rather than batches). These data points are passed into the test-window and a two-sample test-statistic (in this case squared MMD) between the reference data and test-window is computed at each time-step. When the test-statistic exceeds a preconfigured threshold, drift is detected. Configuration of the thresholds requires specification of the expected run-time (ERT) which specifies how many time-steps that the detector, on average, should run for in the absence of drift before making a false detection. It also requires specification of a test-window size, with smaller windows allowing faster response to severe drift and larger windows allowing more power to detect slight drift. For high-dimensional data, we typically want to reduce the dimensionality before passing it to the detector. Following suggestions in [Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift](https://arxiv.org/abs/1810.11953), we incorporate Untrained AutoEncoders (UAE) and black-box shift detection using the classifier's softmax outputs ([BBSDs](https://arxiv.org/abs/1802.03916)) as out-of-the box preprocessing methods and note that [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis) can also be easily implemented using `scikit-learn`. Preprocessing methods which do not rely on the classifier will usually pick up drift in the input data, while BBSDs focuses on label shift. Detecting input data drift (covariate shift) $\Delta p(x)$ for text data requires a custom preprocessing step. We can pick up changes in the semantics of the input by extracting (contextual) embeddings and detect drift on those. Strictly speaking we are not detecting $\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract. The library contains functionality to leverage pre-trained embeddings from [HuggingFace's transformer package](https://github.com/huggingface/transformers) but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in the [Text drift detection on IMDB movie reviews](../../examples/cd_text_imdb.ipynb) notebook. ## Usage ### Initialize Arguments: * `x_ref`: Data used as reference distribution. * `ert`: The expected run-time in the absence of drift, starting from *t=0*. * `window_size`: The size of the sliding test-window used to compute the test-statistic. 
Smaller windows focus on responding quickly to severe drift, larger windows focus on ability to detect slight drift. Keyword arguments: * `backend`: Backend used for the MMD implementation and configuration. * `preprocess_fn`: Function to preprocess the data before computing the data drift metrics. * `kernel`: Kernel used for the MMD computation, defaults to Gaussian RBF kernel. * `sigma`: Optionally set the GaussianRBF kernel bandwidth. Can also pass multiple bandwidth values as an array. The kernel evaluation is then averaged over those bandwidths. If `sigma` is not specified, the 'median heuristic' is adopted whereby `sigma` is set as the median pairwise distance between reference samples. * `n_bootstraps`: The number of bootstrap simulations used to configure the thresholds. The larger this is the more accurately the desired ERT will be targeted. Should ideally be at least an order of magnitude larger than the ERT. * `verbose`: Whether or not to print progress during configuration. * `input_shape`: Shape of input data. * `data_type`: Optionally specify the data type (tabular, image or time-series). Added to metadata. Additional PyTorch keyword arguments: * `device`: Device type used. The default None tries to use the GPU and falls back on CPU if needed. Can be specified by passing either 'cuda', 'gpu' or 'cpu'. Only relevant for 'pytorch' backend. Initialized drift detector example: ```python from alibi_detect.cd import MMDDriftOnline cd = MMDDriftOnline(x_ref, ert, window_size, backend='tensorflow') ``` The same detector in PyTorch: ```python cd = MMDDriftOnline(x_ref, ert, window_size, backend='pytorch') ``` We can also easily add preprocessing functions for both frameworks. The following example uses a randomly initialized image encoder in PyTorch: ```python from functools import partial import torch import torch.nn as nn from alibi_detect.cd.pytorch import preprocess_drift device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # define encoder encoder_net = nn.Sequential( nn.Conv2d(3, 64, 4, stride=2, padding=0), nn.ReLU(), nn.Conv2d(64, 128, 4, stride=2, padding=0), nn.ReLU(), nn.Conv2d(128, 512, 4, stride=2, padding=0), nn.ReLU(), nn.Flatten(), nn.Linear(2048, 32) ).to(device).eval() # define preprocessing function preprocess_fn = partial(preprocess_drift, model=encoder_net, device=device, batch_size=512) cd = MMDDriftOnline(x_ref, ert, window_size, backend='pytorch', preprocess_fn=preprocess_fn) ``` The same functionality is supported in TensorFlow and the main difference is that you would import from `alibi_detect.cd.tensorflow import preprocess_drift`. Other preprocessing steps such as the output of hidden layers of a model or extracted text embeddings using transformer models can be used in a similar way in both frameworks. TensorFlow example for the hidden layer output: ```python from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift model = # TensorFlow model; tf.keras.Model or tf.keras.Sequential preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128) cd = MMDDriftOnline(x_ref, ert, window_size, backend='tensorflow', preprocess_fn=preprocess_fn) ``` Check out the [Online Drift Detection on the Wine Quality Dataset](../../examples/cd_online_wine.ipynb) example for more details. 
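For symmetry with the PyTorch snippet above, a TensorFlow version of a randomly initialized image encoder could look like the sketch below. The layer sizes and the `(32, 32, 3)` input shape are illustrative assumptions, not values prescribed by the library:

```python
from functools import partial
import tensorflow as tf
from alibi_detect.cd.tensorflow import preprocess_drift

# randomly initialized encoder mapping images to a 32-dim representation (illustrative architecture)
encoder_net = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(32, 32, 3)),  # assumed input shape
    tf.keras.layers.Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
    tf.keras.layers.Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
    tf.keras.layers.Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32)
])

# preprocessing function wrapping the encoder
preprocess_fn = partial(preprocess_drift, model=encoder_net, batch_size=512)

cd = MMDDriftOnline(x_ref, ert, window_size, backend='tensorflow', preprocess_fn=preprocess_fn)
```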
Alibi Detect also includes custom text preprocessing steps in both TensorFlow and PyTorch based on Hugging Face's [transformers](https://github.com/huggingface/transformers) package:

```python
from functools import partial

import torch
import torch.nn as nn
from transformers import AutoTokenizer
from alibi_detect.cd.pytorch import preprocess_drift
from alibi_detect.models.pytorch import TransformerEmbedding

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
embedding_type = 'hidden_state'
layers = [5, 6, 7]
embed = TransformerEmbedding(model_name, embedding_type, layers)

enc_dim = 32  # output embedding dimension (illustrative value)
model = nn.Sequential(embed, nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, enc_dim)).to(device).eval()
preprocess_fn = partial(preprocess_drift, model=model, tokenizer=tokenizer, max_len=512, batch_size=32)

# initialise drift detector
cd = MMDDriftOnline(x_ref, ert, window_size, backend='pytorch', preprocess_fn=preprocess_fn)
```

Again the same functionality is supported in TensorFlow but with `from alibi_detect.cd.tensorflow import preprocess_drift` and `from alibi_detect.models.tensorflow import TransformerEmbedding` imports.

### Detect Drift

We detect data drift by sequentially calling `predict` on single instances `x_t` (no batch dimension) as they arrive. We can return the test-statistic and the threshold by setting `return_test_stat` to *True*.

The prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata while `data` is also a dictionary which contains the actual predictions stored in the following keys:

* `is_drift`: 1 if the test-window (of the most recent `window_size` observations) has drifted from the reference data and 0 otherwise.
* `time`: The number of observations that have been passed to the detector so far as test instances.
* `ert`: The expected run-time the detector was configured to run at in the absence of drift.
* `test_stat`: MMD^2 metric between the reference data and the test-window if `return_test_stat` equals *True*.
* `threshold`: The value the test-statistic is required to exceed for drift to be detected if `return_test_stat` equals *True*.

```python
preds = cd.predict(x_t, return_test_stat=True)
```

Resetting the detector with the same reference data and thresholds but with a new and empty test-window is straightforward:

```python
cd.reset()
```

## Examples

[Online Drift Detection on the Wine Quality Dataset](../../examples/cd_online_wine.ipynb)

[Online Drift Detection on the Camelyon medical imaging dataset](../../examples/cd_online_camelyon.ipynb)
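To make the prediction loop concrete, here is a minimal sketch of how the detector might be driven over a stream of incoming instances. The `stream` iterable and the `handle_drift()` callback are placeholders for whatever data source and alerting logic you have in place:

```python
# stream: any iterable yielding single instances x_t (no batch dimension) -- placeholder
for x_t in stream:
    preds = cd.predict(x_t, return_test_stat=True)
    if preds['data']['is_drift']:
        handle_drift(preds['data'])  # placeholder: alert, log, trigger retraining, ...
        # start monitoring afresh with the same reference data and thresholds
        cd.reset()
```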
# NLP 2 - Text Preprocessing and Modern Models

Hey everyone! In the last class we had an introduction to the world of NLP: the BoW (Bag of Words) model and the TF-IDF algorithm. Although very practical, we observed a few characteristics of NLP and of these techniques:

- NLP is naturally a high-dimensional problem, which pushes us into the "curse of dimensionality"
- The BoW model, even with the concept of N-Grams, struggles to carry sequential information about words, since it only captures sequences of terms, not of concepts
- Understanding and implementing linguistic concepts is essential for the preprocessing and modeling to perform well. In that sense, NLP is guided by linguistic understanding

<br>

That said, today the NLP world has tools, approaches and technologies that implement linguistic concepts more efficiently so we can build better models. In this class we will see these techniques with the SpaCy and gensim libraries and the word2vec architecture! If you don't have SpaCy or gensim on your machine, uncomment and run the cells below:

```
# ! pip install spacy
# ! pip install gensim
```

## SpaCy Basics

```
import spacy

# We need to instantiate an NLP object specifying which language it will use.
# In this case, let's start with Portuguese
nlp = spacy.load('pt')
```

Oops, the command above raised an error! SpaCy not only needs to be installed, its language packages need to be downloaded as well. Uncomment and run the cells below to download the English and Portuguese packages:

```
# ! python -m spacy download en
# ! python -m spacy download pt
```

OK! Now everything is set for us to start working with SpaCy. Let's instantiate the linguistic tool for Portuguese:

```
nlp = spacy.load('pt')

# Let's create a document for testing and demonstrating SpaCy!
# It is very important that the texts passed in are unicode encoded,
# hence the u before the string
doc = nlp(u'Você encontrou o livro que eu te falei, Carla?')
doc.text.split()
```

OK, we have a punctuation problem here: the split method (or REGEX in general) does not understand that the comma is an entity of its own - let's call these entities tokens. So it doesn't make much sense to break the text up with those methods. Let's use a list comprehension instead. The `nlp` object understands the difference between them, so when we use the tokens inside the document structure we get a more coherent split:

```
tokens = [token for token in doc]
tokens
```

To extract the string of each token, we use `orth_`:

```
[token.orth_ for token in doc]
```

We can see that SpaCy really does understand the difference between punctuation and words:

```
[token.orth_ for token in doc if not token.is_punct]
```

A very important concept in NLP is similarity. How do we measure whether 2 words carry similar information? This can be useful, for example, to compress our text, or to discover the meaning of unknown words, terms and slang. For that, we use the `.similarity()` method of one token with respect to another:

```
print(tokens[0].similarity(tokens[5]))
print(tokens[0].similarity(tokens[3]))
```

In the cell below, feel free to run whatever similarity tests you want in Portuguese! When we load a language package, we are also loading notions of the grammatical, syntactic and syntagmatic structure of the language.
We can, for example, use the `.pos_` attribute, for Part of Speech (POS), to extract the role of each token in the sentence:

```
[(token.orth_, token.pos_) for token in doc]
```

OK, but how do we deal with the dimensionality problem? We can use 2 concepts called **lemmatization** and **stemming**. Lemmatization, in linguistics, is the process of grouping the inflected forms of a word so they can be analysed as a single item, identified by the word's lemma, or dictionary form. Stemming, on the other hand, looks for the root (radical) of the word:

```
[token.lemma_ for token in doc if token.pos_ == 'VERB'] # lemmatization
```

In the cell below, create a new doc and apply lemmatization to its verbs:

```
doc = nlp(u'encontrei, encontraram, encontrarão, encontrariam')
[token.lemma_ for token in doc if token.pos_ == 'VERB'] # lemmatization

doc = nlp(u'encontrar encontrei')
tokens = [token for token in doc]
tokens[0].is_ancestor(tokens[1]) # checking shared roots
```

Finally, we want to extract entities from a sentence. Think of entities as the characters in a doc. We can access the entities of a sentence by calling `ents` on a doc:

```
doc = nlp(u'Machado de Assis um dos melhores escritores do Brasil, foi o primeiro presidente da Academia Brasileira de Letras')
doc.ents
```

When analysing the entities of a sentence, we can even tell which type of entity each one belongs to:

```
[(entity, entity.label_) for entity in doc.ents]

wiki_obama = """Barack Obama is an American politician who served as the 44th President of the United States from 2009 to 2017. He is the first African American to have served as president, as well as the first born outside the contiguous United States."""
```

And this works for whichever language package you use:

```
nlp = spacy.load('en')
nlp_obama = nlp(wiki_obama)
[(i, i.label_) for i in nlp_obama.ents]
```

## SpaCy + Scikit Learn

To demonstrate how to preprocess a linguistic dataset and how to connect SpaCy and sklearn, let's build a simple emotion recognizer:

```
# Stopwords are tokens of a language that carry little information, such as connectors and punctuation.
# Be careful when using this!
# For example, @ and # are extremely important punctuation marks in a use case
# that works with Twitter data
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS as stopwords

# Our BoW model
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.base import TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
import string

punctuations = string.punctuation

from spacy.lang.en import English
parser = English()

# Custom transformer using spaCy
class predictors(TransformerMixin):
    def transform(self, X, **transform_params):
        return [clean_text(text) for text in X]
    def fit(self, X, y=None, **fit_params):
        return self
    def get_params(self, deep=True):
        return {}

# Let's clean the text by lowercasing everything
def clean_text(text):
    return text.strip().lower()
```

Let's create a function that tokenizes our dataset, already handling lemmatization and removing stopwords:

```
def spacy_tokenizer(sentence):
    tokens = parser(sentence)
    tokens = [tok.lemma_.lower().strip() if tok.lemma_ != "-PRON-" else tok.lower_ for tok in tokens]
    tokens = [tok for tok in tokens if (tok not in stopwords and tok not in punctuations)]
    return tokens

# create vectorizer object to generate feature vectors, using our custom spaCy tokenizer
vectorizer = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range=(1,2))
classifier = LinearSVC()

# Create the pipeline to clean, tokenize, vectorize, and classify
pipe = Pipeline([("cleaner", predictors()),
                 ('vectorizer', vectorizer),
                 ('classifier', classifier)])

# Load sample data
train = [('I love this sandwich.', 'pos'),
         ('this is an amazing place!', 'pos'),
         ('I feel very good about these beers.', 'pos'),
         ('this is my best work.', 'pos'),
         ("what an awesome view", 'pos'),
         ('I do not like this restaurant', 'neg'),
         ('I am tired of this stuff.', 'neg'),
         ("I can't deal with this", 'neg'),
         ('he is my sworn enemy!', 'neg'),
         ('my boss is horrible.', 'neg')]
test = [('the beer was good.', 'pos'),
        ('I do not enjoy my job', 'neg'),
        ("I ain't feelin dandy today.", 'neg'),
        ("I feel amazing!", 'pos'),
        ('Gary is a good friend of mine.', 'pos'),
        ("I can't believe I'm doing this.", 'neg')]

# Create model and measure accuracy
pipe.fit([x[0] for x in train], [x[1] for x in train])
pred_data = pipe.predict([x[0] for x in test])
for (sample, pred) in zip(test, pred_data):
    print(sample, pred)
print("Accuracy:", accuracy_score([x[1] for x in test], pred_data))
```

Nice! We managed to connect SpaCy and sklearn into a simple sentiment analysis tool. Now let's move on to a more complex problem:

<img src="imgs/simpsons.jpg" align="left" width="60%">

## Simpsons Dataset

This __[dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data/downloads/simpsons_script_lines.csv/1)__ is quite famous in NLP; it contains characters, locations, spoken lines and other info from 600+ episodes of The Simpsons! We will build a model that understands the language of The Simpsons and lets us perform linguistic operations on it.

```
import re  # For preprocessing
import pandas as pd
from time import time
from collections import defaultdict  # For word frequency

import logging  # Setting up logging to monitor gensim.
# A data scientist survives on logs!
logging.basicConfig(format="%(levelname)s - %(asctime)s: %(message)s", datefmt= '%H:%M:%S', level=logging.INFO)

df = pd.read_csv('./data/simpsons_script_lines.csv', error_bad_lines=False, usecols = ['raw_character_text', 'spoken_words'])
df.shape
df.head()
```

Let's do a sanity check and see whether we have null values:

```
df.isnull().sum()
```

OK, the trusty `.dropna()` to clean up our dataset. In NLP use cases we can do this at this scale:

```
df = df.dropna().reset_index(drop=True)
df.isnull().sum()

nlp = spacy.load('en', disable=['ner', 'parser']) # disabling Named Entity Recognition for speed

def cleaning(doc):
    # Lemmatizes and removes stopwords
    # doc needs to be a spacy Doc object
    txt = [token.lemma_ for token in doc if not token.is_stop]
    # Word2Vec uses context words to learn the vector representation of a target word,
    # if a sentence is only one or two words long,
    # the benefit for the training is very small
    if len(txt) > 2:
        return ' '.join(txt)
```

Let's strip out the non-alphabetic characters:

```
brief_cleaning = (re.sub("[^A-Za-z']+", ' ', str(row)).lower() for row in df['spoken_words']) # REGEX
```

OK, let's run our cleaning function over the whole dataset! Notice how the shape changes. SpaCy lets us build pipelines for this process:

```
t = time()
txt = [cleaning(doc) for doc in nlp.pipe(brief_cleaning, batch_size=5000, n_threads=-1)]
print('Time to clean up everything: {} mins'.format(round((time() - t) / 60, 2)))

df_clean = pd.DataFrame({'clean': txt})
df_clean = df_clean.dropna().drop_duplicates()
df_clean.shape
```

Time to use the Gensim library. Gensim is an open-source library for unsupervised topic modelling and natural language processing, using modern statistical machine learning:

```
from gensim.models.phrases import Phrases, Phraser

sent = [row.split() for row in df_clean['clean']]
phrases = Phrases(sent, min_count=30, progress_per=10000)
```

Let's use Gensim's __[bigrams](https://radimrehurek.com/gensim/models/phrases.html)__ to detect common expressions, such as Bart Simpson and Mr Burns:

```
bigram = Phraser(phrases)
sentences = bigram[sent]

word_freq = defaultdict(int)
for sent in sentences:
    for i in sent:
        word_freq[i] += 1
len(word_freq)

sorted(word_freq, key=word_freq.get, reverse=True)[:10]
```

Let's build Gensim's __[word2vec](https://radimrehurek.com/gensim/models/word2vec.html)__ model. Before that, let's understand the model:

<img src="imgs/word2vec.png" align="left" width="80%">

The word2vec model was implemented by the Google Research team in 2013 with the goal of vectorizing tokens and entities. Its premise is that similar terms appear in similar contexts; therefore, if 2 terms appear in the same context, there is a good chance they carry related information. This way, we can build an n-dimensional space of terms and perform vector operations on those words!

```
import multiprocessing
cores = multiprocessing.cpu_count() # Count the number of cores in a computer

from gensim.models import Word2Vec
w2v_model = Word2Vec(min_count=20,
                     window=2,
                     size=300,
                     sample=6e-5,
                     alpha=0.03,
                     min_alpha=0.0007,
                     negative=20,
                     workers=cores-1)
```

The hyperparameters used are:

- min_count = int - Ignores all words with total absolute frequency lower than this - (2, 100)
- window = int - The maximum distance between the current and predicted word within a sentence. E.g.
window words on the left and window words on the right of our target - (2, 10)
- size = int - Dimensionality of the feature vectors. - (50, 300)
- sample = float - The threshold for configuring which higher-frequency words are randomly downsampled. Highly influential. - (0, 1e-5)
- alpha = float - The initial learning rate - (0.01, 0.05)
- min_alpha = float - Learning rate will linearly drop to min_alpha as training progresses. To set it: alpha - (min_alpha * epochs) ~ 0.00
- negative = int - If > 0, negative sampling will be used; the int for negative specifies how many "noise words" should be drawn. If set to 0, no negative sampling is used. - (5, 20)
- workers = int - Use this many worker threads to train the model (=faster training with multicore machines)

<br>

With the model instantiated, we need to build our **corpus**, or vocabulary. Let's feed the model with the docs:

```
t = time()
w2v_model.build_vocab(sentences, progress_per=10000)
print('Time to build vocab: {} mins'.format(round((time() - t) / 60, 2)))
```

All set! Let's train our model!

```
t = time()
w2v_model.train(sentences, total_examples=w2v_model.corpus_count, epochs=30, report_delay=1)
print('Time to train the model: {} mins'.format(round((time() - t) / 60, 2)))

w2v_model.init_sims(replace=True)

w2v_model.wv.most_similar(positive=["homer"])
w2v_model.wv.most_similar(positive=["marge"])
w2v_model.wv.most_similar(positive=["bart"])
w2v_model.wv.similarity('maggie', 'baby')
w2v_model.wv.similarity('bart', 'nelson')
w2v_model.wv.doesnt_match(['jimbo', 'milhouse', 'kearney'])
w2v_model.wv.doesnt_match(["nelson", "bart", "milhouse"])
w2v_model.wv.most_similar(positive=["woman", "homer"], negative=["marge"], topn=3)
w2v_model.wv.most_similar(positive=["woman", "bart"], negative=["man"], topn=3)

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("darkgrid")
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def tsnescatterplot(model, word, list_names):
    """ Plot in seaborn the results from the t-SNE dimensionality reduction algorithm of the vectors
    of a query word, its list of most similar words, and a list of words.
""" arrays = np.empty((0, 300), dtype='f') word_labels = [word] color_list = ['red'] # adds the vector of the query word arrays = np.append(arrays, model.wv.__getitem__([word]), axis=0) # gets list of most similar words close_words = model.wv.most_similar([word]) # adds the vector for each of the closest words to the array for wrd_score in close_words: wrd_vector = model.wv.__getitem__([wrd_score[0]]) word_labels.append(wrd_score[0]) color_list.append('blue') arrays = np.append(arrays, wrd_vector, axis=0) # adds the vector for each of the words from list_names to the array for wrd in list_names: wrd_vector = model.wv.__getitem__([wrd]) word_labels.append(wrd) color_list.append('green') arrays = np.append(arrays, wrd_vector, axis=0) # Reduces the dimensionality from 300 to 50 dimensions with PCA reduc = PCA(n_components=19).fit_transform(arrays) # Finds t-SNE coordinates for 2 dimensions np.set_printoptions(suppress=True) Y = TSNE(n_components=2, random_state=0, perplexity=15).fit_transform(reduc) # Sets everything up to plot df = pd.DataFrame({'x': [x for x in Y[:, 0]], 'y': [y for y in Y[:, 1]], 'words': word_labels, 'color': color_list}) fig, _ = plt.subplots() fig.set_size_inches(9, 9) # Basic plot p1 = sns.regplot(data=df, x="x", y="y", fit_reg=False, marker="o", scatter_kws={'s': 40, 'facecolors': df['color'] } ) # Adds annotations one by one with a loop for line in range(0, df.shape[0]): p1.text(df["x"][line], df['y'][line], ' ' + df["words"][line].title(), horizontalalignment='left', verticalalignment='bottom', size='medium', color=df['color'][line], weight='normal' ).set_size(15) plt.xlim(Y[:, 0].min()-50, Y[:, 0].max()+50) plt.ylim(Y[:, 1].min()-50, Y[:, 1].max()+50) plt.title('t-SNE visualization for {}'.format(word.title())) tsnescatterplot(w2v_model, 'homer', ['dog', 'bird', 'ah', 'maude', 'bob', 'mel', 'apu', 'duff']) tsnescatterplot(w2v_model, 'maggie', [i[0] for i in w2v_model.wv.most_similar(negative=["maggie"])]) tsnescatterplot(w2v_model, "mr_burn", [t[0] for t in w2v_model.wv.most_similar(positive=["mr_burn"], topn=20)][10:]) ```
``` from IPython.display import Markdown as md ### change to reflect your notebook _nb_loc = "07_training/07a_ingest.ipynb" _nb_title = "Writing an efficient ingest Loop" ### no need to change any of this _nb_safeloc = _nb_loc.replace('/', '%2F') md(""" <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name={1}&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fblob%2Fmaster%2F{2}&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fpractical-ml-vision-book%2Fraw%2Fmaster%2F{2}"> <img src="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png"/> Run in AI Platform Notebook</a> </td> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/{0}"> <img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> """.format(_nb_loc, _nb_title, _nb_safeloc)) ``` # Efficient Ingest In this notebook, we speed the ingest of training/evaluation data into the model. ## Enable GPU and set up helper functions This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU. On Colab: - Navigate to Edit→Notebook Settings - Select GPU from the Hardware Accelerator drop-down On Cloud AI Platform Notebooks: - Navigate to https://console.cloud.google.com/ai-platform/notebooks - Create an instance with a GPU or select your instance and add a GPU Next, we'll confirm that we can connect to the GPU with tensorflow: ``` import tensorflow as tf print('TensorFlow version' + tf.version.VERSION) print('Built with GPU support? ' + ('Yes!' if tf.test.is_built_with_cuda() else 'Noooo!')) print('There are {} GPUs'.format(len(tf.config.experimental.list_physical_devices("GPU")))) device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': raise SystemError('GPU device not found') print('Found GPU at: {}'.format(device_name)) ``` ## Original code This is the original code, from [../06_preprocessing/06e_colordistortion.ipynb](../06_preprocessing/06e_colordistortion.ipynb) We have a few variations of creating a preprocessed dataset. 
``` import matplotlib.pylab as plt import numpy as np import tensorflow as tf import tensorflow_hub as hub import os # Load compressed models from tensorflow_hub os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'COMPRESSED' from tensorflow.data.experimental import AUTOTUNE IMG_HEIGHT = 448 # note *twice* what we used to have IMG_WIDTH = 448 IMG_CHANNELS = 3 CLASS_NAMES = 'daisy dandelion roses sunflowers tulips'.split() def training_plot(metrics, history): f, ax = plt.subplots(1, len(metrics), figsize=(5*len(metrics), 5)) for idx, metric in enumerate(metrics): ax[idx].plot(history.history[metric], ls='dashed') ax[idx].set_xlabel("Epochs") ax[idx].set_ylabel(metric) ax[idx].plot(history.history['val_' + metric]); ax[idx].legend([metric, 'val_' + metric]) class _Preprocessor: def __init__(self): # nothing to initialize pass def read_from_tfr(self, proto): feature_description = { 'image': tf.io.VarLenFeature(tf.float32), 'shape': tf.io.VarLenFeature(tf.int64), 'label': tf.io.FixedLenFeature([], tf.string, default_value=''), 'label_int': tf.io.FixedLenFeature([], tf.int64, default_value=0), } rec = tf.io.parse_single_example( proto, feature_description ) shape = tf.sparse.to_dense(rec['shape']) img = tf.reshape(tf.sparse.to_dense(rec['image']), shape) label_int = rec['label_int'] return img, label_int def read_from_jpegfile(self, filename): # same code as in 05_create_dataset/jpeg_to_tfrecord.py img = tf.io.read_file(filename) img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS) img = tf.image.convert_image_dtype(img, tf.float32) return img def preprocess(self, img): return tf.image.resize_with_pad(img, IMG_HEIGHT, IMG_WIDTH) def create_preproc_dataset_plain(pattern): preproc = _Preprocessor() trainds = tf.data.TFRecordDataset( [filename for filename in tf.io.gfile.glob(pattern)], compression_type='GZIP' ).map(preproc.read_from_tfr).map( lambda img, label: (preproc.preprocess(img), label) ) return trainds # note: addition of AUTOTUNE to the map() calls def create_preproc_dataset_parallelmap(pattern): preproc = _Preprocessor() def _preproc_img_label(img, label): return (preproc.preprocess(img), label) trainds = ( tf.data.TFRecordDataset( [filename for filename in tf.io.gfile.glob(pattern)], compression_type='GZIP' ) .map(preproc.read_from_tfr, num_parallel_calls=AUTOTUNE) .map(_preproc_img_label, num_parallel_calls=AUTOTUNE) ) return trainds # note: splits the files into two halves and interleaves datasets def create_preproc_dataset_interleave(pattern, num_parallel=None): preproc = _Preprocessor() files = [filename for filename in tf.io.gfile.glob(pattern)] if len(files) > 1: print("Interleaving the reading of {} files.".format(len(files))) def _create_half_ds(x): if x == 0: half = files[:(len(files)//2)] else: half = files[(len(files)//2):] return tf.data.TFRecordDataset(half, compression_type='GZIP') trainds = tf.data.Dataset.range(2).interleave( _create_half_ds, num_parallel_calls=AUTOTUNE) else: trainds = tf.data.TFRecordDataset(files, compression_type='GZIP') def _preproc_img_label(img, label): return (preproc.preprocess(img), label) trainds = (trainds .map(preproc.read_from_tfr, num_parallel_calls=num_parallel) .map(_preproc_img_label, num_parallel_calls=num_parallel) ) return trainds def create_preproc_image(filename): preproc = _Preprocessor() img = preproc.read_from_jpegfile(filename) return preproc.preprocess(img) class RandomColorDistortion(tf.keras.layers.Layer): def __init__(self, contrast_range=[0.5, 1.5], brightness_delta=[-0.2, 0.2], **kwargs): super(RandomColorDistortion, 
self).__init__(**kwargs) self.contrast_range = contrast_range self.brightness_delta = brightness_delta def call(self, images, training=None): if not training: return images contrast = np.random.uniform( self.contrast_range[0], self.contrast_range[1]) brightness = np.random.uniform( self.brightness_delta[0], self.brightness_delta[1]) images = tf.image.adjust_contrast(images, contrast) images = tf.image.adjust_brightness(images, brightness) images = tf.clip_by_value(images, 0, 1) return images ``` ## Speeding up the reading of data To try it out, we'll simply read through the data several times and compute some quantity on the images. ``` def loop_through_dataset(ds, nepochs): lowest_mean = tf.constant(1.) for epoch in range(nepochs): thresh = np.random.uniform(0.3, 0.7) # random threshold count = 0 sumsofar = tf.constant(0.) for (img, label) in ds: # mean of channel values > thresh mean = tf.reduce_mean(tf.where(img > thresh, img, 0)) sumsofar = sumsofar + mean count = count + 1 if count%100 == 0: print('.', end='') mean = sumsofar/count print(mean) if mean < lowest_mean: lowest_mean = mean return lowest_mean PATTERN_SUFFIX, NUM_EPOCHS = '-0000[01]-*', 2 # 2 files, 2 epochs #PATTERN_SUFFIX, NUM_EPOCHS = '-*', 20 # 16 files, 20 epochs %%time ds = create_preproc_dataset_plain( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ) loop_through_dataset(ds, NUM_EPOCHS) %%time # parallel map ds = create_preproc_dataset_parallelmap( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ) loop_through_dataset(ds, NUM_EPOCHS) %%time # with interleave ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=None ) loop_through_dataset(ds, NUM_EPOCHS) %%time # with interleave and parallel mpas ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=AUTOTUNE ) loop_through_dataset(ds, NUM_EPOCHS) ``` When I did this, this is what I got: | Method | CPU time | Wall time | | ---------------------- | ----------- | ------------ | | Plain | 7.53s | 7.99s | | Parallel Map | 8.30s | 5.94s | | Interleave | 8.60s | 5.47s | | Interleave+Parallel Map| 8.44s | 5.23s | ## ML model The computation above was pretty cheap involving merely adding up all the pixel values. What happens if we need a bit more complexity (gradient calc, etc.)? 
``` def train_simple_model(ds, nepochs): model = tf.keras.Sequential([ tf.keras.layers.Flatten( input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS)), #tf.keras.layers.Dense(32, activation='relu'), tf.keras.layers.Dense(len(CLASS_NAMES), activation='softmax') ]) model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy( from_logits=False), metrics=['accuracy']) model.fit(ds, epochs=nepochs) %%time ds = create_preproc_dataset_plain( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # parallel map ds = create_preproc_dataset_parallelmap( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # with interleave ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=None ).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # with interleave and parallel mpas ds = create_preproc_dataset_interleave( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX, num_parallel=AUTOTUNE ).batch(1) train_simple_model(ds, NUM_EPOCHS) ``` We note that the improvement remains: | Method | CPU time | Wall time | | -----------------------| ----------- | ------------ | | Plain | 9.91s | 9.39s | | Parallel Map | 10.7s | 8.17s | | Interleave | 10.5s | 7.54s | | Interleave+Parallel Map| 10.3s | 7.17s | ## Speeding up the handling of data ``` # alias to the more efficient one def create_preproc_dataset(pattern): return create_preproc_dataset_interleave(pattern, num_parallel=AUTOTUNE) %%time # add prefetching ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(1) train_simple_model(ds, NUM_EPOCHS) %%time # Add batching of different sizes ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(8) train_simple_model(ds, NUM_EPOCHS) %%time # Add batching of different sizes ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(16) train_simple_model(ds, NUM_EPOCHS) %%time # Add batching of different sizes ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).batch(32) train_simple_model(ds, NUM_EPOCHS) %%time # add caching: always do this optimization last. ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).cache().batch(32) train_simple_model(ds, NUM_EPOCHS) %%time # add caching: always do this optimization last. ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).prefetch(AUTOTUNE).cache().batch(32) train_simple_model(ds, NUM_EPOCHS) %%time # add caching: always do this optimization last. 
ds = create_preproc_dataset( 'gs://practical-ml-vision-book/flowers_tfr/train' + PATTERN_SUFFIX ).cache().prefetch(AUTOTUNE).batch(32) train_simple_model(ds, NUM_EPOCHS) ``` Adding to the previous table: | Method | CPU time | Wall time | | -----------------------| ----------- | ------------ | | Plain | 9.91s | 9.39s | | Parallel Map | 10.7s | 8.17s | | Interleave | 10.5s | 7.54s | | Interleave+Parallel Map| 10.3s | 7.17s | | Interleave + Parallel, and then adding: | - | - | | Prefetch | 11.4s | 8.09s | | Batch size 8 | 9.56s | 6.90s | | Batch size 16 | 9.90s | 6.70s | | Batch size 32 | 9.68s | 6.37s | | Interleave + Parallel + batchsize 32, and then adding: | - | - | | Cache | 6.16s | 4.36s | | Prefetch + Cache | 5.76s | 4.04s | | Cache + Prefetch | 5.65s | 4.19s | So, the best option is: <pre> ds = create_preproc_dataset_interleave(pattern, num_parallel=AUTOTUNE).prefetch(AUTOTUNE).cache().batch(32) </pre> ## License Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
## OOP A programming paradigm that provides a means of structuring programs so that properties and behaviors are bundled into individual objects. Pros: * code modularisation thus ease in troubleshooting. * reuse of code through inheritance. * flexibility through polymorphism (multiple usage). ### 1. Class Definition > Classes define functions called methods, which identify the behaviors and actions that an object created from the class can perform with its data. ``` # function definition def fix_laptop(harddisk, money): # check if laptop is ok if 'not' in harddisk: print('Laptop has a problem') else: print('Laptop is OK') code modularisation thus ease in troubleshooting. # fix it kama iko na issue if money == 'yes': return 'DJ can get his laptop fixed' else: return 'DJ is fucked' # class definition class Kisauni(): # attributes # class attributes security = 'mateja everywhere' ethnicity = 'waswahili' # instance attributes // dunder methods def __init__(self, mtaa, drainage_system, housing_style): self.mtaa = mtaa self.drainage_system = drainage_system self.housing_style = housing_style def __str__(self): return 'A class indicating conditions in Kisauni' # instance methods (customised functions) def students(self, status, name, age, campus): if 'yes' in status.lower(): return f'{name} is a {age} year old at The {campus}' else: return f'{name} is a {age} year old non student' def relationships(self,status, name, sex): if 'YES'in status.upper(): return f'{name} is a {sex}' else: return f'{name} is a bi' def rehabilitations(self,status,name,age): if 'yes' in status.lower(): return f'{name}is a {age} must go to rehab.' else: return f'{name} is a {age} no rehab.' # inheritance ( - overriding; - extending) class Birds(): def flight(self): return 'ALmost all birds can fly' def edibility(self): return 'almost all birds are edible' class Chicken(Birds): def flight(self): print('chicken cannot fly') def food(self): return 'chicken feed on mash' class Student: # class attributes (uniform for all class objects) campus = 'Technical University of Munich' ## dunder methods ''' universal properties (instance attributes - not necessarily uniform for all objects) - arguments must be supplied when calling the class ''' def __init__(self, name, age, level, academic_year): self.name = name self.age = age self.level = level self.academic_year = academic_year ''' Class descriptor''' def __str__(self): return f" This is a Student class with methods: course, year and location." ## Instance Methods '''- begine with a self, and can only be called from an instance of the class ''' # course def course(self, course_name): return f"{self.name} is pursuing a {self.level} in {course_name} at the {self.campus}" # year def year(self, year, gender): if 'f' in gender.lower(): return f" She is a {self.age} year old currently in her {year} year." else: return f" He is a {self.age} year old currently in his {year} year." # location def location(self, location): return f" Residing in {location}" #race def race(self, race): pass ola = Student('ola', 25, 'PhD', 3) ola.course('MAchine LEarning') # creating a class object/instantiating the class student = Student('Ada', 21, 'B.Sc') print('Object type/description:', student) student.course('Mathematics and Computer Science'), student.year(4, 'female'), student.location('Kisauni') ``` ### 2. Inheritance > A class takes on the attributes/methods of another. Newly formed classes are called child classes, and the classes that child classes are derived from are called parent classes. 
> **extending** - having additional attributes. > **overriding** - overwriting inherited attributes. ``` ''' Using the example of class Student, every time we pass a new student, we have to pass the gender argument which determines the way the year function is returned. We can create child classes of different genders that inherit attributes of the Student Class and override the year method''' class Female(Student): # overriding def year(self, year, gender = 'female'): return f" She is a {self.age} year old currently in her {year} year." #extending def under_18(self): if self.age < 18: return True else: return False class Male(Student): # overriding def year(self, year, gender = 'male'): return f" He is a {self.age} year old currently in his {year} year." #extending def under_18(self): if self.age < 18: return True else: return False ada = Female('Ada', 21, 'B.Sc', 4) ada.year(4) f = Female('Denise', 17, 'B.Sc', 4) f.course('Mathematics and Computer Science'), f.year(4), f.location('Berlin'), f.under_18() m = Male('Denis', 20, 'B.Sc', 4) m.course('Mathematics and Finance'), m.year(3), f.location('Munich'), m.under_18() ``` ### 3. Polymorphism > same function name (but different signatures) being uses for different types. ``` print('ada is a lady') print(456) print([5,6]) ''' Polymorphism with uniform class methods''' class Kenya(): def capital(self): print("Nairobi is the capital of Kenya.") def president(self): print("Kenyatta is the president of Kenya") class USA(): def capital(self): print("Washington D.C. is the capital of USA.") def president(self): print("Biden is their newly elected president.") k = Kenya() u = USA() for country in [k, u]: country.capital() country.president() '''Polymorphism with a function and object''' # in the previous example. Instead of looping: creating a function. def func(obj): obj.capital() obj.president() k = Kenya() u = USA() func(k) func(u) '''Polymorphism with inheritance''' # This is equal to overriding in inheritance. class Bird: def intro(self): print("There are many types of birds.") def flight(self): print("Most of the birds can fly but some cannot.") class sparrow(Bird): def flight(self): print("Sparrows can fly.") ``` ## Procedural Programming > Structures a program like a recipe in that it provides a set of steps, in the form of functions and code blocks, that flow sequentially in order to complete a task. >relies on procedure calls to create modularized code. This approach simplifies your application code by breaking it into small pieces that a developer can view easily. ``` ## summing elements of a list def sum_elements(my_list): sum = 0 for x in my_list: sum += x return sum print(sum_elements([4,5,6,7])) ``` #### Task: Create a class Rectangle and define the following methods: * create_rectangle Input parameters: x, y, width, height Return value: instance of Rectangle Operation: create a new instance of Rectangle * area_of_rectangle * perimeter_of__rectangle * product_of_the diagonals Create a class square that inherits from class rectangle with an additional function of: * angle_between_diagonals
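One possible solution sketch for the task above (the geometry formulas are standard; the method names follow the task description):

```
import math

class Rectangle:
    def __init__(self, x, y, width, height):
        self.x = x
        self.y = y
        self.width = width
        self.height = height

    @classmethod
    def create_rectangle(cls, x, y, width, height):
        # returns a new Rectangle instance
        return cls(x, y, width, height)

    def area_of_rectangle(self):
        return self.width * self.height

    def perimeter_of_rectangle(self):
        return 2 * (self.width + self.height)

    def product_of_the_diagonals(self):
        # both diagonals have length sqrt(width^2 + height^2)
        diagonal = math.sqrt(self.width**2 + self.height**2)
        return diagonal * diagonal


class Square(Rectangle):
    def __init__(self, x, y, side):
        # a square is a rectangle with equal width and height
        super().__init__(x, y, side, side)

    def angle_between_diagonals(self):
        # the diagonals of a square always intersect at 90 degrees
        return 90.0


sq = Square(0, 0, 4)
print(sq.area_of_rectangle(), sq.perimeter_of_rectangle(), sq.angle_between_diagonals())
```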
# openvino2tensorflow This tutorial explains the use case of openvino2tensorflow while using arachne. `openvino2tensorflow` is developed in the following GitHub repository. https://github.com/PINTO0309/openvino2tensorflow When you convert onnx model to tensorflow model by `onnx-tf`, the converted model includes many unnecessary transpose layers. This is because onnx has NCHW layer format while tensorflow has NHWC. The inclusion of many unnecessary transpose layers causes performance degradation in inference. By using openvino2tensorflow, you can avoid the inclusion of unnecessary transpose layers when converting a model from to tensorflow. In this tutorial, we compare two convert methods and their converted models: 1. PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (onnx-tf) -> Tensorflow -> (tflite_converter) -> TfLite 2. PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (openvino_mo) -> OpenVino -> (openvino2tensorflow) -> Tensorflow -> (tflite_converter) -> TfLite The developers of openvino2tensorflow provides the detail article about the advantage using openvino2tensorflow: [Converting PyTorch, ONNX, Caffe, and OpenVINO (NCHW) models to Tensorflow / TensorflowLite (NHWC) in a snap](https://qiita.com/PINTO/items/ed06e03eb5c007c2e102) ## Create Simple Model Here we create and save a very simple PyTorch model to be converted. ``` import torch from torch import nn import torch.onnx model = nn.Sequential( nn.Conv2d(3, 16, 3, padding=1), nn.Conv2d(16, 16, 3, padding=1), ) torch.save(model.eval(), "./sample.pth") ``` Save model input and output information as yaml format for `arachne`. ``` yml = """ inputs: - dtype: float32 name: input shape: - 1 - 3 - 224 - 224 outputs: - dtype: float32 name: output shape: - 1 - 16 - 224 - 224 """ open("sample.yml", "w").write(yml) ``` ## Convert using onnx-tf You can apply multiple tools in sequence with `arachne.pipeline`. Models are converted in the following order: PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (onnx-tf) -> Tensorflow -> (tflite_converter) -> TfLite ``` !python -m arachne.driver.pipeline \ +pipeline=[torch2onnx,onnx_simplifier,onnx_tf,tflite_converter] \ model_file=./sample.pth \ output_path=./pipeline1.tar \ model_spec_file=./sample.yml ``` Extract tarfile and see network structure of the converted tflite model. You can visualize model structure in netron: `netron ./pipeline1/model_0.tflite`. ``` !mkdir -p pipeline1 && tar xvf pipeline1.tar -C ./pipeline1 import tensorflow as tf def list_layers(model_path): interpreter = tf.lite.Interpreter(model_path) layer_details = interpreter.get_tensor_details() interpreter.allocate_tensors() for layer in layer_details: print("Layer Name: {}".format(layer['name'])) list_layers("./pipeline1/model_0.tflite") ``` We have confirmed that the transpose layer is unexpectedly included. ## Convert using openvino2tensorflow Next, try the second conversion method using openvino2tensorflow. Models are converted in the following order: PyTorch -> (torch2onnx) -> ONNX -> (onnx-simplifier) -> ONNX -> (openvino_mo) -> OpenVino -> (openvino2tensorflow) -> Tensorflow -> (tflite_converter) -> TfLite ``` !python -m arachne.driver.pipeline \ +pipeline=[torch2onnx,onnx_simplifier,openvino_mo,openvino2tf,tflite_converter] \ model_file=./sample.pth \ output_path=./pipeline2.tar \ model_spec_file=./sample.yml ``` Extract tarfile and see network structure of the converted tflite model. You can visualize model structure in netron: `netron ./pipeline2/model_0.tflite`. 
``` !mkdir -p pipeline2 && tar xvf pipeline2.tar -C ./pipeline2 list_layers("./pipeline2/model_0.tflite") ``` We have confirmed that the transpose layer is NOT included.
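If you prefer a programmatic check over eyeballing the layer list, a rough comparison can be made by counting how many tensor names contain "transpose" in each model. This is only a heuristic sketch - the exact tensor naming depends on the converter versions used:

```
def count_transpose_tensors(model_path):
    interpreter = tf.lite.Interpreter(model_path)
    interpreter.allocate_tensors()
    # count tensors whose name suggests they were produced by a Transpose op
    return sum('transpose' in layer['name'].lower()
               for layer in interpreter.get_tensor_details())

print('onnx-tf route:            ', count_transpose_tensors('./pipeline1/model_0.tflite'))
print('openvino2tensorflow route:', count_transpose_tensors('./pipeline2/model_0.tflite'))
```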
# Ingest Text Data Labeled text data can be in a structured data format, such as reviews for sentiment analysis, news headlines for topic modeling, or documents for text classification. In these cases, you may have one column for the label, one column for the text, and sometimes other columns for attributes. You can treat this structured data like tabular data, and ingest it in one of the ways discussed in the previous notebook [011_Ingest_tabular_data.ipynb](011_Ingest_tabular_data_v1.ipynb). Sometimes text data, especially raw text data comes as unstructured data and is often in .json or .txt format, and we will discuss how to ingest these types of data files into a SageMaker Notebook in this section. ## Set Up Notebook ``` %pip install -qU 'sagemaker>=2.15.0' 's3fs==0.4.2' import pandas as pd import json import glob import s3fs import sagemaker # Get SageMaker session & default S3 bucket sagemaker_session = sagemaker.Session() bucket = sagemaker_session.default_bucket() # replace with your own bucket if you have one s3 = sagemaker_session.boto_session.resource('s3') prefix = 'text_spam/spam' prefix_json = 'json_jeo' filename = 'SMSSpamCollection.txt' filename_json = 'JEOPARDY_QUESTIONS1.json' ``` ## Downloading data from Online Sources ### Text data (in structured .csv format): Twitter -- sentiment140 **Sentiment140** This is the sentiment140 dataset. It contains 1.6M tweets extracted using the twitter API. The tweets have been annotated with sentiment (0 = negative, 4 = positive) and topics (hashtags used to retrieve tweets). The dataset contains the following columns: * `target`: the polarity of the tweet (0 = negative, 4 = positive) * `ids`: The id of the tweet ( 2087) * `date`: the date of the tweet (Sat May 16 23:58:44 UTC 2009) * `flag`: The query (lyx). If there is no query, then this value is NO_QUERY. * `user`: the user that tweeted (robotickilldozr) * `text`: the text of the tweet (Lyx is cool [Second Twitter data](https://github.com/guyz/twitter-sentiment-dataset) is a Twitter data set collected as an extension to Sanders Analytics Twitter sentiment corpus, originally designed for training and testing Twitter sentiment analysis algorithms. We will use this data to showcase how to aggregate two data sets if you want to enhance your current data set by adding more data to it. ``` #helper functions to upload data to s3 def write_to_s3(filename, bucket, prefix): #put one file in a separate folder. 
This is helpful if you read and prepare data with Athena filename_key = filename.split('.')[0] key = "{}/{}/{}".format(prefix,filename_key,filename) return s3.Bucket(bucket).upload_file(filename,key) def upload_to_s3(bucket, prefix, filename): url = 's3://{}/{}/{}'.format(bucket, prefix, filename) print('Writing to {}'.format(url)) write_to_s3(filename, bucket, prefix) #run this cell if you are in SageMaker Studio notebook #!apt-get install unzip #download first twitter dataset !wget http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip -O sentimen140.zip # Uncompressing !unzip -o sentimen140.zip -d sentiment140 #upload the files to the S3 bucket csv_files = glob.glob("sentiment140/*.csv") for filename in csv_files: upload_to_s3(bucket, 'text_sentiment140', filename) #download second twitter dataset !wget https://raw.githubusercontent.com/zfz/twitter_corpus/master/full-corpus.csv filename = 'full-corpus.csv' upload_to_s3(bucket, 'text_twitter_sentiment_2', filename) ``` ### Text data (in .txt format): SMS Spam data [SMS Spam Data](https://archive.ics.uci.edu/ml/datasets/sms+spam+collection) was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without reporting the very spam message received. Each line in the text file has the correct class followed by the raw message. We will use this data to showcase how to ingest text data in .txt format. ``` txt_files = glob.glob("spam/*.txt") for filename in txt_files: upload_to_s3(bucket, 'text_spam', filename) !wget http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/smsspamcollection.zip -O spam.zip !unzip -o spam.zip -d spam ``` ### Text Data (in .json format): Jeopardy Question data [Jeopardy Question](https://j-archive.com/) was obtained by crawling the Jeopardy question archive website. It is an unordered list of questions where each question has the following key-value pairs: * `category` : the question category, e.g. "HISTORY" * `value`: dollar value of the question as string, e.g. "\$200" * `question`: text of question * `answer` : text of answer * `round`: one of "Jeopardy!","Double Jeopardy!","Final Jeopardy!" or "Tiebreaker" * `show_number` : string of show number, e.g '4680' * `air_date` : the show air date in format YYYY-MM-DD ``` #json file format !wget http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz # Uncompressing !gunzip -f JEOPARDY_QUESTIONS1.json.gz filename = 'JEOPARDY_QUESTIONS1.json' upload_to_s3(bucket, 'json_jeo', filename) ``` ## Ingest Data into Sagemaker Notebook ## Method 1: Copying data to the Instance You can use the AWS Command Line Interface (CLI) to copy your data from s3 to your SageMaker instance. This is a quick and easy approach when you are dealing with medium sized data files, or you are experimenting and doing exploratory analysis. The documentation can be found [here](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html). 
``` #Specify file names prefix = 'text_spam/spam' prefix_json = 'json_jeo' filename = 'SMSSpamCollection.txt' filename_json = 'JEOPARDY_QUESTIONS1.json' prefix_spam_2 = 'text_spam/spam_2' #copy data to your sagemaker instance using AWS CLI !aws s3 cp s3://$bucket/$prefix_json/ text/$prefix_json/ --recursive data_location = "text/{}/{}".format(prefix_json, filename_json) with open(data_location) as f: data = json.load(f) print(data[0]) ``` ## Method 2: Use AWS compatible Python Packages When you are dealing with large data sets, or do not want to lose any data when you delete your Sagemaker Notebook Instance, you can use pre-built packages to access your files in S3 without copying files into your instance. These packages, such as `Pandas`, have implemented options to access data with a specified path string: while you will use `file://` on your local file system, you will use `s3://` instead to access the data through the AWS boto library. For `pandas`, any valid string path is acceptable. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. You can find additional documentation [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html). For text data, most of the time you can read it as line-by-line files or use Pandas to read it as a DataFrame by specifying a delimiter. ``` data_s3_location = "s3://{}/{}/{}".format(bucket, prefix, filename) # S3 URL s3_tabular_data = pd.read_csv(data_s3_location, sep="\t", header=None) s3_tabular_data.head() ``` For JSON files, depending on the structure, you can also use `Pandas` `read_json` function to read it if it's a flat json file. ``` data_json_location = "s3://{}/{}/{}".format(bucket, prefix_json, filename_json) s3_tabular_data_json = pd.read_json(data_json_location, orient='records') s3_tabular_data_json.head() ``` ## Method 3: Use AWS Native methods #### s3fs [S3Fs](https://s3fs.readthedocs.io/en/latest/) is a Pythonic file interface to S3. It builds on top of botocore. The top-level class S3FileSystem holds connection information and allows typical file-system style operations like cp, mv, ls, du, glob, etc., as well as put/get of local files to/from S3. ``` fs = s3fs.S3FileSystem() data_s3fs_location = "s3://{}/{}/".format(bucket, prefix) # To List all files in your accessible bucket fs.ls(data_s3fs_location) # open it directly with s3fs data_s3fs_location = "s3://{}/{}/{}".format(bucket, prefix, filename) # S3 URL with fs.open(data_s3fs_location) as f: print(pd.read_csv(f, sep = '\t', nrows = 2)) ``` # Aggregating Data Set If you would like to enhance your data with more data collected for your use cases, you can always aggregate your newly-collected data with your current data set. We will use the two data set -- Sentiment140 and Sanders Twitter Sentiment to show how to aggregate data together. 
``` prefix_tw1 = 'text_sentiment140/sentiment140' filename_tw1 = 'training.1600000.processed.noemoticon.csv' prefix_added = 'text_twitter_sentiment_2' filename_added = 'full-corpus.csv' ``` Let's read in our original data and take a look at its format and schema: ``` data_s3_location_base = "s3://{}/{}/{}".format(bucket, prefix_tw1, filename_tw1) # S3 URL # we will showcase with a smaller subset of data for demonstration purpose text_data = pd.read_csv(data_s3_location_base, header = None, encoding = "ISO-8859-1", low_memory=False, nrows = 10000) text_data.columns = ['target', 'tw_id', 'date', 'flag', 'user', 'text'] ``` We have 6 columns, `date`, `text`, `flag` (which is the topic the twitter was queried), `tw_id` (tweet's id), `user` (user account name), and `target` (0 = neg, 4 = pos). ``` text_data.head(1) ``` Let's read in and take a look at the data we want to add to our original data. We will start by checking for columns for both data sets. The new data set has 5 columns, `TweetDate` which maps to `date`, `TweetText` which maps to `text`, `Topic` which maps to `flag`, `TweetId` which maps to `tw_id`, and `Sentiment` mapped to `target`. In this new data set, we don't have `user account name` column, so when we aggregate two data sets we can add this column to the data set to be added and fill it with `NULL` values. You can also remove this column from the original data if it does not provide much valuable information based on your use cases. ``` data_s3_location_added = "s3://{}/{}/{}".format(bucket, prefix_added, filename_added) # S3 URL # we will showcase with a smaller subset of data for demonstration purpose text_data_added = pd.read_csv(data_s3_location_added, encoding = "ISO-8859-1", low_memory=False, nrows = 10000) text_data_added.head(1) ``` #### Add the missing column to the new data set and fill it with `NULL` ``` text_data_added['user'] = "" ``` #### Renaming the new data set columns to combine two data sets ``` text_data_added.columns = ['flag', 'target', 'tw_id', 'date', 'text', 'user'] text_data_added.head(1) ``` #### Change the `target` column to the same format as the `target` in the original data set Note that the `target` column in the new data set is marked as "positive", "negative", "neutral", and "irrelevant", whereas the `target` in the original data set is marked as "0" and "4". So let's map "positive" to 4, "neutral" to 2, and "negative" to 0 in our new data set so that they are consistent. For "irrelevant", which are either not English or Spam, you can either remove these if it is not valuable for your use case (In our use case of sentiment analysis, we will remove those since these text does not provide any value in terms of predicting sentiment) or map them to -1. ``` #remove tweets labeled as irelevant text_data_added = text_data_added[text_data_added['target'] != 'irelevant'] # convert strings to number targets target_map = {'positive': 4, 'negative': 0, 'neutral': 2} text_data_added['target'] = text_data_added['target'].map(target_map) ``` #### Combine the two data sets and save as one new file ``` text_data_new = pd.concat([text_data, text_data_added]) filename = 'sentiment_full.csv' text_data_new.to_csv(filename, index = False) upload_to_s3(bucket, 'text_twitter_sentiment_full', filename) ``` ### Citation Twitter140 Data, Go, A., Bhayani, R. and Huang, L., 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, 1(2009), p.12. SMS Spaming data, Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. 
Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011. J! Archive, J! Archive is created by fans, for fans. The Jeopardy! game show and all elements thereof, including but not limited to copyright and trademark thereto, are the property of Jeopardy Productions, Inc. and are protected under law. This website is not affiliated with, sponsored by, or operated by Jeopardy Productions, Inc.
<a href="https://colab.research.google.com/github/Eoli-an/Exam-topic-prediction/blob/main/Slides_vs_Transcribes_Frequency.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Plot for Dense Ranks of Word Usage in Slides and Transcribes of Relevant Words For this plot we analyse the relationship between the word frequency of the slides versus the word frequency of the transcribes of the lecture. We only analyse hand picked words that are relevant for predicting exam topics or their difficulties. ``` !pip install scattertext !pip install tika !pip install textblob import pandas as pd import glob import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline import scattertext as st from tika import parser from textblob import TextBlob import nltk nltk.download('punkt') nltk.download('averaged_perceptron_tagger') nltk.download('brown') ``` The Slides are expected to be in a folder called Slides. The Transcibes are expected to be in a folder called Transcribes ``` lectures_spoken = [] path = 'Transcribes/*.txt' files=glob.glob(path) for file in sorted(files): with open(file, 'r') as f: lectures_spoken.append(f.read()) lectures_spoken = " ".join(lectures_spoken) lectures_pdf = [] path = 'Slides/*.pdf' files=glob.glob(path) for file in sorted(files): lectures_pdf.append(parser.from_file(file)["content"]) lectures_pdf = " ".join(lectures_pdf) ``` Create a texblob of the text. This is used to extract the noun phrases. ``` blob_spoken = TextBlob(lectures_spoken) freq_spoken = nltk.FreqDist(blob_spoken.noun_phrases) blob_pdf = TextBlob(lectures_pdf) freq_pdf = nltk.FreqDist(blob_pdf.noun_phrases) ``` This function checks if a noun phrase is sufficiently similar to a relevant word(templates). Sufficiently similar is defined as that the template is a substring of the noun phrase. ``` def convert_to_template(df_element, template): for template_element in template: if template_element in df_element: return template_element return "None" ``` We first create a pandas dataframe of all the noun phrases and their frequencies in both slides and transcribes. After that, we extract all words that are similar to a relevant word (as of the convert_to_template function). Then we group by the relevant words ``` relevant_words = ['bayes', 'frequentist', 'fairness', 'divergence', 'reproduc', 'regulariz', 'pca', 'principal c' 'bootstrap', 'nonlinear function', 'linear function', 'entropy', 'maximum likelihood estimat', 'significa', 'iid', 'bayes theorem', 'visualization', 'score function', 'dimensionality reduction', 'estimat', 'bayes', 'consumption', 'fisher', 'independence', 'logistic regression', 'bias', 'standard deviation', 'linear discriminant analysis', 'information matrix', 'null hypothesis', 'log likelihood', 'linear regression', 'hypothesis test', 'confidence', 'variance', 'sustainability', 'gaussian', 'linear model', 'climate', 'laplace', ] df_spoken = pd.DataFrame.from_dict({"word": list(freq_spoken.keys()), "freq_spoken" : list(freq_spoken.values())}) df_pdf = pd.DataFrame.from_dict({"word": list(freq_pdf.keys()), "freq_pdf" : list(freq_pdf.values())}) df = df_spoken.merge(df_pdf,how="outer",on="word") df["word"] = df["word"].apply(lambda x: convert_to_template(x,relevant_words)) df = df.groupby(["word"]).sum().reset_index() df = df[df["word"] != "None"].reset_index() ``` We use the dense_rank functionality of the scattertext library to convert the absolute number of occurances of a word to a dense rank. 
This means that we only consider the relative order of the frequencies of the word and discard all information that tells us how far apart two word frequencies are. ``` df["freq_spoken"] = st.Scalers.dense_rank(df["freq_spoken"]) df["freq_pdf"] = st.Scalers.dense_rank(df["freq_pdf"]) df plt.figure(figsize=(20,12)) sns.set_theme(style="dark") p1 = sns.scatterplot(x='freq_spoken', # Horizontal axis y='freq_pdf', # Vertical axis data=df, # Data source s = 80, legend=False, color="orange", #marker = "s" ) for line in range(0,df.shape[0]): if line == 6:#divergence p1.text(df.freq_spoken[line]-0.12, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 21:#linear regression p1.text(df.freq_spoken[line]-0.18, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 18:#linear discriminant analysis p1.text(df.freq_spoken[line]-0.05, df.freq_pdf[line]-0.05, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 19:#linear function p1.text(df.freq_spoken[line]-0.02, df.freq_pdf[line]-0.04, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 29:#reproduce p1.text(df.freq_spoken[line]-0.03, df.freq_pdf[line]+0.03, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 12:#gaussian: p1.text(df.freq_spoken[line]-0.1, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 16:#information matrix: p1.text(df.freq_spoken[line]+0.01, df.freq_pdf[line]-0.025, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 25:#nonlinear function: p1.text(df.freq_spoken[line]+0.01, df.freq_pdf[line]-0.025, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 24:#maximum likelihood estimat: p1.text(df.freq_spoken[line]-0.07, df.freq_pdf[line]+0.02, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') elif line == 17:#laplace: p1.text(df.freq_spoken[line]-0.08, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') else: p1.text(df.freq_spoken[line]+0.01, df.freq_pdf[line]-0.007, df.word[line], horizontalalignment='left', size='xx-large', color='black', weight='normal') #plt.title('Dense Ranks of Word Usage in Slides and Transcribes of Relevant Words',size = "xx-large") # Set x-axis label plt.xlabel('Transcribes Frequency',size = "xx-large") # Set y-axis label plt.ylabel('Slides Frequency',size = "xx-large") p1.set_xticks([0,0.5,1]) # <--- set the ticks first p1.set_xticklabels(["Infrequent", "Average", "Frequent"],size = "x-large") p1.set_yticks([0,0.5,1]) # <--- set the ticks first p1.set_yticklabels(["Infrequent", "Average", "Frequent"],size = "x-large") plt.show() ```
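As a side note on the label placement above: the long `if`/`elif` chain hard-codes text offsets by row index, which silently breaks if the word list or its ordering changes. A more maintainable pattern, sketched below under the assumption that it runs in the same cell as the scatterplot (it reuses `df` and the axes object `p1`), keys the offsets by word and falls back to a default:

```
# Sketch: data-driven label offsets keyed by word instead of by row index.
# The offset values are placeholders; tune them for your own figure.
label_offsets = {
    "divergence": (-0.12, -0.007),
    "linear regression": (-0.18, -0.007),
    "gaussian": (-0.10, -0.007),
}
default_offset = (0.01, -0.007)

for _, row in df.iterrows():
    dx, dy = label_offsets.get(row["word"], default_offset)
    p1.text(row["freq_spoken"] + dx, row["freq_pdf"] + dy, row["word"],
            horizontalalignment="left", size="xx-large", color="black")
```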
<a href="https://colab.research.google.com/github/R-aryan/Image_Classification_VGG16/blob/master/Classification_Cat_VS_Dogs_Transfer_Learning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import keras,os from keras.models import Sequential from keras.layers import Dense, Conv2D, MaxPool2D , Flatten from keras.preprocessing.image import ImageDataGenerator from keras.applications.vgg16 import VGG16 from keras.models import Model from keras import optimizers from keras.models import load_model import numpy as np import shutil from os import listdir from os.path import splitext from keras.preprocessing import image import matplotlib.pyplot as plt train_directory= "/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train" test_directory="/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1" src= '/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train' dest_d='/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train/Dogs' dest_c='/content/drive/My Drive/classification_Dataset/cat_VS_dogs/train/Cats' validation_set='/content/drive/My Drive/classification_Dataset/cat_VS_dogs/validation_data' trdata = ImageDataGenerator() traindata = trdata.flow_from_directory(directory=src,target_size=(224,224),batch_size=32) tsdata = ImageDataGenerator() testdata = tsdata.flow_from_directory(directory=validation_set, target_size=(224,224),batch_size=32) ``` Here using the ImageDataGenerator method in keras I will import all the images of cat and dog in the model. ImageDataGenerator will automatically label the data and map all the labels to its specific data. ``` vggmodel = VGG16(weights='imagenet', include_top=True) ``` Here in this part I will import VGG16 from keras with pre-trained weights which was trained on imagenet. Here as you can see that include top parameter is set to true. This means that weights for our whole model will be downloaded. If this is set to false then the pre-trained weights will only be downloaded for convolution layers and no weights will be downloaded for dense layers. ``` vggmodel.summary() ``` Now as I run vggmodel.summary() then the summary of the whole VGG model which was downloaded will be printed. Its output is attached below. ``` ``` After the model has been downloaded then I need to use this model for my problem statement which is to detect cats and dogs. So here I will set that I will not be training the weights of the first 19 layers and use it as it is. Therefore i am setting the trainable parameter to False for first 19 layers. ``` vggmodel.layers for layers in (vggmodel.layers)[:19]: print(layers) layers.trainable = False ``` Since my problem is to detect cats and dogs and it has two classes so the last dense layer of my model should be a 2 unit softmax dense layer. Here I am taking the second last layer of the model which is dense layer with 4096 units and adding a dense softmax layer of 2 units in the end. In this way I will remove the last layer of the VGG16 model which is made to predict 1000 classes. ``` X= vggmodel.layers[-2].output predictions = Dense(2, activation="softmax")(X) model_final = Model(input = vggmodel.input, output = predictions) ``` Now I will compile my new model. I will set the learning rate of SGD (Stochastic Gradient Descent) optimiser using lr parameter and since i have a 2 unit dense layer in the end so i will be using categorical_crossentropy as loss since the output of the model is categorical. 
``` model_final.compile(loss = "categorical_crossentropy", optimizer = optimizers.SGD(lr=0.0001, momentum=0.9), metrics=["accuracy"]) model_final.summary() from keras.callbacks import ModelCheckpoint, EarlyStopping checkpoint = ModelCheckpoint("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1) early = EarlyStopping(monitor='val_acc', min_delta=0, patience=40, verbose=1, mode='auto') model_final.fit_generator(generator= traindata, steps_per_epoch= 2, epochs= 100, validation_data= testdata, validation_steps=1, callbacks=[checkpoint,early]) model_final.save_weights("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5") ``` Predicting the output ``` # from keras.preprocessing import image # import matplotlib.pyplot as plt img = image.load_img("/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1/12500.jpg",target_size=(224,224)) img = np.asarray(img) plt.imshow(img) img = np.expand_dims(img, axis=0) from keras.models import load_model model_final.load_weights("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5") #saved_model.compile() output = model_final.predict(img) if output[0][0] > output[0][1]: print("cat") else: print('dog') def prediction(path_image): img = image.load_img(path_image,target_size=(224,224)) img = np.asarray(img) plt.imshow(img) img = np.expand_dims(img, axis=0) model_final.load_weights("/content/drive/My Drive/classification_Dataset/vgg16_tl.h5") output = model_final.predict(img) if output[0][0] > output[0][1]: print("cat") else: print('dog') prediction("/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1/12500.jpg") prediction("/content/drive/My Drive/classification_Dataset/cat_VS_dogs/test1/12499.jpg") ```
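Predicting one image at a time is fine for spot checks, but for a whole folder it is usually more convenient to let a generator batch the files. A minimal sketch follows; note that `flow_from_directory` expects the images to live inside at least one sub-folder of the directory you pass, so the flat `test1` folder used above may need to be nested one level deeper. `predict_generator` is the Keras 2-era API that matches the rest of this notebook.

```
# Batched prediction over a test folder (prediction only, no labels).
pred_gen = ImageDataGenerator().flow_from_directory(
    directory=test_directory,   # must contain the images in a sub-folder
    target_size=(224, 224),
    batch_size=32,
    class_mode=None,
    shuffle=False)

probs = model_final.predict_generator(pred_gen, steps=len(pred_gen))
labels = ["cat" if p[0] > p[1] else "dog" for p in probs]
print(labels[:10])
```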
# Neste notebook vamos simular a interconexão entre SLITs ``` # importar as bibliotecas necessárias import numpy as np # arrays import matplotlib.pyplot as plt # plots plt.rcParams.update({'font.size': 14}) import IPython.display as ipd # to play signals import sounddevice as sd import soundfile as sf # Os próximos módulos são usados pra criar nosso SLIT from scipy.signal import butter, lfilter, freqz, chirp, impulse ``` # Vamos criar 2 SLITs Primeiro vamos criar dois SLITs. Um filtro passa alta e um passa-baixa. Você pode depois mudar a ordem de um dos filtros e sua frequência de corte e, então, observar o que acontece na FRF do SLIT concatenado. ``` # Variáveis do filtro order1 = 6 fs = 44100 # sample rate, Hz cutoff1 = 1000 # desired cutoff frequency of the filter, Hz # Passa baixa b, a = butter(order1, 2*cutoff1/fs, btype='low', analog=False) w, H1 = freqz(b, a) # Passa alta cutoff2 = 1000 order2 = 6 b, a = butter(order2, 2*cutoff2/fs, btype='high', analog=False) w, H2 = freqz(b, a) plt.figure(figsize=(15,5)) plt.subplot(1,2,1) plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H1)), 'b', linewidth = 2, label = 'Passa-baixa') plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H2)), 'r', linewidth = 2, label = 'Passa-alta') plt.title('Magnitude') plt.xlabel('Frequency [Hz]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.ylim((-100, 20)) plt.subplot(1,2,2) plt.semilogx(fs*w/(2*np.pi), np.angle(H1), 'b', linewidth = 2, label = 'Passa-baixa') plt.semilogx(fs*w/(2*np.pi), np.angle(H2), 'r', linewidth = 2, label = 'Passa-alta') plt.legend(loc = 'upper right') plt.title('Fase') plt.xlabel('Frequency [Hz]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.show() ``` # Interconexão em série \begin{equation} H(\mathrm{j}\omega) = H_1(\mathrm{j}\omega)H_2(\mathrm{j}\omega) \end{equation} ``` Hs = H1*H2 plt.figure(figsize=(15,5)) plt.subplot(1,2,1) plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H1)), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H2)), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(Hs)), 'b', linewidth = 2, label = 'R: Band pass') plt.title('Magnitude') plt.xlabel('Frequency [Hz]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.ylim((-100, 20)) plt.subplot(1,2,2) plt.semilogx(fs*w/(2*np.pi), np.angle(H1), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), np.angle(H2), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), np.angle(Hs), 'b', linewidth = 2, label = 'R: Band pass') plt.legend(loc = 'upper right') plt.title('Fase') plt.xlabel('Frequency [Hz]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.show() ``` # Interconexão em paralelo \begin{equation} H(\mathrm{j}\omega) = H_1(\mathrm{j}\omega)+H_2(\mathrm{j}\omega) \end{equation} ``` Hs = H1+H2 plt.figure(figsize=(15,5)) plt.subplot(1,2,1) plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H1)), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(H2)), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), 20 * np.log10(abs(Hs)), 'b', linewidth = 2, label = 'R: All pass') plt.title('Magnitude') plt.xlabel('Frequency [Hz]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.ylim((-100, 20)) plt.subplot(1,2,2) plt.semilogx(fs*w/(2*np.pi), np.angle(H1), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), np.angle(H2), '--k', linewidth = 2) plt.semilogx(fs*w/(2*np.pi), np.angle(Hs), 'b', 
linewidth = 2, label = 'R: All pass') plt.legend(loc = 'upper right') plt.title('Fase') plt.xlabel('Frequency [Hz]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.show() ```
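To connect the frequency-domain picture with the time domain, we can check that passing a signal through the two filters one after the other behaves like the combined series system, and that for LTI systems the order of the filters does not matter. This is a small illustrative sketch using `lfilter` (already imported above) and white noise as the test signal; it is not part of the original exercise.

```
# Time-domain check of the series interconnection.
b_lp, a_lp = butter(order1, 2*cutoff1/fs, btype='low', analog=False)
b_hp, a_hp = butter(order2, 2*cutoff2/fs, btype='high', analog=False)

x = np.random.randn(fs)                                   # 1 second of white noise
y_series = lfilter(b_hp, a_hp, lfilter(b_lp, a_lp, x))    # H2(H1(x))
y_swapped = lfilter(b_lp, a_lp, lfilter(b_hp, a_hp, x))   # H1(H2(x))

# For LTI systems in series the order is irrelevant (up to numerical error):
print(np.max(np.abs(y_series - y_swapped)))
```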
## Forecasting, updating datasets, and the "news" In this notebook, we describe how to use Statsmodels to compute the impacts of updated or revised datasets on out-of-sample forecasts or in-sample estimates of missing data. We follow the approach of the "Nowcasting" literature (see references at the end), by using a state space model to compute the "news" and impacts of incoming data. **Note**: this notebook applies to Statsmodels v0.12+. In addition, it only applies to the state space models or related classes, which are: `sm.tsa.statespace.ExponentialSmoothing`, `sm.tsa.arima.ARIMA`, `sm.tsa.SARIMAX`, `sm.tsa.UnobservedComponents`, `sm.tsa.VARMAX`, and `sm.tsa.DynamicFactor`. ``` %matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt macrodata = sm.datasets.macrodata.load_pandas().data macrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q') ``` Forecasting exercises often start with a fixed set of historical data that is used for model selection and parameter estimation. Then, the fitted selected model (or models) can be used to create out-of-sample forecasts. Most of the time, this is not the end of the story. As new data comes in, you may need to evaluate your forecast errors, possibly update your models, and create updated out-of-sample forecasts. This is sometimes called a "real-time" forecasting exercise (by contrast, a pseudo real-time exercise is one in which you simulate this procedure). If all that matters is minimizing some loss function based on forecast errors (like MSE), then when new data comes in you may just want to completely redo model selection, parameter estimation and out-of-sample forecasting, using the updated datapoints. If you do this, your new forecasts will have changed for two reasons: 1. You have received new data that gives you new information 2. Your forecasting model or the estimated parameters are different In this notebook, we focus on methods for isolating the first effect. The way we do this comes from the so-called "nowcasting" literature, and in particular Bańbura, Giannone, and Reichlin (2011), Bańbura and Modugno (2014), and Bańbura et al. (2014). They describe this exercise as computing the "**news**", and we follow them in using this language in Statsmodels. These methods are perhaps most useful with multivariate models, since there multiple variables may update at the same time, and it is not immediately obvious what forecast change was created by what updated variable. However, they can still be useful for thinking about forecast revisions in univariate models. We will therefore start with the simpler univariate case to explain how things work, and then move to the multivariate case afterwards. **Note on revisions**: the framework that we are using is designed to decompose changes to forecasts from newly observed datapoints. It can also take into account *revisions* to previously published datapoints, but it does not decompose them separately. Instead, it only shows the aggregate effect of "revisions". **Note on `exog` data**: the framework that we are using only decomposes changes to forecasts from newly observed datapoints for *modeled* variables. These are the "left-hand-side" variables that in Statsmodels are given in the `endog` arguments. This framework does not decompose or account for changes to unmodeled "right-hand-side" variables, like those included in the `exog` argument. 
### Simple univariate example: AR(1) We will begin with a simple autoregressive model, an AR(1): $$y_t = \phi y_{t-1} + \varepsilon_t$$ - The parameter $\phi$ captures the persistence of the series We will use this model to forecast inflation. To make it simpler to describe the forecast updates in this notebook, we will work with inflation data that has been de-meaned, but it is straightforward in practice to augment the model with a mean term. ``` # De-mean the inflation series y = macrodata['infl'] - macrodata['infl'].mean() ``` #### Step 1: fitting the model on the available dataset Here, we'll simulate an out-of-sample exercise, by constructing and fitting our model using all of the data except the last five observations. We'll assume that we haven't observed these values yet, and then in subsequent steps we'll add them back into the analysis. ``` y_pre = y.iloc[:-5] y_pre.plot(figsize=(15, 3), title='Inflation'); ``` To construct forecasts, we first estimate the parameters of the model. This returns a results object that we will be able to use produce forecasts. ``` mod_pre = sm.tsa.arima.ARIMA(y_pre, order=(1, 0, 0), trend='n') res_pre = mod_pre.fit() print(res_pre.summary()) ``` Creating the forecasts from the results object `res` is easy - you can just call the `forecast` method with the number of forecasts you want to construct. In this case, we'll construct four out-of-sample forecasts. ``` # Compute the forecasts forecasts_pre = res_pre.forecast(4) # Plot the last 3 years of data and the four out-of-sample forecasts y_pre.iloc[-12:].plot(figsize=(15, 3), label='Data', legend=True) forecasts_pre.plot(label='Forecast', legend=True); ``` For the AR(1) model, it is also easy to manually construct the forecasts. Denoting the last observed variable as $y_T$ and the $h$-step-ahead forecast as $y_{T+h|T}$, we have: $$y_{T+h|T} = \hat \phi^h y_T$$ Where $\hat \phi$ is our estimated value for the AR(1) coefficient. From the summary output above, we can see that this is the first parameter of the model, which we can access from the `params` attribute of the results object. ``` # Get the estimated AR(1) coefficient phi_hat = res_pre.params[0] # Get the last observed value of the variable y_T = y_pre.iloc[-1] # Directly compute the forecasts at the horizons h=1,2,3,4 manual_forecasts = pd.Series([phi_hat * y_T, phi_hat**2 * y_T, phi_hat**3 * y_T, phi_hat**4 * y_T], index=forecasts_pre.index) # We'll print the two to double-check that they're the same print(pd.concat([forecasts_pre, manual_forecasts], axis=1)) ``` #### Step 2: computing the "news" from a new observation Suppose that time has passed, and we have now received another observation. Our dataset is now larger, and we can evaluate our forecast error and produce updated forecasts for the subsequent quarters. ``` # Get the next observation after the "pre" dataset y_update = y.iloc[-5:-4] # Print the forecast error print('Forecast error: %.2f' % (y_update.iloc[0] - forecasts_pre.iloc[0])) ``` To compute forecasts based on our updated dataset, we will create an updated results object `res_post` using the `append` method, to append on our new observation to the previous dataset. Note that by default, the `append` method does not re-estimate the parameters of the model. This is exactly what we want here, since we want to isolate the effect on the forecasts of the new information only. 
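For completeness: if you did want the parameters to be re-estimated when the new observation arrives, `append` accepts a `refit` argument (it defaults to `False`). We deliberately do not use it here, because refitting mixes the "new information" effect with a "new parameters" effect, which is exactly what the news exercise tries to isolate.

```
# For comparison only: append the new observation AND re-estimate the parameters.
res_post_refit = res_pre.append(y_update, refit=True)
print(res_post_refit.params)
```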
``` # Create a new results object by passing the new observations to the `append` method res_post = res_pre.append(y_update) # Since we now know the value for 2008Q3, we will only use `res_post` to # produce forecasts for 2008Q4 through 2009Q2 forecasts_post = pd.concat([y_update, res_post.forecast('2009Q2')]) print(forecasts_post) ``` In this case, the forecast error is quite large - inflation was more than 10 percentage points below the AR(1) models' forecast. (This was largely because of large swings in oil prices around the global financial crisis). To analyse this in more depth, we can use Statsmodels to isolate the effect of the new information - or the "**news**" - on our forecasts. This means that we do not yet want to change our model or re-estimate the parameters. Instead, we will use the `news` method that is available in the results objects of state space models. Computing the news in Statsmodels always requires a *previous* results object or dataset, and an *updated* results object or dataset. Here we will use the original results object `res_pre` as the previous results and the `res_post` results object that we just created as the updated results. Once we have previous and updated results objects or datasets, we can compute the news by calling the `news` method. Here, we will call `res_pre.news`, and the first argument will be the updated results, `res_post` (however, if you have two results objects, the `news` method could can be called on either one). In addition to specifying the comparison object or dataset as the first argument, there are a variety of other arguments that are accepted. The most important specify the "impact periods" that you want to consider. These "impact periods" correspond to the forecasted periods of interest; i.e. these dates specify with periods will have forecast revisions decomposed. To specify the impact periods, you must pass two of `start`, `end`, and `periods` (similar to the Pandas `date_range` method). If your time series was a Pandas object with an associated date or period index, then you can pass dates as values for `start` and `end`, as we do below. ``` # Compute the impact of the news on the four periods that we previously # forecasted: 2008Q3 through 2009Q2 news = res_pre.news(res_post, start='2008Q3', end='2009Q2') # Note: one alternative way to specify these impact dates is # `start='2008Q3', periods=4` ``` The variable `news` is an object of the class `NewsResults`, and it contains details about the updates to the data in `res_post` compared to `res_pre`, the new information in the updated dataset, and the impact that the new information had on the forecasts in the period between `start` and `end`. One easy way to summarize the results are with the `summary` method. ``` print(news.summary()) ``` **Summary output**: the default summary for this news results object printed four tables: 1. Summary of the model and datasets 2. Details of the news from updated data 3. Summary of the impacts of the new information on the forecasts between `start='2008Q3'` and `end='2009Q2'` 4. Details of how the updated data led to the impacts on the forecasts between `start='2008Q3'` and `end='2009Q2'` These are described in more detail below. *Notes*: - There are a number of arguments that can be passed to the `summary` method to control this output. Check the documentation / docstring for details. 
- Table (4), showing details of the updates and impacts, can become quite large if the model is multivariate, there are multiple updates, or a large number of impact dates are selected. It is only shown by default for univariate models. **First table: summary of the model and datasets** The first table, above, shows: - The type of model from which the forecasts were made. Here this is an ARIMA model, since an AR(1) is a special case of an ARIMA(p,d,q) model. - The date and time at which the analysis was computed. - The original sample period, which here corresponds to `y_pre` - The endpoint of the updated sample period, which here is the last date in `y_post` **Second table: the news from updated data** This table simply shows the forecasts from the previous results for observations that were updated in the updated sample. *Notes*: - Our updated dataset `y_post` did not contain any *revisions* to previously observed datapoints. If it had, there would be an additional table showing the previous and updated values of each such revision. **Third table: summary of the impacts of the new information** *Columns*: The third table, above, shows: - The previous forecast for each of the impact dates, in the "estimate (prev)" column - The impact that the new information (the "news") had on the forecasts for each of the impact dates, in the "impact of news" column - The updated forecast for each of the impact dates, in the "estimate (new)" column *Notes*: - In multivariate models, this table contains additional columns describing the relevant impacted variable for each row. - Our updated dataset `y_post` did not contain any *revisions* to previously observed datapoints. If it had, there would be additional columns in this table showing the impact of those revisions on the forecasts for the impact dates. - Note that `estimate (new) = estimate (prev) + impact of news` - This table can be accessed independently using the `summary_impacts` method. *In our example*: Notice that in our example, the table shows the values that we computed earlier: - The "estimate (prev)" column is identical to the forecasts from our previous model, contained in the `forecasts_pre` variable. - The "estimate (new)" column is identical to our `forecasts_post` variable, which contains the observed value for 2008Q3 and the forecasts from the updated model for 2008Q4 - 2009Q2. **Fourth table: details of updates and their impacts** The fourth table, above, shows how each new observation translated into specific impacts at each impact date. *Columns*: The first three columns table described the relevant **update** (an "updated" is a new observation): - The first column ("update date") shows the date of the variable that was updated. - The second column ("forecast (prev)") shows the value that would have been forecasted for the update variable at the update date based on the previous results / dataset. - The third column ("observed") shows the actual observed value of that updated variable / update date in the updated results / dataset. The last four columns described the **impact** of a given update (an impact is a changed forecast within the "impact periods"). - The fourth column ("impact date") gives the date at which the given update made an impact. 
- The fifth column ("news") shows the "news" associated with the given update (this is the same for each impact of a given update, but is just not sparsified by default) - The sixth column ("weight") describes the weight that the "news" from the given update has on the impacted variable at the impact date. In general, weights will be different between each "updated variable" / "update date" / "impacted variable" / "impact date" combination. - The seventh column ("impact") shows the impact that the given update had on the given "impacted variable" / "impact date". *Notes*: - In multivariate models, this table contains additional columns to show the relevant variable that was updated and variable that was impacted for each row. Here, there is only one variable ("infl"), so those columns are suppressed to save space. - By default, the updates in this table are "sparsified" with blanks, to avoid repeating the same values for "update date", "forecast (prev)", and "observed" for each row of the table. This behavior can be overridden using the `sparsify` argument. - Note that `impact = news * weight`. - This table can be accessed independently using the `summary_details` method. *In our example*: - For the update to 2008Q3 and impact date 2008Q3, the weight is equal to 1. This is because we only have one variable, and once we have incorporated the data for 2008Q3, there is no no remaining ambiguity about the "forecast" for this date. Thus all of the "news" about this variable at 2008Q3 passes through to the "forecast" directly. #### Addendum: manually computing the news, weights, and impacts For this simple example with a univariate model, it is straightforward to compute all of the values shown above by hand. First, recall the formula for forecasting $y_{T+h|T} = \phi^h y_T$, and note that it follows that we also have $y_{T+h|T+1} = \phi^h y_{T+1}$. Finally, note that $y_{T|T+1} = y_T$, because if we know the value of the observations through $T+1$, we know the value of $y_T$. **News**: The "news" is nothing more than the forecast error associated with one of the new observations. So the news associated with observation $T+1$ is: $$n_{T+1} = y_{T+1} - y_{T+1|T} = Y_{T+1} - \phi Y_T$$ **Impacts**: The impact of the news is the difference between the updated and previous forecasts, $i_h \equiv y_{T+h|T+1} - y_{T+h|T}$. - The previous forecasts for $h=1, \dots, 4$ are: $\begin{pmatrix} \phi y_T & \phi^2 y_T & \phi^3 y_T & \phi^4 y_T \end{pmatrix}'$. - The updated forecasts for $h=1, \dots, 4$ are: $\begin{pmatrix} y_{T+1} & \phi y_{T+1} & \phi^2 y_{T+1} & \phi^3 y_{T+1} \end{pmatrix}'$. The impacts are therefore: $$\{ i_h \}_{h=1}^4 = \begin{pmatrix} y_{T+1} - \phi y_T \\ \phi (Y_{T+1} - \phi y_T) \\ \phi^2 (Y_{T+1} - \phi y_T) \\ \phi^3 (Y_{T+1} - \phi y_T) \end{pmatrix}$$ **Weights**: To compute the weights, we just need to note that it is immediate that we can rewrite the impacts in terms of the forecast errors, $n_{T+1}$. $$\{ i_h \}_{h=1}^4 = \begin{pmatrix} 1 \\ \phi \\ \phi^2 \\ \phi^3 \end{pmatrix} n_{T+1}$$ The weights are then simply $w = \begin{pmatrix} 1 \\ \phi \\ \phi^2 \\ \phi^3 \end{pmatrix}$ We can check that this is what the `news` method has computed. 
```
# Print the news, computed by the `news` method
print(news.news)

# Manually compute the news
print()
print((y_update.iloc[0] - phi_hat * y_pre.iloc[-1]).round(6))

# Print the total impacts, computed by the `news` method
# (Note: news.total_impacts = news.revision_impacts + news.update_impacts, but
# here there are no data revisions, so total and update impacts are the same)
print(news.total_impacts)

# Manually compute the impacts
print()
print(forecasts_post - forecasts_pre)

# Print the weights, computed by the `news` method
print(news.weights)

# Manually compute the weights
print()
print(np.array([1, phi_hat, phi_hat**2, phi_hat**3]).round(6))
```

### Multivariate example: dynamic factor

In this example, we'll consider forecasting monthly core price inflation based on the Personal Consumption Expenditures (PCE) price index and the Consumer Price Index (CPI), using a Dynamic Factor model. Both of these measures track prices in the US economy and are based on similar source data, but they have a number of definitional differences. Nonetheless, they track each other relatively well, so modeling them jointly using a single dynamic factor seems reasonable.

One reason that this kind of approach can be useful is that the CPI is released earlier in the month than the PCE. Once the CPI is released, therefore, we can update our dynamic factor model with that additional datapoint, and obtain an improved forecast for that month's PCE release. A more involved version of this kind of analysis is available in Knotek and Zaman (2017).

We start by downloading the core CPI and PCE price index data from [FRED](https://fred.stlouisfed.org/), converting them to annualized monthly inflation rates, and removing two outliers. The inflation series are not mean-zero, and the dynamic factor model does not include a mean term by default; rather than de-meaning the series, we will include a constant as an exogenous variable when we construct the model below.

```
import pandas_datareader as pdr

levels = pdr.get_data_fred(['PCEPILFE', 'CPILFESL'], start='1999', end='2019').to_period('M')
infl = np.log(levels).diff().iloc[1:] * 1200
infl.columns = ['PCE', 'CPI']

# Remove two outliers
infl['PCE'].loc['2001-09':'2001-10'] = np.nan
```

To show how this works, we'll imagine that it is April 14, 2017, which is the date of the March 2017 CPI release. So that we can show the effect of multiple updates at once, we'll assume that we haven't updated our data since the end of January, so that:

- Our **previous dataset** will consist of all values for the PCE and CPI through January 2017
- Our **updated dataset** will additionally incorporate the CPI for February and March 2017 and the PCE data for February 2017. But it will not yet include the March PCE (the March 2017 PCE price index was not released until May 1, 2017).

```
# Previous dataset runs through 2017-01
y_pre = infl.loc[:'2017-01'].copy()
const_pre = np.ones(len(y_pre))
print(y_pre.tail())

# For the updated dataset, we add the 2017-02 values for both series and the
# CPI value for 2017-03
y_post = infl.loc[:'2017-03'].copy()
y_post.loc['2017-03', 'PCE'] = np.nan
const_post = np.ones(len(y_post))

# Notice the missing value for PCE in 2017-03
print(y_post.tail())
```

We chose this particular example because in March 2017, core CPI prices fell for the first time since 2010, and this information may be useful in forecasting core PCE prices for that month. The graph below shows the CPI and PCE price data as it would have been observed on April 14th$^\dagger$.

-----

$\dagger$ This statement is not entirely true, because both the CPI and PCE price indexes can be revised to a certain extent after the fact. As a result, the series that we're pulling are not exactly like those observed on April 14, 2017. This could be fixed by pulling the archived data from [ALFRED](https://alfred.stlouisfed.org/) instead of [FRED](https://fred.stlouisfed.org/), but the data we have is good enough for this tutorial.

```
# Plot the updated dataset
fig, ax = plt.subplots(figsize=(15, 3))
y_post.plot(ax=ax)
ax.hlines(0, '2009', '2017-06', linewidth=1.0)
ax.set_xlim('2009', '2017-06');
```

To perform the exercise, we first construct and fit a `DynamicFactor` model. Specifically:

- We are using a single dynamic factor (`k_factors=1`)
- We are modeling the factor's dynamics with an AR(6) model (`factor_order=6`)
- We have included a vector of ones as an exogenous variable (`exog=const_pre`), because the inflation series we are working with are not mean-zero.

```
mod_pre = sm.tsa.DynamicFactor(y_pre, exog=const_pre, k_factors=1, factor_order=6)
res_pre = mod_pre.fit()
print(res_pre.summary())
```

With the fitted model in hand, we now construct the news and impacts associated with observing the CPI for March 2017. The updated data is for February 2017 and part of March 2017, and we'll examine the impacts on both March and April.

In the univariate example, we first created an updated results object, and then passed that to the `news` method. Here, we're creating the news by directly passing the updated dataset.

Notice that:

1. `y_post` contains the entire updated dataset (not just the new datapoints)
2. We also had to pass an updated `exog` array. This array must cover **both**:
    - The entire period associated with `y_post`
    - Any additional datapoints after the end of `y_post` through the last impact date, specified by `end`

Here, `y_post` ends in March 2017, so we needed our `exog` to extend one more period, to April 2017.

```
# Create the news results
const_post_plus1 = np.ones(len(y_post) + 1)
news = res_pre.news(y_post, exog=const_post_plus1, start='2017-03', end='2017-04')
```

> **Note**:
>
> In the univariate example, above, we first constructed a new results object, and then passed that to the `news` method. We could have done that here too, although there is an extra step required. Since we are requesting an impact for a period beyond the end of `y_post`, we would still need to pass the additional value for the `exog` variable during that period to `news`:
>
> ```python
> res_post = res_pre.apply(y_post, exog=const_post)
> news = res_pre.news(res_post, exog=[1.], start='2017-03', end='2017-04')
> ```

Now that we have computed the `news`, printing `summary` is a convenient way to see the results.

```
# Show the summary of the news results
print(news.summary())
```

Because we have multiple variables, by default the summary only shows the news from updated data along with the total impacts.

From the first table, we can see that our updated dataset contains three new data points, with most of the "news" from these data coming from the very low reading in March 2017.

The second table shows that these three datapoints substantially impacted the estimate for PCE in March 2017 (which was not yet observed). This estimate was revised down by nearly 1.5 percentage points.

The updated data also impacted the forecasts in the first out-of-sample month, April 2017. After incorporating the new data, the model's forecasts for CPI and PCE inflation in that month were revised down by 0.29 and 0.17 percentage points, respectively.
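If you prefer to work with these numbers directly rather than reading them off the summary tables, the same quantities are exposed as Pandas objects on the news results, using the attributes already seen in the univariate example (plus `update_impacts`, which the code comment there indicates holds the impacts from new datapoints only):

```
# Pull the same information programmatically instead of from formatted tables
print(news.news)            # the "news" (forecast errors) for each update
print(news.total_impacts)   # total impact on each impact date / impacted variable
print(news.update_impacts)  # impacts coming from newly observed datapoints
print(news.weights)         # weights linking each update to each impact
```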
While these tables show the "news" and the total impacts, they do not show how much of each impact was caused by each updated datapoint. To see that information, we need to look at the details tables. One way to see the details tables is to pass `include_details=True` to the `summary` method. To avoid repeating the tables above, however, we'll just call the `summary_details` method directly. ``` print(news.summary_details()) ``` This table shows that most of the revisions to the estimate of PCE in April 2017, described above, came from the news associated with the CPI release in March 2017. By contrast, the CPI release in February had only a little effect on the April forecast, and the PCE release in February had essentially no effect. ### Bibliography Bańbura, Marta, Domenico Giannone, and Lucrezia Reichlin. "Nowcasting." The Oxford Handbook of Economic Forecasting. July 8, 2011. Bańbura, Marta, Domenico Giannone, Michele Modugno, and Lucrezia Reichlin. "Now-casting and the real-time data flow." In Handbook of economic forecasting, vol. 2, pp. 195-237. Elsevier, 2013. Bańbura, Marta, and Michele Modugno. "Maximum likelihood estimation of factor models on datasets with arbitrary pattern of missing data." Journal of Applied Econometrics 29, no. 1 (2014): 133-160. Knotek, Edward S., and Saeed Zaman. "Nowcasting US headline and core inflation." Journal of Money, Credit and Banking 49, no. 5 (2017): 931-968.
``` import torch import torch.nn as nn import torch.nn.functional as F from torchvision import datasets, transforms from torch.autograd import Variable from collections import OrderedDict import numpy as np import matplotlib.pyplot as plt plt.rcParams['image.cmap'] = 'gray' %matplotlib inline # input batch size for training (default: 64) batch_size = 64 # input batch size for testing (default: 1000) test_batch_size = 1000 # number of epochs to train (default: 10) epochs = 10 # learning rate (default: 0.01) lr = 0.01 # SGD momentum (default: 0.5) momentum = 0.5 # disables CUDA training no_cuda = True # random seed (default: 1) seed = 1 # how many batches to wait before logging training status log_interval = 10 # Setting seed for reproducibility. torch.manual_seed(seed) cuda = not no_cuda and torch.cuda.is_available() print("CUDA: {}".format(cuda)) if cuda: torch.cuda.manual_seed(seed) cudakwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {} mnist_transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) # Precalcualted values. ]) train_set = datasets.MNIST( root='data', train=True, transform=mnist_transform, download=True, ) test_set = datasets.MNIST( root='data', train=False, transform=mnist_transform, download=True, ) train_loader = torch.utils.data.DataLoader( dataset=train_set, batch_size=batch_size, shuffle=True, **cudakwargs ) test_loader = torch.utils.data.DataLoader( dataset=test_set, batch_size=test_batch_size, shuffle=True, **cudakwargs ) ``` ## Loading the model. Here we will focus only on `nn.Sequential` model types as they are easier to deal with. Generalizing the methods described here to `nn.Module` will require more work. ``` class Flatten(nn.Module): def forward(self, x): return x.view(x.size(0), -1) def __str__(self): return 'Flatten()' model = nn.Sequential(OrderedDict([ ('conv2d_1', nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)), ('relu_1', nn.ReLU()), ('max_pooling2d_1', nn.MaxPool2d(kernel_size=2)), ('conv2d_2', nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3)), ('relu_2', nn.ReLU()), ('dropout_1', nn.Dropout(p=0.25)), ('flatten_1', Flatten()), ('dense_1', nn.Linear(3872, 64)), ('relu_3', nn.ReLU()), ('dropout_2', nn.Dropout(p=0.5)), ('dense_2', nn.Linear(64, 10)), ('readout', nn.LogSoftmax()) ])) model.load_state_dict(torch.load('example_torch_mnist_model.pth')) ``` ## Accessing the layers A `torch.nn.Sequential` module serves itself as an iterable and subscriptable container for all its children modules. ``` for i, layer in enumerate(model): print('{}\t{}'.format(i, layer)) ``` Moreover `.modules` and `.children` provide generators for accessing layers. ``` for m in model.modules(): print(m) for c in model.children(): print(c) ``` ## Getting the weigths. ``` conv2d_1_weight = model[0].weight.data.numpy() conv2d_1_weight.shape for i in range(32): plt.imshow(conv2d_1_weight[i, 0]) plt.show() ``` ### Getting layer properties The layer objects themselfs expose most properties as attributes. ``` conv2d_1 = model[0] conv2d_1.kernel_size conv2d_1.stride conv2d_1.dilation conv2d_1.in_channels, conv2d_1.out_channels conv2d_1.padding conv2d_1.output_padding dropout_1 = model[5] dropout_1.p dense_1 = model[7] dense_1.in_features, dense_1.out_features ```
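A quick way to confirm that the saved weights were loaded correctly is to run the model over the test set and check the accuracy. A minimal sketch: `model.eval()` disables the dropout layers and `torch.no_grad()` skips gradient tracking; it assumes the CPU setup used above (`no_cuda = True`).

```
# Evaluate the loaded model on the MNIST test set.
model.eval()
correct, total = 0, 0
with torch.no_grad():
    for data, target in test_loader:
        output = model(data)               # log-probabilities from the LogSoftmax readout
        pred = output.argmax(dim=1)
        correct += (pred == target).sum().item()
        total += target.size(0)
print('Test accuracy: {:.2%}'.format(correct / total))
```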
## Next Task: compute precision and recall threshold 25: zoomy, sustain->thick, smooth (user results) zoomy, sustain -> dark, smooth (word2word matcher resuts) smooth tp dark fp thik tn (fn?) precision = tp/(tp+fp) recall = tp/(tp+fn) for one word, cant compute recall later: tensorflow language models, Optimising (Kullback-Leibler) for the distribution However, note: Let A and B be any sets with |A|=|B| (|.| being the set cardinality, i.e. number of elements in the set). It follows that fp = |B\A∩B|=|B|-|A∩B| = |A|-|A∩B| = |A\A∩B|=fn. It hence follows that precision = tp/(tp+fp)=tp/(tp+fn)=recall I understood your definition "A is the set of words in our ground truth, when you apply a threshold to the sliders B is the set of words from the output of our words matcher" in a way such that |A|=|B| ``` import sys import ipdb import pandas as pd import numpy as np from tqdm import tqdm sys.path.append(r'C:\Temp\SoundOfAI\rg_text_to_sound\tts_pipeline\src') from match_word_to_words import prepare_dataset,word_to_wordpair_estimator,word_to_words_matcher,prepare_dataset import matplotlib.pyplot as plt df = pd.read_csv('text_to_qualities.csv') colnames = df.columns display(df.head(2)) df.shape df = pd.read_csv('text_to_qualities.csv') dfnew[dfnew.description.str.match('\'')] dfnew['description'] = dfnew.description.str.replace("'","") dfnew['description']=dfnew.description.str.lower().str.replace('(\(not.*\))','',regex=True) dfnew = dfnew[~dfnew.description.str.match('\(.*\)')] dfnew.head() wordlist = dfnew.description unique_word_list = np.unique(wordlist).tolist() len(wordlist),len(unique_word_list) ``` threshold 25: zoomy, sustain->thick, smooth (user results) zoomy, sustain -> dark, smooth (word2word matcher resuts) smooth tp dark fp thik tn precision = tp/(tp+fp) recall = tp/(tp+fn) for one word, cant compute recall # word pair estimator ``` df_score df_score = dfnew.iloc[:,1:] descriptions = dfnew.iloc[:,0] wordpairnames = df_score.columns.tolist() df_score.head() target_word_pairs = [('bright', 'dark'), ('full', 'hollow'),( 'smooth', 'rough'), ('warm', 'metallic'), ('clear', 'muddy'), ('thin', 'thick'), ('pure', 'noisy'), ('rich', 'sparse'), ('soft', 'hard')] wordpairnames_to_wordpair_dict = {s:t for s,t in zip(wordpairnames,target_word_pairs)} wordpairnames_to_wordpair_dict list(np.arange(49.8,50,0.1)) A=set([1,2,3]) B=set([3,4,5]) AandB = A.intersection(B) B.difference(AandB) def single_word_precision_recall(word,scorerow,threshold,w2wpe,wordpairnames_to_wordpair_dict): elems_above = scorerow[(scorerow>(100-threshold)) ] elems_below = scorerow[(scorerow<=threshold) ] words_above = [wordpairnames_to_wordpair_dict[wordpairname][1] for wordpairname in elems_above.index] words_below = [wordpairnames_to_wordpair_dict[wordpairname][0] for wordpairname in elems_below.index] A = set(words_above+words_below) opposite_pairs_beyond_threshold = elems_above.index.tolist()+elems_below.index.tolist() B = set([w2wpe.match_word_to_wordpair(word,ind)['closest word'] for ind in opposite_pairs_beyond_threshold]) assert len(A)==len(B), 'This should never occurr!' 
AandB = set(A).intersection(B) tp = AandB fp = B.difference(AandB) # were found but shouldn't have been fn = A.difference(AandB) # were not found but should have been den = len(tp)+len(fp) if den==0: precision = np.NaN else: precision = len(tp)/den den = len(tp)+len(fn) if den==0: recall = np.NaN else: recall = len(tp)/den if precision!=recall and not np.isnan(precision): print('This should never occur!') print('word, A,B,AandB,tp,fp,fn,precision,recall') print(word, A,B,AandB,tp,fp,fn,precision,recall) return precision,recall,len(A) lang_model='en_core_web_sm' w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) w2wpe.match_word_to_wordpair('full','full_vs_hollow') word = descriptions[0] scorerow = df_score.iloc[0,:] prec_50_list=[] NrRelevantWordpairList=[] for word, (irow,scorerow) in tqdm(zip(descriptions, df_score.iterrows())): prec,rec,NrRelevantWordpairs = single_word_precision_recall(word,scorerow,10,w2wpe,wordpairnames_to_wordpair_dict) prec_50_list.append(prec) NrRelevantWordpairList.append(NrRelevantWordpairs) pd.Series(prec_50_list).dropna() len(prec_50_list),np.mean(prec_50_list) ' '.join([f'{i:1.1f}' for i in thresholdlist]) def compute_accuracy(lang_model='en_core_web_lg',thresholdlist=None): w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) if thresholdlist is None: thresholdlist = list(np.arange(0,50,2))+list(np.arange(45,50,0.5))+[50.] mean_accuracy_list = [] nrrelevantlist = [] for threshold in tqdm(thresholdlist): acc_list=[] NrRelevantWordpairList=[] for word, (irow,scorerow) in zip(descriptions, df_score.iterrows()): precision,recall,NrRelevantWordpairs = single_word_precision_recall(word,scorerow,threshold,w2wpe,wordpairnames_to_wordpair_dict) acc_list.append(precision) NrRelevantWordpairList.append(NrRelevantWordpairs) assert len(acc_list)>0, 'something is wrong...' meanAccuracyVal = pd.Series(acc_list).dropna().mean() NrRelevantVal = np.mean(NrRelevantWordpairList) mean_accuracy_list.append(meanAccuracyVal) nrrelevantlist.append(NrRelevantVal) return mean_accuracy_list,nrrelevantlist %time lang_model1 = 'en_core_web_sm' lang_model2 = 'en_core_web_lg' mean_accuracy_list1,nrrelevantlist1 = compute_accuracy(lang_model=lang_model1) mean_accuracy_list2,nrrelevantlist2 = compute_accuracy(lang_model=lang_model2) lang_model3 = 'en_core_web_md' thresholdlist = list(np.arange(0,50,2))+list(np.arange(45,50,0.5))+[50.] mean_accuracy_list3,nrrelevantlist3 = compute_accuracy(lang_model=lang_model3,thresholdlist=thresholdlist) from nltk.corpus import wordnet # Then, we're going to use the term "program" to find synsets like so: syns = wordnet.synsets("program") if np.all(np.isclose(np.array(nrrelevantlist1),np.array(nrrelevantlist2))): nrrelevantlist = nrrelevantlist1 plt.figure(1,figsize=(15,7)) plt.subplot(3,1,1) plt.plot(thresholdlist,mean_accuracy_list1,marker='o',label='Accuracy') plt.suptitle(f'Accuracy vs. 
Threshold\nWords considered have (score <= threshold) or (score > 100-threshold)') plt.title(f'Accuracy of {lang_model1}') plt.ylabel('Accuracy') plt.legend() plt.subplot(2,1,2) plt.plot(thresholdlist,mean_accuracy_list2,marker='o',label='Accuracy') plt.title(f'Accuracy of {lang_model2}') plt.ylabel('Accuracy') plt.legend() plt.subplot(3,1,3) plt.plot(thresholdlist,nrrelevantlist,marker='o') plt.title('Average number of relevant sliders') plt.xlabel('threshold value') plt.ylabel('Nr of Sliders') plt.yticks(np.arange(1,10,2)) plt.subplots_adjust(hspace=.6) plt.figure(1,figsize=(15,7)) plt.subplot(1,1,1) plt.plot(thresholdlist,mean_accuracy_list3,marker='o',label='Accuracy') plt.suptitle(f'Accuracy vs. Threshold\nWords considered have (score <= threshold) or (score > 100-threshold)') plt.title(f'Accuracy of {lang_model3}') plt.ylabel('Accuracy') plt.legend() plt.figure(1,figsize=(15,7)) plt.subplot(2,1,1) plt.plot(thresholdlist,mean_accuracy_list1,marker='o',label=f'Accuracy of {lang_model1}') plt.plot(thresholdlist,mean_accuracy_list2,marker='o',label=f'Accuracy of {lang_model2}') plt.suptitle(f'Accuracy vs. Threshold\nWords considered have (score <= threshold) or (score > 100-threshold)') plt.ylabel('Accuracy') plt.legend() plt.subplot(2,1,2) plt.plot(thresholdlist,nrrelevantlist,marker='o') plt.title('Average number of relevant sliders') plt.xlabel('threshold value') plt.ylabel('Nr of Sliders') plt.yticks(np.arange(1,10,2)) plt.subplots_adjust(hspace=.6) plt.savefig('Accuracy_vs_Threshold.svg') row lang_model = 'en_core_web_sm' w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) prediction_dict = w2wpe.match_word_to_wordpair(word,ind) ind,prediction_dict[] ind,w2wpe.match_word_to_wordpair(word,ind) def compute_mean_acc(dfnew,df_score,thresholdmargin,threshold=50, required_confidence=0, lang_model='en_core_web_sm'): """ Take the opposite quality pairs for which the slider value is outside the 50+/- <thresholdmargin> band. Compute the accuracy in predicting the correct opposite-pair word for each such pair. threshold: where to split a score to lower or upper quality in pair: 50 is the most natural value. The prediction must be with a (minimum) < required_confidence > otherwise the prediction is deemed unsure. The returned accuracy is computed as accuracy = NrCorrect/(NrCorrect+NrWrong+NrUnsure) averaged over all words in <dfnew>.description """ w2wpe = word_to_wordpair_estimator() w2wpe.build(wordpairnames,target_word_pairs,lang_model=lang_model) acc_list = [] unsure_list = [] NrCorrect = 0 NrWrong = 0 NrUnsure = 0 for word, (irow,scorerow) in zip(dfnew.description, df_score.iterrows()): #determine which opposite quality pairs will be correctly predicted as the first and second word in the word pair, respectively valid_qualities = scorerow[(scorerow > threshold+thresholdmargin )|(scorerow < threshold-thresholdmargin)] below_th = valid_qualities[valid_qualities<threshold].index.tolist()#first word in the word pair is correct above_th = valid_qualities[valid_qualities>threshold].index.tolist()#second word in the word pair is correct #word_pair_tuple = wordpairnames_to_wordpair_dict[word_pair] NrCorrect = 0 NrWrong = 0 NrUnsure = 0 for word_pair in above_th: res = w2wpe.match_word_to_wordpair(word,word_pair) if res['slider value']>(threshold+required_confidence):# Add prediction threshold? 
NrCorrect+=1 elif res['slider value']<(threshold-required_confidence): NrWrong+=1 else: NrUnsure+=1 #if required confidence was not reached for word_pair in below_th: res = w2wpe.match_word_to_wordpair(word,word_pair) if res['slider value']<(threshold-required_confidence):# Add prediction threshold? NrCorrect+=1 elif res['slider value']>threshold+required_confidence: NrWrong+=1 else: NrUnsure+=1 #if required confidence was not reached if len(below_th)+len(above_th)==0: continue accuracy = NrCorrect/(NrCorrect+NrWrong+NrUnsure) unsure_ratio = NrUnsure/(NrCorrect+NrWrong+NrUnsure) # the fraction of cases where the prediction did not reach the required confidence acc_list.append(accuracy) unsure_list.append(unsure_ratio) #resdict = {'NrCorrect':NrCorrect, 'NrWrong':NrWrong, 'NrUnsure':NrUnsure} mean_acc = np.mean(acc_list) #list of accuracies for each word, over all available sliders mean_unsure = np.mean(unsure_list) del w2wpe return mean_acc,mean_unsure def f(): ipdb.set_trace() return wordpair_matcher_dict['bright_vs_dark'].match_word_to_words('sunny') f() y = np.array([np.where(row['bright_vs_dark']>=50,1,0) for row in rowlist]) y.shape,yhat1.shape yhat_binary = np.array([0 if yhatelem==target_word_pair[0] else 1 for yhatelem in yhat1]) yhat_binary.shape len(yhat),len(rowlist) accuracy_score(y,yhat_binary) yhat1 df_detailed = pd.DataFrame(index=wordlist) df_detailed.head(7) wordlist = [w for r,w in generate_training_examples(df)] rowlist = [r for r,w in generate_training_examples(df)] acc_scores=dict() for target_word_pair,opposite_quality_pair in zip(target_word_pairs,colnames): y = np.array([np.where(row[opposite_quality_pair]>=50,1,0) for row in rowlist]) print(target_word_pair,opposite_quality_pair) w2wm = word_to_words_matcher() w2wm.build(target_word_pair) yhat1 = np.array(f(wordlist,w2wm,variant=1)) df_detailed[opposite_quality_pair] = yhat1 yhat_binary = np.array([0 if yhatelem==target_word_pair[0] else 1 for yhatelem in yhat1]) acc_score = accuracy_score(y,yhat_binary) print(f'{acc_score:1.3f}') acc_scores[opposite_quality_pair] = acc_score print(df_detailed.shape) df_detailed.to_excel('predicted_qualities.xlsx') df_detailed.head(20) pd.Series(acc_scores).plot.bar(ylabel='accuracy') plt.plot(plt.xlim(),[0.5,0.5],'--',c='k') plt.title(f'Accuracy of Spacy word vectors in predicting\ntext_to_qualities.csv ({len(wordlist)} qualities)') plt.ylim(0,1) ``` ## Next Task: compute precision and recall threshold 25: zoomy, sustain->thick, smooth (user results) zoomy, sustain -> dark, smooth (word2word matcher resuts) smooth tp dark fp thik tn precision = tp/(tp+fp) recall = tp/(tp+fn) for one word, cant compute recall later: tensorflow language models, Optimising (Kullback-Leibler) for the distribution
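The claim made at the top of this notebook, that precision equals recall whenever |A| = |B|, is easy to sanity-check numerically. Here is a small self-contained sketch using the smooth/dark/thick example above:

```
# Sanity check: with |A| == |B|, the number of false positives equals the
# number of false negatives, so precision == recall.
A = {"smooth", "thick"}   # ground-truth words from the sliders
B = {"smooth", "dark"}    # words returned by the word matcher
assert len(A) == len(B)

tp = len(A & B)
fp = len(B - A)   # returned but not in the ground truth
fn = len(A - B)   # in the ground truth but not returned

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)  # identical by construction
```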
# K-means clustering When working with large datasets it can be helpful to group similar observations together. This process, known as clustering, is one of the most widely used in Machine Learning and is often used when our dataset comes without pre-existing labels. In this notebook we're going to implement the classic K-means algorithm, the simplest and most widely used clustering method. Once we've implemented it we'll use it to split a dataset into groups and see how our clustering compares to the 'true' labelling. ## Import Modules ``` import numpy as np import random import pandas as pd import matplotlib.pyplot as plt from scipy.stats import multivariate_normal ``` ## Generate Dataset ``` modelParameters = {'mu':[[-2,1], [0.5, -1], [0,1]], 'pi':[0.2, 0.35, 0.45], 'sigma':0.4, 'n':200} #Check that pi sums to 1 if np.sum(modelParameters['pi']) != 1: print('Mixture weights must sum to 1!') data = [] #determine which mixture each point belongs to def generateLabels(n, pi): #Generate n realisations of a categorical distribution given the parameters pi unif = np.random.uniform(size = n) #Generate uniform random variables labels = [(u < np.cumsum(pi)).argmax() for u in unif] #assign cluster return labels #Given the labels, generate from the corresponding normal distribution def generateMixture(labels, params): normalSamples = [] for label in labels: #Select Parameters mu = params['mu'][label] Sigma = np.diag([params['sigma']**2]*len(mu)) #sample from multivariate normal samp = np.random.multivariate_normal(mean = mu, cov = Sigma, size = 1) normalSamples.append(samp) normalSamples = np.reshape(normalSamples, (len(labels), len(params['mu'][0]))) return normalSamples labels = generateLabels(100, modelParameters['pi']) #labels - (in practice we don't actually know what these are!) X = generateMixture(labels, modelParameters) #features - (we do know what these are) ``` # Quickly plot the data so we know what it looks like ``` plt.figure(figsize=(10,6)) plt.scatter(X[:,0], X[:,1],c = labels) plt.show() ``` When doing K-means clustering, our goal is to sort the data into 3 clusters using the data $X$. When we're doing clustering we don't have access to the colour (label) of each point, so the data we're actually given would look like this: ``` plt.figure(figsize=(10,6)) plt.scatter(X[:,0], X[:,1]) plt.title('Example data - no labels') plt.show() ``` If we inspect the data we can still see that the data are roughly made up by 3 groups, one in the top left corner, one in the top right corner and one in the bottom right corner ## How does K-means work? The K in K-means represents the number of clusters, K, that we will sort the data into. Let's imagine we had already sorted the data into K clusters (like in the first plot above) and were trying to decide what the label of a new point should be. It would make sense to assign it to the cluster which it is closest to. But how do we define 'closest to'? One way would be to give it the same label as the point that is closest to it (a 'nearest neighbour' approach), but a more robust way would be to determine where the 'middle' of each cluster was and assign the new point to the cluster with the closest middle. We call this 'middle' the Cluster Centroid and we calculate it be taking the average of all the points in the cluster. That's all very well and good if we already have the clusters in place, but the whole point of the algorithm is to find out what the clusters are! To find the clusters, we do the following: 1. 
Randomly initialise K Cluster Centroids 2. Assign each point to the Cluster Centroid that it is closest to. 3. Update the Cluster Centroids as the average of all points currently assigned to that centroid 4. Repeat steps 2-3 until convergence ### Why does K-means work? Our aim is to find K Cluster Centroids such that the overall distance between each datapoint and its Cluster Centroid is minimised. That is, we want to choose cluster centroids $C = \{C_1,...,C_K\}$ such that the error function: $$E(C) = \sum_{i=1}^n ||x_i-C_{x_i}||^2$$ is minimised, where $C_{x_i}$ is the Cluster Centroid associated with the ith observation and $||x_i-C_{x_i}||$ is the Euclidean distance between the ith observation and associated Cluster Centroid. Now assume after $m$ iterations of the algorithm, the current value of $E(C)$ was $\alpha$. By carrying out step 2, we make sure that each point is assigned to the nearest cluster centroid - by doing this, either $\alpha$ stays the same (every point was already assigned to the closest centroid) or $\alpha$ gets smaller (one or more points is moved to a nearer centroid and hence the total distance is reduced). Similarly with step 3, by changing the centroid to be the average of all points in the cluster, we minimise the total distance associated with that cluster, meaning $\alpha$ can either stay the same or go down. In this way we see that as we run the algorithm $E(C)$ is non-increasing, so by continuing to run the algorithm our results can't get worse - hopefully if we run it for long enough then the results will be sensible! ``` class KMeans: def __init__(self, data, K): self.data = data #dataset with no labels self.K = K #Number of clusters to sort the data into #Randomly initialise Centroids self.Centroids = np.random.normal(0,1,(self.K, self.data.shape[1])) #If the data has p features then should be a K x p array def closestCentroid(self, x): #Takes a single example and returns the index of the closest centroid #Recall centroids are saved as self.Centroids pass def assignToCentroid(self): #Want to assign each observation to a centroid by passing each observation to the function closestCentroid pass def updateCentroids(self): #Now based on the current cluster assignments (stored in self.assignments) update the Centroids pass def runKMeans(self, tolerance = 0.00001): #When the improvement between two successive evaluations of our error function is less than tolerance, we stop change = 1000 #Initialise change to be a big number numIterations = 0 self.CentroidStore = [np.copy(self.Centroids)] #We want to be able to keep track of how the centroids evolved over time #while change > tolerance: #Code goes here... print(f'K-means Algorithm converged in {numIterations} steps') myKM = KMeans(X,3) myKM.runKMeans() ``` ## Let's plot the results ``` c = [0,1,2]*len(myKM.CentroidStore) plt.figure(figsize=(10,6)) plt.scatter(np.array(myKM.CentroidStore).reshape(-1,2)[:,0], np.array(myKM.CentroidStore).reshape(-1,2)[:,1],c=np.array(c), s = 200, marker = '*') plt.scatter(X[:,0], X[:,1], s = 12) plt.title('Example data from a mixture of Gaussians - Cluster Centroid traces') plt.show() ``` The stars of each colour above represents the trajectory of each cluster centroid as the algorithm progressed. Starting from a random initialisation, the centroids raplidly converged to a separate cluster, which is encouraging. Now let's plot the data with the associated labels that we've assigned to them. 
``` plt.figure(figsize=(10,6)) plt.scatter(X[:,0], X[:,1], s = 20, c = myKM.assignments) plt.scatter(np.array(myKM.Centroids).reshape(-1,2)[:,0], np.array(myKM.Centroids).reshape(-1,2)[:,1], s = 200, marker = '*', c = 'red') plt.title('Example data from a mixture of Gaussians - Including Cluster Centroids') plt.show() ``` The plot above shows the final clusters (with red Cluster Centroids) assigned by the model, which should be pretty close to the 'true' clusters at the top of the page. Note: It's possible that although the clusters are the same the labels might be different - remember that K-means isn't supposed to identify the correct label, it's supposed to group the data in clusters which in reality share the same labels. The data we've worked with in this notebook had an underlying structure that made it easy for K-means to identify distinct clusters. However let's look at an example where K-means doesn't perform so well ## The sting in the tail - A more complex data structure ``` theta = np.linspace(0, 2*np.pi, 100) r = 15 x1 = r*np.cos(theta) x2 = r*np.sin(theta) #Perturb the values in the circle x1 = x1 + np.random.normal(0,2,x1.shape[0]) x2 = x2 + np.random.normal(0,2,x2.shape[0]) z1 = np.random.normal(0,3,x1.shape[0]) z2 = np.random.normal(0,3,x2.shape[0]) x1 = np.array([x1,z1]).reshape(-1) x2 = np.array([x2,z2]).reshape(-1) plt.scatter(x1,x2) plt.show() ``` It might be the case that the underlying generative structure that we want to capture is that the 'outer ring' in the plot corresponds to a certain kind of process and the 'inner circle' corresponds to another. ``` #Get data in the format we want newX = [] for i in range(x1.shape[0]): newX.append([x1[i], x2[i]]) newX = np.array(newX) #Run KMeans myNewKM = KMeans(newX,2) myNewKM.runKMeans() plt.figure(figsize=(10,6)) plt.scatter(newX[:,0], newX[:,1], s = 20, c = np.array(myNewKM.assignments)) plt.scatter(np.array(myNewKM.Centroids).reshape(-1,2)[:,0], np.array(myNewKM.Centroids).reshape(-1,2)[:,1], s = 200, marker = '*', c = 'red') plt.title('Assigned K-Means labels for Ring data ') plt.show() ``` The above plot indicates that K-means isn't able to identify the ring-like structure that we mentioned above. The clustering it has performed is perfectly valid - remember in K-means' world, labels don't exist and this is a legitmate clustering of the data! However if we were to use this clustering our subsequent analyses might be negatively impacted. In a future post we'll implement a method which is capable of capturing non-linear relationships more effectively (the Gaussian Mixture Model).
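For reference, the stubbed methods of the `KMeans` class above (`closestCentroid`, `assignToCentroid`, `updateCentroids` and the main loop of `runKMeans`) could be filled in roughly as follows. This is a minimal sketch consistent with steps 2-3 described earlier, not necessarily the original solution used to produce the plots:

```
import numpy as np

class KMeansFilled:
    def __init__(self, data, K):
        self.data = data
        self.K = K
        self.Centroids = np.random.normal(0, 1, (self.K, self.data.shape[1]))

    def closestCentroid(self, x):
        #Index of the centroid with the smallest Euclidean distance to x
        return int(np.argmin(np.linalg.norm(self.Centroids - x, axis=1)))

    def assignToCentroid(self):
        #Step 2: assign every observation to its nearest centroid
        self.assignments = np.array([self.closestCentroid(x) for x in self.data])

    def updateCentroids(self):
        #Step 3: move each centroid to the mean of the points assigned to it
        for k in range(self.K):
            members = self.data[self.assignments == k]
            if len(members) > 0:  #leave empty clusters where they are
                self.Centroids[k] = members.mean(axis=0)

    def error(self):
        #Sum of squared distances between each point and its centroid
        return np.sum((self.data - self.Centroids[self.assignments])**2)

    def runKMeans(self, tolerance = 0.00001):
        change = 1000  #Initialise change to be a big number
        previous = None
        numIterations = 0
        self.CentroidStore = [np.copy(self.Centroids)]
        while change > tolerance:
            self.assignToCentroid()
            self.updateCentroids()
            self.CentroidStore.append(np.copy(self.Centroids))
            current = self.error()
            if previous is not None:
                change = abs(previous - current)
            previous = current
            numIterations += 1
        print(f'K-means Algorithm converged in {numIterations} steps')
```

Plugging these bodies into the original class (keeping its name) should reproduce the kind of centroid traces shown in the plots above.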
# RDF graph processing against the integrated POIs #### Auxiliary function to format SPARQL query results as a data frame: ``` import pandas as pds def sparql_results_frame(qres): cols = qres.vars out = [] for row in qres: item = [] for c in cols: item.append(row[c]) out.append(item) pds.set_option('display.max_colwidth', 0) return pds.DataFrame(out, columns=cols) ``` #### Create an **RDF graph** with the triples resulting from data integration: ``` from rdflib import Graph,URIRef g = Graph() g.parse('./output/integrated.nt', format="nt") # Get graph size (in number of statements) len(g) ``` #### Number of statements per predicate: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """SELECT ?p (COUNT(*) AS ?cnt) { ?s ?p ?o . } GROUP BY ?p ORDER BY DESC(?cnt)""") # display unformatted query results #for row in qres: # print("%s %s" % row) # display formatted query results sparql_results_frame(qres) ``` #### Identify POIs having _**name**_ similar to a user-specified one: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT DISTINCT ?poiURI ?title WHERE { ?poiURI slipo:name ?n . ?n slipo:nameValue ?title . FILTER regex(?title, "^Achilleio", "i") } """) # display query results sparql_results_frame(qres) ``` #### **Fusion action** regarding a specific POI: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT ?prov ?defaultAction ?conf WHERE { ?poiURI provo:wasDerivedFrom ?prov . ?poiURI slipo:name ?n . ?n slipo:nameValue ?title . ?poiURI slipo:address ?a . ?a slipo:street ?s . ?prov provo:default-fusion-action ?defaultAction . ?prov provo:fusion-confidence ?conf . FILTER regex(?title, "Achilleio", "i") } """) print("Query returned %d results." % len(qres) ) # display query results sparql_results_frame(qres) ``` #### **Pair of original POIs** involved in this fusion: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT ?leftURI ?rightURI ?conf WHERE { <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> provo:left-uri ?leftURI . <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> provo:right-uri ?rightURI . <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> provo:fusion-confidence ?conf . } """) print("Query returned %d results." % len(qres)) # display pair of POI URIs along with the fusion confidence sparql_results_frame(qres) ``` #### Values per attribute **before and after fusion** regarding this POI: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT DISTINCT ?valLeft ?valRight ?valFused WHERE { ?poiURI provo:wasDerivedFrom <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> . ?poiURI provo:appliedAction ?action . ?action provo:attribute ?attr . ?action provo:left-value ?valLeft . ?action provo:right-value ?valRight . ?action provo:fused-value ?valFused . } """) print("Query returned %d results." % len(qres)) # print query results sparql_results_frame(qres) ``` # POI Analytics #### Once integrated POI data has been saved locally, analysis can be perfomed using tools like **pandas** _DataFrames_, **geopandas** _GeoDataFrames_ or other libraries. 
#### Unzip the exported CSV file with the results of data integration:

```
import os
import zipfile

with zipfile.ZipFile('./output/corfu-integrated-pois.zip','r') as zip_ref:
    zip_ref.extractall("./output/")

os.rename('./output/points.csv', './output/corfu_pois.csv')
```

#### Load the CSV data into a _DataFrame_:

```
import pandas as pd

pois = pd.read_csv('./output/corfu_pois.csv', delimiter='|', error_bad_lines=False)

# Geometries in the exported CSV file are listed in Extended Well-Known Text (EWKT)
# Since shapely does not support EWKT, update the geometry by removing the SRID value from EWKT
pois['the_geom'] = pois['the_geom'].apply(lambda x: x.split(';')[1])
pois.head()
```

#### Create a _GeoDataFrame_:

```
import geopandas
from shapely import wkt

pois['the_geom'] = pois['the_geom'].apply(wkt.loads)
gdf = geopandas.GeoDataFrame(pois, geometry='the_geom')
```

#### Display the location of the exported POIs on a **simplified plot** using _matplotlib_:

```
%matplotlib inline
import matplotlib.pyplot as plt

world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))

# Restrict focus to Greece:
ax = world[world.name == 'Greece'].plot(
    color='white', edgecolor='black')

# Plot the contents of the GeoDataFrame in blue dots:
gdf.plot(ax=ax, color='blue')

plt.show()
```
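If the integrated POIs need to be reused in other GIS tools, the _GeoDataFrame_ can be tagged with a coordinate reference system and exported. A small sketch, assuming the exported EWKT geometries were in WGS84 (EPSG:4326) and a recent geopandas version is installed:

```
# Assumption: the SRID stripped from the EWKT geometries was 4326 (WGS84 lon/lat)
gdf = gdf.set_crs(epsg=4326)

# Bounding box of all POIs: (minx, miny, maxx, maxy)
print(gdf.total_bounds)

# Export to GeoJSON for inspection in external tools such as QGIS
gdf.to_file('./output/corfu_pois.geojson', driver='GeoJSON')
```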
# Keras Intro: Shallow Models Keras Documentation: https://keras.io In this notebook we explore how to use Keras to implement 2 traditional Machine Learning models: - **Linear Regression** to predict continuous data - **Logistic Regression** to predict categorical data ## Linear Regression ``` %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ``` ### 0. Load data ``` df = pd.read_csv('../data/weight-height.csv') df.head() df.plot(kind='scatter', x='Height', y='Weight', title='Weight and Height in adults') ``` ### 1. Create Train/Test split ``` from sklearn.model_selection import train_test_split X = df[['Height']].values y = df['Weight'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=0) ``` ### 2. Train Linear Regression Model ``` from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam, SGD model = Sequential() model.add(Dense(1, input_shape=(1,))) model.summary() model.compile(Adam(lr=0.9), 'mean_squared_error') model.fit(X_train, y_train, epochs=40) ``` ### 3. Evaluate Model Performance ``` from sklearn.metrics import r2_score y_train_pred = model.predict(X_train).ravel() y_test_pred = model.predict(X_test).ravel() print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) df.plot(kind='scatter', x='Height', y='Weight', title='Weight and Height in adults') plt.plot(X_test, y_test_pred, color='red') W, B = model.get_weights() W B ``` # Classification ### 0. Load Data ``` df = pd.read_csv('../data/user_visit_duration.csv') df.head() df.plot(kind='scatter', x='Time (min)', y='Buy') ``` ### 1. Create Train/Test split ``` X = df[['Time (min)']].values y = df['Buy'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=0) ``` ### 2. Train Logistic Regression Model ``` model = Sequential() model.add(Dense(1, input_shape=(1,), activation='sigmoid')) model.summary() model.compile(SGD(lr=0.5), 'binary_crossentropy', metrics=['accuracy']) model.fit(X_train, y_train, epochs=40) ax = df.plot(kind='scatter', x='Time (min)', y='Buy', title='Purchase behavior VS time spent on site') t = np.linspace(0, 4) ax.plot(t, model.predict(t), color='orange') plt.legend(['model', 'data']) ``` ### 3. Evaluate Model Performance #### Accuracy ``` from sklearn.metrics import accuracy_score y_train_pred = model.predict_classes(X_train) y_test_pred = model.predict_classes(X_test) print("The train accuracy score is {:0.3f}".format(accuracy_score(y_train, y_train_pred))) print("The test accuracy score is {:0.3f}".format(accuracy_score(y_test, y_test_pred))) ``` #### Confusion Matrix & Classification Report ``` from sklearn.metrics import confusion_matrix confusion_matrix(y_test, y_test_pred) def pretty_confusion_matrix(y_true, y_pred, labels=["False", "True"]): cm = confusion_matrix(y_true, y_pred) pred_labels = ['Predicted '+ l for l in labels] df = pd.DataFrame(cm, index=labels, columns=pred_labels) return df pretty_confusion_matrix(y_test, y_test_pred, ['Not Buy', 'Buy']) from sklearn.metrics import classification_report print(classification_report(y_test, y_test_pred)) ``` ## Exercise You've just been hired at a real estate investment firm and they would like you to build a model for pricing houses. 
You are given a dataset that contains data for house prices and a few features like number of bedrooms, size in square feet and age of the house. Let's see if you can build a model that is able to predict the price. In this exercise we extend what we have learned about linear regression to a dataset with more than one feature. Here are the steps to complete it: 1. Load the dataset ../data/housing-data.csv - create 2 variables called X and y: X shall be a matrix with 3 columns (sqft,bdrms,age) and y shall be a vector with 1 column (price) - create a linear regression model in Keras with the appropriate number of inputs and output - split the data into train and test with a 20% test size, use `random_state=0` for consistency with classmates - train the model on the training set and check its accuracy on training and test set - how's your model doing? Is the loss decreasing? - try to improve your model with these experiments: - normalize the input features: - divide sqft by 1000 - divide age by 10 - divide price by 100000 - use a different value for the learning rate of your model - use a different optimizer - once you're satisfied with training, check the R2score on the test set ``` # Load the dataset ../data/housing-data.csv df = pd.read_csv('../data/housing-data.csv') df.head() df.columns # create 2 variables called X and y: # X shall be a matrix with 3 columns (sqft,bdrms,age) # and y shall be a vector with 1 column (price) X = df[['sqft', 'bdrms', 'age']].values y = df['price'].values # create a linear regression model in Keras # with the appropriate number of inputs and output model = Sequential() model.add(Dense(1, input_shape=(3,))) model.compile(Adam(lr=0.8), 'mean_squared_error') # split the data into train and test with a 20% test size X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # train the model on the training set and check its accuracy on training and test set # how's your model doing? Is the loss decreasing? model.fit(X_train, y_train, epochs=50) # check the R2score on training and test set (probably very bad) y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) # try to improve your model with these experiments: # - normalize the input features with one of the rescaling techniques mentioned above # - use a different value for the learning rate of your model # - use a different optimizer df['sqft1000'] = df['sqft']/1000.0 df['age10'] = df['age']/10.0 df['price100k'] = df['price']/1e5 X = df[['sqft1000', 'bdrms', 'age10']].values y = df['price100k'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) model = Sequential() model.add(Dense(1, input_dim=3)) model.compile(Adam(lr=0.1), 'mean_squared_error') model.fit(X_train, y_train, epochs=50) # once you're satisfied with training, check the R2score on the test set y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) ```
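A further experiment, not part of the original exercise, is to standardize the inputs with scikit-learn instead of the manual rescaling above. A possible sketch using the variables already defined in this notebook:

```
from sklearn.preprocessing import StandardScaler

# Standardize the raw features; keep the price rescaling so the loss stays small
X = df[['sqft', 'bdrms', 'age']].values
y = df['price'].values / 1e5

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)   # fit on the training set only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

model = Sequential()
model.add(Dense(1, input_shape=(3,)))
model.compile(Adam(lr=0.1), 'mean_squared_error')
model.fit(X_train_s, y_train, epochs=50, verbose=0)

print("Test R2 with standardized features:\t{:0.3f}".format(r2_score(y_test, model.predict(X_test_s))))
```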
## The 1cycle policy ``` from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai.callbacks import * ``` ## What is 1cycle? This Callback allows us to easily train a network using Leslie Smith's 1cycle policy. To learn more about the 1cycle technique for training neural networks check out [Leslie Smith's paper](https://arxiv.org/pdf/1803.09820.pdf) and for a more graphical and intuitive explanation check out [Sylvain Gugger's post](https://sgugger.github.io/the-1cycle-policy.html). To use our 1cycle policy we will need an [optimum learning rate](https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html). We can find this learning rate by using a learning rate finder which can be called by using [`lr_finder`](/callbacks.lr_finder.html#callbacks.lr_finder). It will do a mock training by going over a large range of learning rates, then plot them against the losses. We will pick a value a bit before the minimum, where the loss still improves. Our graph would look something like this: ![onecycle_finder](imgs/onecycle_finder.png) Here anything between `3x10^-2` and `10^-2` is a good idea. Next we will apply the 1cycle policy with the chosen learning rate as the maximum learning rate. The original 1cycle policy has three steps: 1. We progressively increase our learning rate from lr_max/div_factor to lr_max and at the same time we progressively decrease our momentum from mom_max to mom_min. 2. We do the exact opposite: we progressively decrease our learning rate from lr_max to lr_max/div_factor and at the same time we progressively increase our momentum from mom_min to mom_max. 3. We further decrease our learning rate from lr_max/div_factor to lr_max/(div_factor x 100) and we keep momentum steady at mom_max. This gives the following form: <img src="imgs/onecycle_params.png" alt="1cycle parameteres" width="500"> Unpublished work has shown even better results by using only two phases: the same phase 1, followed by a second phase where we do a cosine annealing from lr_max to 0. The momentum goes from mom_min to mom_max by following the symmetric cosine (see graph a bit below). ## Basic Training The one cycle policy allows to train very quickly, a phenomenon termed [_superconvergence_](https://arxiv.org/abs/1708.07120). To see this in practice, we will first train a CNN and see how our results compare when we use the [`OneCycleScheduler`](/callbacks.one_cycle.html#OneCycleScheduler) with [`fit_one_cycle`](/train.html#fit_one_cycle). ``` path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) model = simple_cnn((3,16,16,2)) learn = Learner(data, model, metrics=[accuracy]) ``` First lets find the optimum learning rate for our comparison by doing an LR range test. ``` learn.lr_find() learn.recorder.plot() ``` Here 5e-2 looks like a good value, a tenth of the minimum of the curve. That's going to be the highest learning rate in 1cycle so let's try a constant training at that value. ``` learn.fit(2, 5e-2) ``` We can also see what happens when we train at a lower learning rate ``` model = simple_cnn((3,16,16,2)) learn = Learner(data, model, metrics=[accuracy]) learn.fit(2, 5e-3) ``` ## Training with the 1cycle policy Now to do the same thing with 1cycle, we use [`fit_one_cycle`](/train.html#fit_one_cycle). ``` model = simple_cnn((3,16,16,2)) learn = Learner(data, model, metrics=[accuracy]) learn.fit_one_cycle(2, 5e-2) ``` This gets the best of both world and we can see how we get a far better accuracy and a far lower loss in the same number of epochs. 
It's possible to get the same amazing results by training at constant learning rates that we progressively diminish, but it will take far longer. Here is the schedule of the learning rates (left) and momentum (right) that the new 1cycle policy uses.

```
learn.recorder.plot_lr(show_moms=True)
show_doc(OneCycleScheduler)
```

Create a [`Callback`](/callback.html#Callback) that handles the hyperparameter settings following the 1cycle policy for `learn`. `lr_max` should be picked with the [`lr_find`](/train.html#lr_find) test. In phase 1, the learning rate goes from `lr_max/div_factor` to `lr_max` linearly while the momentum goes from `moms[0]` to `moms[1]` linearly. In phase 2, the learning rate follows a cosine annealing from `lr_max` to 0, as the momentum goes from `moms[1]` to `moms[0]` with the same annealing.

```
show_doc(OneCycleScheduler.steps, doc_string=False)
```

Build the [`Scheduler`](/callback.html#Scheduler) for the [`Callback`](/callback.html#Callback) according to `steps_cfg`.

### Callback methods

You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.

```
show_doc(OneCycleScheduler.on_train_begin, doc_string=False)
```

Initialize the parameters of a training run for `n_epochs`.

```
show_doc(OneCycleScheduler.on_batch_end, doc_string=False)
```

Prepare the hyperparameters for the next batch.

## Undocumented Methods - Methods moved below this line will intentionally be hidden

## New Methods - Please document or move to the undocumented section

```
show_doc(OneCycleScheduler.on_epoch_end)
```
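As a closing note, the phase boundaries described above map directly onto `fit_one_cycle`'s keyword arguments, so the schedule can be reshaped without touching the callback itself. The values below are illustrative, not tuned:

```
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
# Smaller div_factor (gentler warm-up), narrower momentum range, longer phase 1
learn.fit_one_cycle(2, 5e-2, div_factor=10., moms=(0.9, 0.8), pct_start=0.5)
```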
# StyleGAN2 *Please note that this is an optional notebook that is meant to introduce more advanced concepts, if you're up for a challenge. So, don't worry if you don't completely follow every step! We provide external resources for extra base knowledge required to grasp some components of the advanced material.* In this notebook, you're going to learn about StyleGAN2, from the paper [Analyzing and Improving the Image Quality of StyleGAN](https://arxiv.org/abs/1912.04958) (Karras et al., 2019), and how it builds on StyleGAN. This is the V2 of StyleGAN, so be prepared for even more extraordinary outputs. Here's the quick version: 1. **Demodulation.** The instance normalization of AdaIN in the original StyleGAN actually was producing “droplet artifacts” that made the output images clearly fake. AdaIN is modified a bit in StyleGAN2 to make this not happen. Below, *Figure 1* from the StyleGAN2 paper is reproduced, showing the droplet artifacts in StyleGAN. ![droplet artifacts example](droplet_artifact.png) 2. **Path length regularization.** “Perceptual path length” (or PPL, which you can explore in [another optional notebook](https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/ungradedLab/BQjUq/optional-ppl)) was introduced in the original StyleGAN paper, as a metric for measuring the disentanglement of the intermediate noise space W. PPL measures the change in the output image, when interpolating between intermediate noise vectors $w$. You'd expect a good model to have a smooth transition during interpolation, where the same step size in $w$ maps onto the same amount of perceived change in the resulting image. Using this intuition, you can make the mapping from $W$ space to images smoother, by encouraging a given change in $w$ to correspond to a constant amount of change in the image. This is known as path length regularization, and as you might expect, included as a term in the loss function. This smoothness also made the generator model "significantly easier to invert"! Recall that inversion means going from a real or fake image to finding its $w$, so you can easily adapt the image's styles by controlling $w$. 3. **No progressive growing.** While progressive growing was seemingly helpful for training the network more efficiently and with greater stability at lower resolutions before progressing to higher resolutions, there's actually a better way. Instead, you can replace it with 1) a better neural network architecture with skip and residual connections (which you also see in Course 3 models, Pix2Pix and CycleGAN), and 2) training with all of the resolutions at once, but gradually moving the generator's _attention_ from lower-resolution to higher-resolution dimensions. So in a way, still being very careful about how to handle different resolutions to make training eaiser, from lower to higher scales. There are also a number of performance optimizations, like calculating the regularization less frequently. We won't focus on those in this notebook, but they are meaningful technical contributions. But first, some useful imports: ``` import torch import torch.nn as nn import torch.nn.functional as F from torchvision.utils import make_grid import matplotlib.pyplot as plt def show_tensor_images(image_tensor, num_images=16, size=(3, 64, 64), nrow=3): ''' Function for visualizing images: Given a tensor of images, number of images, size per image, and images per row, plots and prints the images in an uniform grid. 
''' image_tensor = (image_tensor + 1) / 2 image_unflat = image_tensor.detach().cpu().clamp_(0, 1) image_grid = make_grid(image_unflat[:num_images], nrow=nrow, padding=2) plt.imshow(image_grid.permute(1, 2, 0).squeeze()) plt.axis('off') plt.show() ``` ## Fixing Instance Norm One issue with instance normalization is that it can lose important information that is typically communicated by relative magnitudes. In StyleGAN2, it was proposed that the droplet artifects are a way for the network to "sneak" this magnitude information with a single large spike. This issue was also highlighted in the paper which introduced GauGAN, [Semantic Image Synthesis with Spatially-Adaptive Normalization](https://arxiv.org/abs/1903.07291) (Park et al.), earlier in 2019. In that more extreme case, instance normalization could sometimes eliminate all semantic information, as shown in their paper's *Figure 3*: ![information loss by gaugan](gaugan_in.png) While removing normalization is technically possible, it reduces the controllability of the model, a major feature of StyleGAN. Here's one solution from the paper: ### Output Demodulation The first solution notes that the scaling the output of a convolutional layer by style has a consistent and numerically reproducible impact on the standard deviation of its output. By scaling down the standard deviation of the output to 1, the droplet effect can be reduced. More specifically, the style $s$, when applied as a multiple to convolutional weights $w$, resulting in weights $w'_{ijk}=s_i \cdot w_{ijk}$ will have standard deviation $\sigma_j = \sqrt{\sum_{i,k} w'^2_{ijk}}$. One can simply divide the output of the convolution by this factor. However, the authors note that dividing by this factor can also be incorporated directly into the the convolutional weights (with an added $\epsilon$ for numerical stability): $$w''_{ijk}=\frac{w'_{ijk}}{\sqrt{\sum_{i,k} w'^2_{ijk} + \epsilon}}$$ This makes it so that this entire operation can be baked into a single convolutional layer, making it easier to work with, implement, and integrate into the existing architecture of the model. ``` class ModulatedConv2d(nn.Module): ''' ModulatedConv2d Class, extends/subclass of nn.Module Values: channels: the number of channels the image has, a scalar w_dim: the dimension of the intermediate tensor, w, a scalar ''' def __init__(self, w_dim, in_channels, out_channels, kernel_size, padding=1): super().__init__() self.conv_weight = nn.Parameter( torch.randn(out_channels, in_channels, kernel_size, kernel_size) ) self.style_scale_transform = nn.Linear(w_dim, in_channels) self.eps = 1e-6 self.padding = padding def forward(self, image, w): # There is a more efficient (vectorized) way to do this using the group parameter of F.conv2d, # but for simplicity and readibility you will go through one image at a time. images = [] for i, w_cur in enumerate(w): # Calculate the style scale factor style_scale = self.style_scale_transform(w_cur) # Multiply it by the corresponding weight to get the new weights w_prime = self.conv_weight * style_scale[None, :, None, None] # Demodulate the new weights based on the above formula w_prime_prime = w_prime / torch.sqrt( (w_prime ** 2).sum([1, 2, 3])[:, None, None, None] + self.eps ) images.append(F.conv2d(image[i][None], w_prime_prime, padding=self.padding)) return torch.cat(images) def forward_efficient(self, image, w): # Here's the more efficient approach. 
It starts off mostly the same style_scale = self.style_scale_transform(w) w_prime = self.conv_weight[None] * style_scale[:, None, :, None, None] w_prime_prime = w_prime / torch.sqrt( (w_prime ** 2).sum([2, 3, 4])[:, :, None, None, None] + self.eps ) # Now, the trick is that we'll make the images into one image, and # all of the conv filters into one filter, and then use the "groups" # parameter of F.conv2d to apply them all at once batchsize, in_channels, height, width = image.shape out_channels = w_prime_prime.shape[2] # Create an "image" where all the channels of the images are in one sequence efficient_image = image.view(1, batchsize * in_channels, height, width) efficient_filter = w_prime_prime.view(batchsize * out_channels, in_channels, *w_prime_prime.shape[3:]) efficient_out = F.conv2d(efficient_image, efficient_filter, padding=self.padding, groups=batchsize) return efficient_out.view(batchsize, out_channels, *image.shape[2:]) example_modulated_conv = ModulatedConv2d(w_dim=128, in_channels=3, out_channels=3, kernel_size=3) num_ex = 2 image_size = 64 rand_image = torch.randn(num_ex, 3, image_size, image_size) # A 64x64 image with 3 channels rand_w = torch.randn(num_ex, 128) new_image = example_modulated_conv(rand_image, rand_w) second_modulated_conv = ModulatedConv2d(w_dim=128, in_channels=3, out_channels=3, kernel_size=3) second_image = second_modulated_conv(new_image, rand_w) print("Original noise (left), noise after modulated convolution (middle), noise after two modulated convolutions (right)") plt.rcParams['figure.figsize'] = [8, 8] show_tensor_images(torch.stack([rand_image, new_image, second_image], 1).view(-1, 3, image_size, image_size)) ``` ## Path Length Regularization Path length regularization was introduced based on the usefulness of PPL, or perceptual path length, a metric used of evaluating disentanglement proposed in the original StyleGAN paper -- feel free to check out the [optional notebook](https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/ungradedLab/BQjUq/optional-ppl) for a detailed overview! In essence, for a fixed-size step in any direction in $W$ space, the metric attempts to make the change in image space to have a constant magnitude $a$. This is accomplished (in theory) by first taking the Jacobian of the generator with respect to $w$, which is $\mathop{\mathrm{J}_{\mathrm{w}}}={\partial g(\mathrm{w})} / {\partial \mathrm{w}}$. Then, you take the L2 norm of Jacobian matrix and you multiply that by random images (that you sample from a normal distribution, as you often do): $\Vert \mathrm{J}_{\mathrm{w}}^T \mathrm{y} \Vert_2$. This captures the expected magnitude of the change in pixel space. From this, you get a loss term, which penalizes the distance between this magnitude and $a$. The paper notes that this has similarities to spectral normalization (discussed in [another optional notebook](https://www.coursera.org/learn/build-basic-generative-adversarial-networks-gans/ungradedLab/c2FPs/optional-sn-gan) in Course 1), because it constrains multiple norms. An additional optimization is also possible and ultimately used in the StyleGAN2 model: instead of directly computing $\mathrm{J}_{\mathrm{w}}^T \mathrm{y}$, you can more efficiently calculate the gradient $\nabla_{\mathrm{w}} (g(\mathrm{w}) \cdot \mathrm{y})$. 
Finally, a bit of talk on $a$: $a$ is not a fixed constant, but an exponentially decaying average of the magnitudes over various runs -- as with most times you see (decaying) averages being used, this is to smooth out the value of $a$ across multiple iterations, not just dependent on one. Notationally, with decay rate $\gamma$, $a$ at the next iteration $a_{t+1} = {a_t} * (1 - \gamma) + \Vert \mathrm{J}_{\mathrm{w}}^T \mathrm{y} \Vert_2 * \gamma$. However, for your one example iteration you can treat $a$ as a constant for simplicity. There is also an example of an update of $a$ after the calculation of the loss, so you can see what $a_{t+1}$ looks like with exponential decay. ``` # For convenience, we'll define a very simple generator here: class SimpleGenerator(nn.Module): ''' SimpleGenerator Class, for path length regularization demonstration purposes Values: channels: the number of channels the image has, a scalar w_dim: the dimension of the intermediate tensor, w, a scalar ''' def __init__(self, w_dim, in_channels, hid_channels, out_channels, kernel_size, padding=1, init_size=64): super().__init__() self.w_dim = w_dim self.init_size = init_size self.in_channels = in_channels self.c1 = ModulatedConv2d(w_dim, in_channels, hid_channels, kernel_size) self.activation = nn.ReLU() self.c2 = ModulatedConv2d(w_dim, hid_channels, out_channels, kernel_size) def forward(self, w): image = torch.randn(len(w), self.in_channels, self.init_size, self.init_size).to(w.device) y = self.c1(image, w) y = self.activation(y) y = self.c2(y, w) return y from torch.autograd import grad def path_length_regulization_loss(generator, w, a): # Generate the images from w fake_images = generator(w) # Get the corresponding random images random_images = torch.randn_like(fake_images) # Output variation that we'd like to regularize output_var = (fake_images * random_images).sum() # Calculate the gradient with respect to the inputs cur_grad = grad(outputs=output_var, inputs=w)[0] # Calculate the distance from a penalty = (((cur_grad - a) ** 2).sum()).sqrt() return penalty, output_var simple_gen = SimpleGenerator(w_dim=128, in_channels=3, hid_channels=64, out_channels=3, kernel_size=3) samples = 10 test_w = torch.randn(samples, 128).requires_grad_() a = 10 penalty, variation = path_length_regulization_loss(simple_gen, test_w, a=a) decay = 0.001 # How quickly a should decay new_a = a * (1 - decay) + variation * decay print(f"Old a: {a}; new a: {new_a.item()}") ``` ## No More Progressive Growing While the concepts behind progressive growing remain, you get to see how that is revamped and beefed up in StyleGAN2. This starts with generating all resolutions of images from the very start of training. You might be wondering why they didn't just do this in the first place: in the past, this has generally been unstable to do. However, by using residual or skip connections (there are two variants that both do better than without them), StyleGAN2 manages to replicate many of the dynamics of progressive growing in a less explicit way. Three architectures were considered for StyleGAN2 to replace the progressive growing. Note that in the following figure, *tRGB* and *fRGB* refer to the $1 \times 1$ convolutions which transform the noise with some number channels at a given layer into a three-channel image for the generator, and vice versa for the discriminator. ![architectures considered](stylegan_architectures.png) *The set of architectures considered for StyleGAN2 (from the paper). 
Ultimately, the skip generator and residual discriminator (highlighted in green) were chosen*. ### Option a: MSG-GAN [MSG-GAN](https://arxiv.org/abs/1903.06048) (from Karnewar and Wang 2019), proposed a somewhat natural approach: generate all resolutions of images, but also directly pass each corresponding resolution to a block of the discriminator responsible for dealing with that resolution. ### Option b: Skip Connections In the skip-connection approach, each block takes the previous noise as input and generates the next resolution of noise. For the generator, each noise is converted to an image, upscaled to the maximum size, and then summed together. For the discriminator, the images are downsampled to each block's size and converted to noises. ### Option c: Residual Nets In the residual network approach, each block adds residual detail to the noise, and the image conversion happens at the end for the generator and at the start for the discriminator. ### StyleGAN2: Skip Generator, Residual Discriminator By experiment, the skip generator and residual discriminator were chosen. One interesting effect is that, as the images for the skip generator are additive, you can explicitly see the contribution from each of them, and measure the magnitude of each block's contribution. If you're not 100% sure how to implement skip and residual models yet, don't worry - you'll get a lot of practice with that in Course 3! ![contribution by different resolutions over time](noise_contributions.png) *Figure 8 from StyleGAN2 paper, showing generator contributions by different resolution blocks of the generator over time. The y-axis is the standard deviation of the contributions, and the x-axis is the number of millions of images that the model has been trained on (training progress).* Now, you've seen the primary changes, and you understand the current state-of-the-art in image generation, StyleGAN2, congratulations! If you're the type of person who reads through the optional notebooks for fun, maybe you'll make the next state-of-the-art! Can't wait to cover your GAN in a new notebook :)
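As a footnote to the skip-generator description above, the upsample-and-sum of the per-resolution *tRGB* outputs can be sketched in a few lines (a simplified illustration, not the full StyleGAN2 generator):

```
# Hypothetical tRGB outputs of successive generator blocks at 16x16, 32x32 and 64x64
rgb_outputs = [torch.randn(1, 3, res, res) for res in (16, 32, 64)]

final_size = rgb_outputs[-1].shape[-1]
image = torch.zeros(1, 3, final_size, final_size)
for rgb in rgb_outputs:
    # Upsample each intermediate RGB image to the final resolution and accumulate
    image = image + F.interpolate(rgb, size=(final_size, final_size),
                                  mode='bilinear', align_corners=False)
print(image.shape)
```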
# Project 1: Navigation ### Test 3 - DDQN model with Prioritized Experience Replay <sub>Uirá Caiado. August 23, 2018<sub> #### Abstract _In this notebook, I will use the Unity ML-Agents environment to train a DDQN model with PER for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893)._ ## 1. What we are going to test Quoting the seminal [Prioritized Experience Replay](https://arxiv.org/abs/1511.05952) paper, from the Deep Mind team, experience replay lets online reinforcement learning agents remember and reuse experiences from the past. Bellow, I am going to test my implementation of the PER buffer in conjunction to Double DQN. Thus, let's begin by checking the environment where I am going to run these tests. ``` %load_ext version_information %version_information numpy, unityagents, torch, matplotlib, pandas, gym ``` Now, let's define some meta variables to use in this notebook ``` import os fig_prefix = 'figures/2018-08-23-' data_prefix = '../data/2018-08-23-' s_currentpath = os.getcwd() ``` Also, let's import some of the necessary packages for this experiment. ``` from unityagents import UnityEnvironment import sys import os sys.path.append("../") # include the root directory as the main import eda import pandas as pd import numpy as np ``` ## 2. Training the agent The environment used for this project is the Udacity version of the Banana Collector environment, from [Unity](https://youtu.be/heVMs3t9qSk). The goal of the agent is to collect as many yellow bananas as possible while avoiding blue bananas. Bellow, we are going to start this environment. ``` env = UnityEnvironment(file_name="../Banana_Linux_NoVis/Banana.x86_64") ``` Unity Environments contain brains which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. ``` # get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name] ``` Now, we are going to collect some basic information about the environment. ``` # reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of actions action_size = brain.vector_action_space_size # examine the state space state = env_info.vector_observations[0] state_size = len(state) ``` And finally, we are going to train the model. We will consider that this environment is solved if the agent is able to receive an average reward (over 100 episodes) of at least +13. ``` %%time import gym import pickle import random import torch import numpy as np from collections import deque from drlnd.dqn_agent import DQNAgent, DDQNAgent, DDQNPREAgent n_episodes = 2000 eps_start = 1. 
eps_end=0.01 eps_decay=0.995 max_t = 1000 s_model = 'ddqnpre' agent = DDQNPREAgent(state_size=state_size, action_size=action_size, seed=0) scores = [] # list containing scores from each episode scores_std = [] # List containing the std dev of the last 100 episodes scores_avg = [] # List containing the mean of the last 100 episodes scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): env_info = env.reset(train_mode=True)[brain_name] # reset the environment state = env_info.vector_observations[0] # get the current state score = 0 # initialize the score for t in range(max_t): # action = np.random.randint(action_size) # select an action action = agent.act(state, eps) env_info = env.step(action)[brain_name] # send the action to the environment next_state = env_info.vector_observations[0] # get the next state reward = env_info.rewards[0] # get the reward done = env_info.local_done[0] # see if episode has finished agent.step(state, action, reward, next_state, done) score += reward # update the score state = next_state # roll over the state to next time step if done: # exit loop if episode finished break scores_window.append(score) # save most recent score scores.append(score) # save most recent score scores_std.append(np.std(scores_window)) # save most recent std dev scores_avg.append(np.mean(scores_window)) # save most recent std dev eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=13.0: s_msg = '\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}' print(s_msg.format(i_episode, np.mean(scores_window))) torch.save(agent.qnet.state_dict(), '%scheckpoint_%s.pth' % (data_prefix, s_model)) break # save data to use latter d_data = {'episodes': i_episode, 'scores': scores, 'scores_std': scores_std, 'scores_avg': scores_avg, 'scores_window': scores_window} pickle.dump(d_data, open('%ssim-data-%s.data' % (data_prefix, s_model), 'wb')) ``` ## 3. Results The agent using Double DQN with Prioritized Experience Replay was able to solve the Banana Collector environment in 562 episodes of 1000 steps, each. ``` import pickle d_data = pickle.load(open('../data/2018-08-23-sim-data-ddqnpre.data', 'rb')) s_msg = 'Environment solved in {:d} episodes!\tAverage Score: {:.2f} +- {:.2f}' print(s_msg.format(d_data['episodes'], np.mean(d_data['scores_window']), np.std(d_data['scores_window']))) ``` Now, let's plot the rewards per episode. In the right panel, we will plot the rolling average score over 100 episodes $\pm$ its standard deviation, as well as the goal of this project (13+ on average over the last 100 episodes). 
```
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline

#recover data
na_raw = np.array(d_data['scores'])
na_mu = np.array(d_data['scores_avg'])
na_sigma = np.array(d_data['scores_std'])

# plot the scores
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5), sharex=True, sharey=True)

# plot the scores by episode
ax1.plot(np.arange(len(na_raw)), na_raw)
ax1.set_xlim(0, len(na_raw)+1)
ax1.set_ylabel('Score')
ax1.set_xlabel('Episode #')
ax1.set_title('raw scores')

# plot the average of these scores
ax2.axhline(y=13., xmin=0.0, xmax=1.0, color='r', linestyle='--', linewidth=0.7, alpha=0.9)
ax2.plot(np.arange(len(na_mu)), na_mu)
ax2.fill_between(np.arange(len(na_mu)), na_mu+na_sigma, na_mu-na_sigma, facecolor='gray', alpha=0.1)
ax2.set_ylabel('Average Score')
ax2.set_xlabel('Episode #')
ax2.set_title('average scores')

f.tight_layout()
# f.savefig(fig_prefix + 'ddqnpre-learning-curve.eps', format='eps', dpi=1200)
f.savefig(fig_prefix + 'ddqnpre-learning-curve.jpg', format='jpg')
env.close()
```

## 4. Conclusion

The Double Deep Q-learning agent using Prioritized Experience Replay was able to solve the environment in 562 episodes, the worst performance among all implementations. However, it is worth noting that this implementation presents the smoothest learning curve.

```
import pickle
d_ddqnper = pickle.load(open('../data/2018-08-23-sim-data-ddqnpre.data', 'rb'))
d_ddqn = pickle.load(open('../data/2018-08-24-sim-data-ddqn.data', 'rb'))
d_dqn = pickle.load(open('../data/2018-08-24-sim-data-dqn.data', 'rb'))

import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline

def recover_data(d_data):
    #recover data
    na_raw = np.array(d_data['scores'])
    na_mu = np.array(d_data['scores_avg'])
    na_sigma = np.array(d_data['scores_std'])
    return na_raw, na_mu, na_sigma

# plot the scores
f, ax2 = plt.subplots(1, 1, figsize=(8, 4), sharex=True, sharey=True)

# note: the labels must be paired with the matching data dictionaries
for s_model, d_data in zip(['DDQN with PER', 'DDQN', 'DQN'], [d_ddqnper, d_ddqn, d_dqn]):
    na_raw, na_mu, na_sigma = recover_data(d_data)
    if s_model == 'DDQN with PER':
        ax2.set_xlim(0, 572)
    # plot the average of these scores
    ax2.axhline(y=13., xmin=0.0, xmax=1.0, color='r', linestyle='--', linewidth=0.7, alpha=0.9)
    ax2.plot(np.arange(len(na_mu)), na_mu, label=s_model)
    # ax2.fill_between(np.arange(len(na_mu)), na_mu+na_sigma, na_mu-na_sigma, alpha=0.15)

# format axis
ax2.legend()
ax2.set_title('Learning Curves')
ax2.set_ylabel('Average Score in 100 episodes')
ax2.set_xlabel('Episode #')

# Shrink current axis's height by 10% on the bottom
box = ax2.get_position()
ax2.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])

# Put a legend below current axis
lgd = ax2.legend(loc='upper center', bbox_to_anchor=(0.5, -0.10), fancybox=False, shadow=False, ncol=3)

f.tight_layout()
f.savefig(fig_prefix + 'final-comparition-2.eps', format='eps', bbox_extra_artists=(lgd,), bbox_inches='tight', dpi=1200)
```

Finally, let's compare the score distributions generated by the agents. I am going to perform a one-sided Welch's unequal variances t-test of the hypothesis that the DDQN model has an expected score higher than the other agents on the final 100 episodes of each experiment. As the implementation of the t-test in [Scipy](https://goo.gl/gs222c) assumes a two-sided test, to perform the one-sided test we divide the p-value by 2, compare it to a critical value of 0.05, and require that the t-value is greater than zero.
```
import pandas as pd

def extract_info(s, d_data):
    return {'model': s,
            'episodes': d_data['episodes'],
            'mean_score': np.mean(d_data['scores_window']),
            'std_score': np.std(d_data['scores_window'])}

l_data = [extract_info(s, d) for s, d in zip(['DDQN with PER', 'DDQN', 'DQN'],
                                             [d_ddqnper, d_ddqn, d_dqn])]

df = pd.DataFrame(l_data)
df.index = df.model
df.drop('model', axis=1, inplace=True)
print(df.sort_values(by='episodes'))

import scipy

# Leftover snippet from a different analysis (d_rtn_test_1r is not defined in this notebook),
# kept here commented out:
# a = [float(pd.DataFrame(d_dqn['scores']).iloc[-1].values)] * 2
# b = list(pd.DataFrame(d_rtn_test_1r['pnl']['test']).fillna(method='ffill').iloc[-1].values)
# tval, p_value = scipy.stats.ttest_ind(a, b, equal_var=False)

# perform Welch's t-tests between the agents' scores
tval, p_value = scipy.stats.ttest_ind(d_ddqn['scores'], d_dqn['scores'], equal_var=False)
print("DDQN vs. DQN: t-value = {:0.6f}, p-value = {:0.8f}".format(tval, p_value))

tval, p_value = scipy.stats.ttest_ind(d_ddqn['scores'], d_ddqnper['scores'], equal_var=False)
print("DDQN vs. DDQNPRE: t-value = {:0.6f}, p-value = {:0.8f}".format(tval, p_value))
```

There was no significant difference between the performances of the agents.
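The one-sided decision rule described above (halve Scipy's two-sided p-value and require a positive t-value) can be made explicit; a small helper sketch using the dictionaries already loaded:

```
def one_sided_welch(sample_a, sample_b, alpha=0.05):
    # H1: mean(sample_a) > mean(sample_b), using Welch's unequal-variances t-test
    tval, p_two_sided = scipy.stats.ttest_ind(sample_a, sample_b, equal_var=False)
    p_one_sided = p_two_sided / 2
    return tval, p_one_sided, (tval > 0) and (p_one_sided < alpha)

tval, p, significant = one_sided_welch(d_ddqn['scores'], d_dqn['scores'])
print(f"DDQN > DQN? t = {tval:0.4f}, one-sided p = {p:0.6f}, significant = {significant}")
```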
# How do ratings behave after users have seen many captions? This notebook looks at the "vote decay" of users. The New Yorker caption contest organizer, Bob Mankoff, has received many emails like the one below (name/personal details left out for anonymity) > Here's my issue. > > First time I encounter something, I might say it's funny. > > Then it comes back in many forms over and over and it's no longer funny and I wish I could go back to the first one and say it's not funny. > > But it's funny, and then I can't decide whether to credit everyone with funny or keep hitting unfunny. What I really like to find out is who submitted it first, but often it's slightly different and there may be a best version. Auggh! > > How should we do this??? We can investigate this: we have all the data at hand. We record the timestamp, participant ID and their rating for a given caption. So let's see how votes go after a user has seen $n$ captions! ``` %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt plt.style.use('seaborn') import caption_contest_data as ccd ``` ## Reading in data Let's read in the data. As the last column can contain a non-escaped comma, we have to fix that before doing any analysis. Note that two versions of this notebook exist (the previous notebook can be found in [43bc5d]). This highlights some of the differences required to read in the earlier datasets. [43bc5d]:https://github.com/nextml/caption-contest-data/commit/43bc5d23ee287b8b34cc4eb0181484bd21bbd341 ``` contest = 540 responses = ccd.responses(contest) print(len(responses)) responses.head() ``` ## Seeing how many captions a user has seen This is the workhorse of the notebook: it sees how many captions one participant has seen. I sorted by timestamp (and with an actual timestamp, not a str) to collect the ratings in the order a user has seen. I do not assume that only one user answers at a time. ``` last_id = None i = 0 num_responses = [] captions_seen = [] responses = responses.sort_values(by='timestamp_query_generated') # responses = responses[0:1000] # debug captions_seen_by = {} captions_seen = [] for _, response in responses.iterrows(): id_, rating = response['participant_uid'], response['target_reward'] if id_ not in captions_seen_by: captions_seen_by[id_] = 0 captions_seen_by[id_] += 1 captions_seen += [captions_seen_by[id_]] num_responses += [i] responses['number of captions seen'] = captions_seen responses.head() ``` ## Viewing the data Now let's format the data to view it. We can view the data in two ways: as we only have three rating values, we can view the probability of a person rating 1, 2 or 3, and can also view the mean. In this, we rely on `pd.pivot_table`. This can take DataFrame that looks like a list of dictionaries and compute `aggfunc` (by default `np.mean`) for all items that contain common keys (indicated by `index` and `columns`). It's similar to Excel's pivot table functionality. 
### Probability of rating {1, 2, 3} ``` def prob(x): n = len(x) ret = {'n': n} ret.update({name: np.sum(x == i) for name, i in [('unfunny', 1), ('somewhat funny', 2), ('funny', 3)]}) return ret probs = responses.pivot_table(index='number of captions seen', columns='alg_label', values='target_reward', aggfunc=prob) probs.head() d = {label: dict(probs[label]) for label in ['RandomSampling']} for label in d.keys(): for n in d[label].keys(): if d[label][n] is None: continue for rating in ['unfunny', 'somewhat funny', 'funny']: d[label][n][rating] = d[label][n][rating] / d[label][n]['n'] df = pd.DataFrame(d['RandomSampling']).T df = pd.concat({'RandomSampling': df}, axis=1) df.head() plt.style.use("default") fig, axs = plt.subplots(figsize=(8, 4), ncols=2) alg = "RandomSampling" show = df[alg].copy() show["captions seen"] = show.index for y in ["funny", "somewhat funny", "unfunny"]: show.plot(x="captions seen", y=y, ax=axs[0]) show.plot(x="captions seen", y="n", ax=axs[1]) for ax in axs: ax.set_xlim(0, 100) ax.grid(linestyle='--', alpha=0.5) plt.style.use("default") def plot(alg): fig = plt.figure(figsize=(10, 5)) ax = plt.subplot(1, 2, 1) df[alg][['unfunny', 'somewhat funny', 'funny']].plot(ax=ax) plt.xlim(0, 100) plt.title('{} ratings\nfor contest {}'.format(alg, contest)) plt.ylabel('Probability of rating') plt.xlabel('Number of captions seen') plt.grid(linestyle="--", alpha=0.6) ax = plt.subplot(1, 2, 2) df[alg]['n'].plot(ax=ax, logy=False) plt.ylabel('Number of users') plt.xlabel('Number of captions seen, $n$') plt.title('Number of users that have\nseen $n$ captions') plt.xlim(0, 100) plt.grid(linestyle="--", alpha=0.6) for alg in ['RandomSampling']: fig = plot(alg) plt.show() ```
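Besides the per-rating probabilities, the mean rating as a function of the number of captions seen can be read off the same pivot, for example:

```
# Mean rating (1 = unfunny, 2 = somewhat funny, 3 = funny) by number of captions seen
mean_rating = responses.pivot_table(index='number of captions seen',
                                    values='target_reward', aggfunc=np.mean)

mean_rating.plot(legend=False)
plt.xlim(0, 100)
plt.xlabel('Number of captions seen')
plt.ylabel('Mean rating')
plt.title(f'Mean rating vs. captions seen (contest {contest})')
plt.show()
```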
# 使用预训练的词向量完成文本分类任务 **作者**: [fiyen](https://github.com/fiyen)<br> **日期**: 2021.10<br> **摘要**: 本示例教程将会演示如何使用飞桨内置的Imdb数据集,并使用预训练词向量进行文本分类。 ## 一、环境设置 本教程基于Paddle 2.2.0-rc0 编写,如果你的环境不是本版本,请先参考官网[安装](https://www.paddlepaddle.org.cn/install/quick) Paddle 2.2.0-rc0。 ``` import paddle from paddle.io import Dataset import numpy as np import paddle.text as text import random print(paddle.__version__) ``` ## 二、数据载入 在这个示例中,将使用 Paddle 2.2.0-rc0 完成针对 Imdb 数据集(电影评论情感二分类数据集)的分类训练和测试。Imdb 将直接调用自 Paddle 2.2.0-rc0,同时, 利用预训练的词向量([GloVe embedding](http://nlp.stanford.edu/projects/glove/))完成任务。 ``` print('自然语言相关数据集:', paddle.text.__all__) ``` 由于 Paddle 2.2.0-rc0 提供了经过处理的Imdb数据集,可以方便地调用所需要的数据实例,省去了数据预处理的麻烦。目前, Paddle 2.2.0-rc0 以及内置的高质量 数据集包括 Conll05st、Imdb、Imikolov、Movielens、HCIHousing、WMT14 和 WMT16 等,未来还将提供更多常用数据集的调用接口。 以下定义了调用 imdb 训练集合测试集的方法。其中,cutoff 定义了构建词典的截止大小,即数据集中出现频率在 cutoff 以下的不予考虑;mode 定义了返回的数据用于何种用途(test: 测试集,train: 训练集)。 ### 2.1 定义数据集 ``` imdb_train = text.Imdb(mode='train', cutoff=150) imdb_test = text.Imdb(mode='test', cutoff=150) ``` 调用 Imdb 得到的是经过编码的数据集,每个 term 对应一个唯一 id,映射关系可以通过 imdb_train.word_idx 查看。将每一个样本即一条电影评论,表示成 id 序列。可以检查一下以上生成的数据内容: ``` print("训练集样本数量: %d; 测试集样本数量: %d" % (len(imdb_train), len(imdb_test))) print(f"样本标签: {set(imdb_train.labels)}") print(f"样本字典: {list(imdb_train.word_idx.items())[:10]}") print(f"单个样本: {imdb_train.docs[0]}") print(f"最小样本长度: {min([len(x) for x in imdb_train.docs])};最大样本长度: {max([len(x) for x in imdb_train.docs])}") ``` 对于训练集,将数据的顺序打乱,以优化将要进行的分类模型训练的效果。 ``` shuffle_index = list(range(len(imdb_train))) random.shuffle(shuffle_index) train_x = [imdb_train.docs[i] for i in shuffle_index] train_y = [imdb_train.labels[i] for i in shuffle_index] test_x = imdb_test.docs test_y = imdb_test.labels ``` 从样本长度上可以看到,每个样本的长度是不相同的。然而,在模型的训练过程中,需要保证每个样本的长度相同,以便于构造矩阵进行批量运算。 因此,需要先对所有样本进行填充或截断,使样本的长度一致。 ``` def vectorizer(input, label=None, length=2000): if label is not None: for x, y in zip(input, label): yield np.array((x + [0]*length)[:length]).astype('int64'), np.array([y]).astype('int64') else: for x in input: yield np.array((x + [0]*length)[:length]).astype('int64') ``` ### 2.2 载入预训练向量 以下给出的文件较小,可以直接完全载入内存。对于大型的预训练向量,无法一次载入内存的,可以采用分批载入,并行处理的方式进行匹配。 此外,AIStudio 中提供了 glove.6B 数据集挂载,用户可在 AIStudio 中直接载入数据集并解压。 ``` # 下载并解压预训练向量 !wget http://nlp.stanford.edu/data/glove.6B.zip !unzip -q glove.6B.zip glove_path = "./glove.6B.100d.txt" embeddings = {} ``` 观察上述GloVe预训练向量文件一行的数据: ``` # 使用utf8编码解码 with open(glove_path, encoding='utf-8') as gf: line = gf.readline() print("GloVe单行数据:'%s'" % line) ``` 可以看到,每一行都以单词开头,其后接上该单词的向量值,各个值之间用空格隔开。基于此,可以用如下方法得到所有词向量的字典。 ``` with open(glove_path, encoding='utf-8') as gf: for glove in gf: word, embedding = glove.split(maxsplit=1) embedding = [float(s) for s in embedding.split(' ')] embeddings[word] = embedding print("预训练词向量总数:%d" % len(embeddings)) print(f"单词'the'的向量是:{embeddings['the']}") ``` ### 3.3 给数据集的词表匹配词向量 接下来,提取数据集的词表,需要注意的是,词表中的词编码的先后顺序是按照词出现的频率排列的,频率越高的词编码值越小。 ``` word_idx = imdb_train.word_idx vocab = [w for w in word_idx.keys()] print(f"词表的前5个单词:{vocab[:5]}") print(f"词表的后5个单词:{vocab[-5:]}") ``` 观察词表的后5个单词,发现最后一个词是"\<unk\>",这个符号代表所有词表以外的词。另外,对于形式b'the',是字符串'the' 的二进制编码形式,使用中注意使用b'the'.decode()来进行转换('\<unk\>'并没有进行二进制编码,注意区分)。 接下来,给词表中的每个词匹配对应的词向量。预训练词向量可能没有覆盖数据集词表中的所有词,对于没有的词,设该词的词 向量为零向量。 ``` # 定义词向量的维度,注意与预训练词向量保持一致 dim = 100 vocab_embeddings = np.zeros((len(vocab), dim)) for ind, word in enumerate(vocab): if word != '<unk>': word = word.decode() embedding = embeddings.get(word, np.zeros((dim,))) vocab_embeddings[ind, :] = embedding 
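# (hedged addition, not in the original notebook) quick sanity check:
# count how many vocabulary words were actually found in the GloVe file,
# i.e. how many rows of vocab_embeddings are non-zero.
covered = sum(1 for w in vocab if w != '<unk>' and w.decode() in embeddings)
print("vocabulary words covered by GloVe: %d / %d" % (covered, len(vocab)))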
``` ## 四、组网 ### 4.1 构建基于预训练向量的Embedding 对于预训练向量的Embedding,一般期望它的参数不再变动,所以要设置trainable=False。如果希望在此基础上训练参数,则需要 设置trainable=True。 ``` pretrained_attr = paddle.ParamAttr(name='embedding', initializer=paddle.nn.initializer.Assign(vocab_embeddings), trainable=False) embedding_layer = paddle.nn.Embedding(num_embeddings=len(vocab), embedding_dim=dim, padding_idx=word_idx['<unk>'], weight_attr=pretrained_attr) ``` ### 4.2 构建分类器 这里,构建简单的基于一维卷积的分类模型,其结构为:Embedding->Conv1D->Pool1D->Linear。在定义Linear时,由于需要知 道输入向量的维度,可以按照公式[官方文档](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-beta/api/paddle/nn/layer/conv/Conv2d_cn.html) 来进行计算。这里给出计算的函数如下: ``` def cal_output_shape(input_shape, out_channels, kernel_size, stride, padding=0, dilation=1): return out_channels, int((input_shape + 2*padding - (dilation*(kernel_size - 1) + 1)) / stride) + 1 # 定义每个样本的长度 length = 2000 # 定义卷积层参数 kernel_size = 5 out_channels = 10 stride = 2 padding = 0 output_shape = cal_output_shape(length, out_channels, kernel_size, stride, padding) output_shape = cal_output_shape(output_shape[1], output_shape[0], 2, 2, 0) sim_model = paddle.nn.Sequential(embedding_layer, paddle.nn.Conv1D(in_channels=dim, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, data_format='NLC', bias_attr=True), paddle.nn.ReLU(), paddle.nn.MaxPool1D(kernel_size=2, stride=2), paddle.nn.Flatten(), paddle.nn.Linear(in_features=np.prod(output_shape), out_features=2, bias_attr=True), paddle.nn.Softmax()) paddle.summary(sim_model, input_size=(-1, length), dtypes='int64') ``` ### 4.3 读取数据,进行训练 可以利用飞桨2.0的io.Dataset模块来构建一个数据的读取器,方便地将数据进行分批训练。 ``` class DataReader(Dataset): def __init__(self, input, label, length): self.data = list(vectorizer(input, label, length=length)) def __getitem__(self, idx): return self.data[idx] def __len__(self): return len(self.data) # 定义输入格式 input_form = paddle.static.InputSpec(shape=[None, length], dtype='int64', name='input') label_form = paddle.static.InputSpec(shape=[None, 1], dtype='int64', name='label') model = paddle.Model(sim_model, input_form, label_form) model.prepare(optimizer=paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()), loss=paddle.nn.loss.CrossEntropyLoss(), metrics=paddle.metric.Accuracy()) # 分割训练集和验证集 eval_length = int(len(train_x) * 1/4) model.fit(train_data=DataReader(train_x[:-eval_length], train_y[:-eval_length], length), eval_data=DataReader(train_x[-eval_length:], train_y[-eval_length:], length), batch_size=32, epochs=10, verbose=1) ``` ## 五、评估效果并用模型预测 ``` # 评估 model.evaluate(eval_data=DataReader(test_x, test_y, length), batch_size=32, verbose=1) # 预测 true_y = test_y[100:105] + test_y[-110:-105] pred_y = model.predict(DataReader(test_x[100:105] + test_x[-110:-105], None, length), batch_size=1) test_x_doc = test_x[100:105] + test_x[-110:-105] # 标签编码转文字 label_id2text = {0: 'positive', 1: 'negative'} for index, y in enumerate(pred_y[0]): print("原文本:%s" % ' '.join([vocab[i].decode() for i in test_x_doc[index] if i < len(vocab) - 1])) print("预测的标签是:%s, 实际标签是:%s" % (label_id2text[np.argmax(y)], label_id2text[true_y[index]])) ```
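As a small hedged follow-up (reusing `pred_y` and `true_y` exactly as defined in the prediction cell above), the ten sampled reviews can also be summarised as a single accuracy number:

```
# illustrative check: how many of the ten sampled reviews were classified correctly
pred_labels = [int(np.argmax(y)) for y in pred_y[0]]
correct = sum(int(p == t) for p, t in zip(pred_labels, true_y))
print("correct on sampled reviews: %d / %d" % (correct, len(true_y)))
```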
# House Price Prediction <p><b>Status: <span style=color:orange;>In process</span></b></p> ##### LOAD THE FEATURE DATA ``` import pandas as pd import numpy as np X = pd.read_csv('../../../data/preprocessed_data/X.csv', sep=',') print ('Feature data, shape:\nX: {}'.format(X.shape)) X.head() y = pd.read_csv('../../../data/preprocessed_data/y.csv', sep=',', header=None) print ('Target data, shape:\ny: {}'.format(y.shape)) y.head() ``` ##### SPLIT THE DATA ``` from sklearn.model_selection import train_test_split # set the seed for reproducibility np.random.seed(127) # split the dataset into 2 training and 2 testing sets X_train, X_test, y_train, y_test = train_test_split(X.values, y.values, test_size=0.2, random_state=13) print('Data shapes:\n') print('X_train : {}\ny_train : {}\n\nX_test : {}\ny_test : {}'.format(X_train.shape, y_train.shape, X_test.shape, y_test.shape)) ``` ##### DEFINE NETWORK PARAMETERS ``` # define number of attributes n_features = X_train.shape[1] n_target = 1 # quantitative data # count number of samples in each set of data n_train = X_train.shape[0] n_test = X_test.shape[0] # define amount of neurons n_layer_in = n_features # 12 neurons in input layer n_layer_h1 = 5 # first hidden layer n_layer_h2 = 5 # second hidden layer n_layer_out = n_target # 1 neurons in output layer sigma_init = 0.01 # For randomized initialization ``` ##### RESET TENSORFLOW GRAPH IF THERE IS ANY ``` import tensorflow as tf # this will set up a specific seed in order to control the output # and get more homogeneous results though every model variation def reset_graph(seed=127): tf.reset_default_graph() tf.set_random_seed(seed) np.random.seed(seed) reset_graph() ``` ##### MODEL ARCHITECTURE ``` # create symbolic variables X = tf.placeholder(tf.float32, [None, n_layer_in], name="input") Y = tf.placeholder(tf.float32, [None, n_layer_out], name="output") # deploy the variables that will store the weights W = { 'W1': tf.Variable(tf.random_normal([n_layer_in, n_layer_h1], stddev = sigma_init), name='W1'), 'W2': tf.Variable(tf.random_normal([n_layer_h1, n_layer_h2], stddev = sigma_init), name='W2'), 'W3': tf.Variable(tf.random_normal([n_layer_h2, n_layer_out], stddev = sigma_init), name='W3') } # deploy the variables that will store the bias b = { 'b1': tf.Variable(tf.random_normal([n_layer_h1]), name='b1'), 'b2': tf.Variable(tf.random_normal([n_layer_h2]), name='b2'), 'b3': tf.Variable(tf.random_normal([n_layer_out]), name='b3') } # this will create the model architecture and output the result def model_MLP(_X, _W, _b): with tf.name_scope('hidden_1'): layer_h1 = tf.nn.selu(tf.add(tf.matmul(_X,_W['W1']), _b['b1'])) with tf.name_scope('hidden_2'): layer_h2 = tf.nn.selu(tf.add(tf.matmul(layer_h1,_W['W2']), _b['b2'])) with tf.name_scope('layer_output'): layer_out = tf.add(tf.matmul(layer_h2,_W['W3']), _b['b3']) return layer_out # these are the predictions with tf.name_scope("MLP"): y_pred = model_MLP(X, W, b) ``` ##### DEFINE LEARNING RATE ``` learning_rate = 0.4 # CHOOSE A DECAYING METHOD IN HERE model_decay = 'none' # [exponential | inverse_time | natural_exponential | polynomial | none] global_step = tf.Variable(0, trainable=False) decay_rate = 0.90 decay_step = 10000 if model_decay == 'exponential': learning_rate = tf.train.exponential_decay(learning_rate, global_step, decay_step, decay_rate) elif model_decay == 'inverse_time': learning_rate = tf.train.inverse_time_decay(learning_rate, global_step, decay_step, decay_rate) elif model_decay == 'natural_exponential': learning_rate = 
tf.train.natural_exp_decay(learning_rate, global_step, decay_step, decay_rate) elif model_decay == 'polynomial': end_learning_rate = 0.001 learning_rate = tf.train.polynomial_decay(learning_rate, global_step, decay_step, end_learning_rate, power=0.5) else: decay_rate = 1.0 learning_rate = tf.train.exponential_decay(learning_rate, global_step, decay_step, decay_rate) print('Decaying Learning Rate : ', model_decay) ``` ##### DEFINE MODEL TRAINING AND MEASURE PERFORMANCE ``` with tf.name_scope("loss"): loss = tf.square(Y - y_pred) # squared error #loss = tf.nn.softmax(logits=y_pred) # softmax #loss = tf.nn.log_softmax(logits=y_pred) # log-softmax #loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=y_pred, dim=-1) # cross-entropy #loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=Y, logits=y_pred) # sigmoid-cross-entropy #loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y, logits=y_pred) # sparse-softmax-cross-entropy loss = tf.reduce_mean(loss, name='MSE') with tf.name_scope("train"): #optimizer = tf.train.GradientDescentOptimizer(learning_rate) # SGD #optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,momentum=0.9) # MOMENTUM #optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate) # ADAGRAD optimizer = tf.train.AdadeltaOptimizer(learning_rate=learning_rate) # ADADELTA #optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate, decay=1) # RMS training_op = optimizer.minimize(loss, global_step=global_step) # Create summaries tf.summary.scalar("loss", loss) tf.summary.scalar("learn_rate", learning_rate) # Merge all summaries into a single op to generate the summary data merged_summary_op = tf.summary.merge_all() ``` ##### DEFINE DIRECTORIES FOR RESULTS ``` import sys import shutil from datetime import datetime # set up the directory to store the results for tensorboard now = datetime.utcnow().strftime('%Y%m%d%H%M%S') root_ckpoint = 'tf_checkpoints' root_logdir = 'tf_logs' logdir = '{}/run-{}/'.format(root_logdir, now) ## Try to remove tree; if failed show an error using try...except on screen try: shutil.rmtree(root_ckpoint) except OSError as e: print ("Error: %s - %s." 
% (e.filename, e.strerror)) ``` ##### EXECUTE THE MODEL ``` from datetime import datetime # define some parameters n_epochs = 40 display_epoch = 2 # checkpoint will also be created based on this batch_size = 10 n_batches = int(n_train/batch_size) # this will help to restore the model to a specific epoch saver = tf.train.Saver(tf.global_variables()) # store the results through every epoch iteration mse_train_list = [] mse_test_list = [] learning_list = [] prediction_results = [] with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # write logs for tensorboard summary_writer = tf.summary.FileWriter(logdir, graph=tf.get_default_graph()) for epoch in range(n_epochs): for i in range(0, n_train, batch_size): # create batches X_batch = X_train[i:i+batch_size] y_batch = y_train[i:i+batch_size] # improve the model _, _summary = sess.run([training_op, merged_summary_op], feed_dict={X:X_batch, Y:y_batch}) # Write logs at every iteration summary_writer.add_summary(_summary) # measure performance and display the results if (epoch+1) % display_epoch == 0: _mse_train = sess.run(loss, feed_dict={X: X_train, Y: y_train}) _mse_test = sess.run(loss, feed_dict={X: X_test, Y: y_test}) mse_train_list.append(_mse_train); mse_test_list.append(_mse_test) learning_list.append(sess.run(learning_rate)) # Save model weights to disk for reproducibility saver = tf.train.Saver(max_to_keep=15) saver.save(sess, "{}/epoch{:04}.ckpt".format(root_ckpoint, (epoch+1))) print("Epoch: {:04}\tTrainMSE: {:06.5f}\tTestMSE: {:06.5f}, Learning: {:06.7f}".format((epoch+1), _mse_train, _mse_test, learning_list[-1])) # store the predictuve values prediction_results = sess.run(y_pred, feed_dict={X: X_test, Y: y_test}) predictions = sess.run(y_pred, feed_dict={X: X_test, Y: y_test}) # output comparative table dataframe = pd.DataFrame(predictions, columns=['Prediction']) dataframe['Target'] = y_test dataframe['Difference'] = dataframe.Target - dataframe.Prediction print('\nPrinting results :\n\n', dataframe) ``` ##### VISUALIZE THE MODEL'S IMPROVEMENTS ``` %matplotlib inline import matplotlib.pyplot as plt import matplotlib.patches as mpatches # set up legend blue_patch = mpatches.Patch(color='blue', label='Train MSE') red_patch = mpatches.Patch(color='red', label='Test MSE') plt.legend(handles=[blue_patch,red_patch]) plt.grid() # plot the data plt.plot(mse_train_list, color='blue') plt.plot(mse_test_list, color='red') plt.xlabel('epochs (x{})'.format(display_epoch)) plt.ylabel('MSE [minimize]'); ``` ##### LEARNING RATE EVOLUTION ``` or_patch = mpatches.Patch(color='orange', label='Learning rate') plt.legend(handles=[or_patch]) plt.plot(learning_list, color='orange'); plt.xlabel('epochs (x{})'.format(display_epoch)) plt.ylabel('learning rate'); ``` ##### VISUALIZE THE RESULTS ``` plt.figure(figsize=(15,10)) # define legend blue_patch = mpatches.Patch(color='blue', label='Prediction') red_patch = mpatches.Patch(color='red', label='Expected Value') green_patch = mpatches.Patch(color='green', label='Abs Error') plt.legend(handles=[blue_patch,red_patch, green_patch]) # plot data x_array = np.arange(len(prediction_results)) plt.scatter(x_array, prediction_results, color='blue') plt.scatter(x_array, y_test, color='red') abs_error = abs(y_test-prediction_results) plt.plot(x_array, abs_error, color='green') plt.grid() # define legends plt.xlabel('index'.format(display_epoch)) plt.ylabel('MEDV'); ``` ##### VISUALIZE TENSORBOARD ``` from IPython.display import clear_output, Image, display, HTML # CHECK IT ON TENSORBOARD TYPING THESE 
LINES IN THE COMMAND PROMPT: # tensorboard --logdir=/tmp/tf_logs def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = b"<stripped %d bytes>"%size return strip_def def show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe> """.format(code.replace('"', '&quot;')) display(HTML(iframe)) show_graph(tf.get_default_graph()) ``` ## ----- PREPARE THE MODEL FOR FUTURE RESTORES ----- ##### SAVED VARIABLE LIST These is the list of variables that were saved on every checkpoint after training. .data: Contains variable values .meta: Contains graph structure .index: Identifies checkpoints ``` for i, var in enumerate(saver._var_list): print('Var {}: {}'.format(i, var)) ``` ##### RESTORE TO CHECKPOINT ``` # select the epoch to be restored epoch = 38 # Running a new session print('Restoring model to Epoch {}\n'.format(epoch)) with tf.Session() as sess: # Restore variables from disk saver.restore(sess, '{}/epoch{:04}.ckpt'.format(root_ckpoint, epoch)) print('\nPrint expected values :') print(y_test) print('\nPrint predicted values :') predictions = sess.run(y_pred, feed_dict={X: X_test}) print(predictions) ```
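As an optional, hedged final check (assuming the restore cell above has run, so `predictions` and `y_test` are still in scope), the restored epoch-38 checkpoint can be scored with the same mean squared error used during training:

```
# MSE of the restored checkpoint on the test set (illustrative check only)
restored_mse = np.mean((predictions - y_test) ** 2)
print('Restored model test MSE: {:.5f}'.format(restored_mse))
```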
<div class="alert alert-block alert-info"> Section of the book chapter: <b>5.3 Model Selection, Optimization and Evaluation</b> </div> # 5. Model Selection and Evaluation **Table of Contents** * [5.1 Hyperparameter Optimization](#5.1-Hyperparameter-Optimization) * [5.2 Model Evaluation](#5.2-Model-Evaluation) **Learnings:** - how to optimize machine learning (ML) models with grid search, random search and Bayesian optimization, - how to evaluate ML models. ### Packages ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' # ignore warnings import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import matplotlib as mpl from sklearn.ensemble import RandomForestRegressor import utils ``` ### Read in Data **Dataset:** Felix M. Riese and Sina Keller, "Hyperspectral benchmark dataset on soil moisture", Dataset, Zenodo, 2018. [DOI:10.5281/zenodo.1227836](http://doi.org/10.5281/zenodo.1227836) and [GitHub](https://github.com/felixriese/hyperspectral-soilmoisture-dataset) **Introducing paper:** Felix M. Riese and Sina Keller, “Introducing a Framework of Self-Organizing Maps for Regression of Soil Moisture with Hyperspectral Data,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018, pp. 6151-6154. [DOI:10.1109/IGARSS.2018.8517812](https://doi.org/10.1109/IGARSS.2018.8517812) ``` X_train, X_test, y_train, y_test = utils.get_xy_split() print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) ``` ### Fix Random State ``` np.random.seed(42) ``` *** ## 5.1 Hyperparameter Optimization Content: - [5.1.1 Grid Search](#5.1.1-Grid-Search) - [5.1.2 Randomized Search](#5.1.2-Randomized-Search) - [5.1.3 Bayesian Optimization](#5.1.3-Bayesian-Optimization) ### 5.1.1 Grid Search ``` # NBVAL_IGNORE_OUTPUT from sklearn.svm import SVR from sklearn.model_selection import GridSearchCV # example mode: support vector regressor model = SVR(kernel="rbf") # define parameter grid to be tested params = { "C": np.logspace(-4, 4, 9), "gamma": np.logspace(-4, 4, 9)} # set up grid search and run it on the data gs = GridSearchCV(model, params) %timeit gs.fit(X_train, y_train) print("R2 score = {0:.2f} %".format(gs.score(X_test, y_test)*100)) ``` ### 5.1.2 Randomized Search ``` # NBVAL_IGNORE_OUTPUT from sklearn.svm import SVR from sklearn.model_selection import RandomizedSearchCV # example mode: support vector regressor model = SVR(kernel="rbf") # define parameter grid to be tested params = { "C": np.logspace(-4, 4, 9), "gamma": np.logspace(-4, 4, 9)} # set up grid search and run it on the data gsr = RandomizedSearchCV(model, params, n_iter=15, refit=True) %timeit gsr.fit(X_train, y_train) print("R2 score = {0:.2f} %".format(gsr.score(X_test, y_test)*100)) ``` ### 5.1.3 Bayesian Optimization Implementation: [github.com/fmfn/BayesianOptimization](https://github.com/fmfn/BayesianOptimization) ``` # NBVAL_IGNORE_OUTPUT from sklearn.svm import SVR from bayes_opt import BayesianOptimization # define function to be optimized def opt_func(C, gamma): model = SVR(C=C, gamma=gamma) return model.fit(X_train, y_train).score(X_test, y_test) # set bounded region of parameter space pbounds = {'C': (1e-5, 1e4), 'gamma': (1e-5, 1e4)} # define optimizer optimizer = BayesianOptimization( f=opt_func, pbounds=pbounds, random_state=1) # optimize %time optimizer.maximize(init_points=2, n_iter=15) print("R2 score = {0:.2f} %".format(optimizer.max["target"]*100)) ``` *** ## 5.2 Model Evaluation 
Content: - [5.2.1 Generate Exemplary Data](#5.2.1-Generate-Exemplary-Data) - [5.2.2 Plot the Data](#5.2.2-Plot-the-Data) - [5.2.3 Evaluation Metrics](#5.2.3-Evaluation-Metrics) ``` import sklearn.metrics as me ``` ### 5.2.1 Generate Exemplary Data ``` ### generate example data np.random.seed(1) # define x grid x_grid = np.linspace(0, 10, 11) y_model = x_grid*0.5 # define first dataset without outlier y1 = np.array([y + np.random.normal(scale=0.2) for y in y_model]) # define second dataset with outlier y2 = np.copy(y1) y2[9] = 0.5 # define third dataset with higher variance y3 = np.array([y + np.random.normal(scale=1.0) for y in y_model]) ``` ### 5.2.2 Plot the Data ``` # plot example data fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(12,4)) fontsize = 18 titleweight = "bold" titlepad = 10 scatter_label = "Data" scatter_alpha = 0.7 scatter_s = 100 ax1.scatter(x_grid, y1, label=scatter_label, alpha=scatter_alpha, s=scatter_s) ax1.set_title("(a) Low var.", fontsize=fontsize, fontweight=titleweight, pad=titlepad) ax2.scatter(x_grid, y2, label=scatter_label, alpha=scatter_alpha, s=scatter_s) ax2.set_title("(b) Low var. + outlier", fontsize=fontsize, fontweight=titleweight, pad=titlepad) ax3.scatter(x_grid, y3, label=scatter_label, alpha=scatter_alpha, s=scatter_s) ax3.set_title("(c) Higher var.", fontsize=fontsize, fontweight=titleweight, pad=titlepad) for i, ax in enumerate([ax1, ax2, ax3]): i += 1 # red line ax.plot(x_grid, y_model, label="Model", c="tab:red", linestyle="dashed", linewidth=4, alpha=scatter_alpha) # x-axis cosmetics ax.set_xlabel("x in a.u.", fontsize=fontsize) for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(fontsize) # y-axis cosmetics if i != 1: ax.set_yticklabels([]) else: ax.set_ylabel("y in a.u.", fontsize=fontsize, rotation=90) for tick in ax.yaxis.get_major_ticks(): tick.label.set_fontsize(fontsize) ax.set_xlim(-0.5, 10.5) ax.set_ylim(-0.5, 6.5) # ax.set_title("Example "+str(i), fontsize=fontsize) if i == 2: ax.legend(loc=2, fontsize=fontsize*1.0, frameon=True) plt.tight_layout() plt.savefig("plots/metrics_plot.pdf", bbox_inches="tight") ``` ### 5.2.3 Evaluation Metrics ``` # calculating the metrics for i, y in enumerate([y1, y2, y3]): print("Example", i+1) print("- MAE = {:.2f}".format(me.mean_absolute_error(y_model, y))) print("- MSE = {:.2f}".format(me.mean_squared_error(y_model, y))) print("- RMSE = {:.2f}".format(np.sqrt(me.mean_squared_error(y_model, y)))) print("- R2 = {:.2f}%".format(me.r2_score(y_model, y)*100)) print("-"*20) ```
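For reference, here is a hedged NumPy-only sketch of what these metrics compute, shown for the first example dataset (`y1` versus the model line `y_model`, matching the `me.r2_score(y_model, y)` call above):

```
# manual versions of the metrics above, for the first example dataset
residuals = y1 - y_model
mae = np.mean(np.abs(residuals))            # mean absolute error
rmse = np.sqrt(np.mean(residuals ** 2))     # root mean squared error
# R2 with y_model in the role of y_true, as in me.r2_score(y_model, y1)
r2 = 1 - np.sum(residuals ** 2) / np.sum((y_model - y_model.mean()) ** 2)
print("manual: MAE = {:.2f}, RMSE = {:.2f}, R2 = {:.2f}%".format(mae, rmse, r2 * 100))
```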
``` import numpy as np import matplotlib.pyplot as plt from sympy import Symbol, integrate %matplotlib notebook ``` ### Smooth local paths We will use cubic spirals to generate smooth local paths. Without loss of generality, as $\theta$ smoothly changes from 0 to 1, we impose a condition on the curvature as follows $\kappa = f'(\theta) = K(\theta(1-\theta))^n $ This ensures curvature vanishes at the beginning and end of the path. Integrating, the yaw changes as $\theta = \int_0^x f'(\theta)d\theta$ With $n = 1$ we get a cubic spiral, $n=2$ we get a quintic spiral and so on. Let us use the sympy package to find the family of spirals 1. Declare $x$ a Symbol 2. You want to find Integral of $f'(x)$ 3. You can choose $K$ so that all coefficients are integers Verify if $\theta(0) = 0$ and $\theta(1) = 1$ ``` K = 30#choose for cubic/quintic n = 2#choose for cubic/ quintic x = Symbol('x')#declare as Symbol print(integrate(K*(x*(1-x))**n, x)) # complete the expression #write function to compute a cubic spiral #input/ output can be any theta def cubic_spiral(theta_i, theta_f, n=10): x = np.linspace(0, 1, num=n) theta = (-2*x**3 + 3*x**2) * (theta_f-theta_i) + theta_i return theta # pass def quintic_spiral(theta_i, theta_f, n=10): x = np.linspace(0, 1, num=n) theta = (6*x**5 - 15*x**4 + 10*x**3)* (theta_f-theta_i) + theta_i return theta # pass def circular_spiral(theta_i, theta_f, n=10): x = np.linspace(0, 1, num=n) theta = x* (theta_f-theta_i) + theta_i return theta ``` ### Plotting Plot cubic, quintic spirals along with how $\theta$ will change when moving in a circular arc. Remember circular arc is when $\omega $ is constant ``` theta_i = 1.57 theta_f = 0 n = 10 x = np.linspace(0, 1, num=n) plt.figure() plt.plot(x,circular_spiral(theta_i, theta_f, n),label='Circular') plt.plot(x,cubic_spiral(theta_i, theta_f, n), label='Cubic') plt.plot(x,quintic_spiral(theta_i, theta_f, n), label='Quintic') plt.grid() plt.legend() ``` ## Trajectory Using the spirals, convert them to trajectories $\{(x_i,y_i,\theta_i)\}$. Remember the unicycle model $dx = v\cos \theta dt$ $dy = v\sin \theta dt$ $\theta$ is given by the spiral functions you just wrote. Use cumsum() in numpy to calculate {} What happens when you change $v$? ``` v = 1 dt = 0.1 theta_i = 1.57 theta_f = 0 n = 100 theta_cubic = cubic_spiral(theta_i, theta_f, n) theta_quintic = quintic_spiral(theta_i, theta_f, int(n+(23/1000)*n)) theta_circular = circular_spiral(theta_i, theta_f, int(n-(48/1000)*n)) # print(theta) def trajectory(v,dt,theta): dx = v*np.cos(theta) *dt dy = v*np.sin(theta) *dt # print(dx) x = np.cumsum(dx) y = np.cumsum(dy) return x,y # plot trajectories for circular/ cubic/ quintic plt.figure() plt.plot(*trajectory(v,dt,theta_circular), label='Circular') plt.plot(*trajectory(v,dt,theta_cubic), label='Cubic') plt.plot(*trajectory(v,dt,theta_quintic), label='Quintic') plt.grid() plt.legend() ``` ## Symmetric poses We have been doing only examples with $|\theta_i - \theta_f| = \pi/2$. What about other orientation changes? Given below is an array of terminal angles (they are in degrees!). 
Start from 0 deg and plot the family of trajectories ``` dt = 0.1 thetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians plt.figure() for tf in thetas: t = cubic_spiral(0, np.deg2rad(tf),50) x = np.cumsum(np.cos(t)*dt) y = np.cumsum(np.sin(t)*dt) plt.plot(x, y, label=f'0 to {tf} degree') plt.grid() plt.legend() # On the same plot, move from 180 to 180 - theta #thetas = plt.figure() for tf in thetas: t = cubic_spiral(np.pi, np.pi-np.deg2rad(tf),50) x = np.cumsum(np.cos(t)*dt) y = np.cumsum(np.sin(t)*dt) plt.plot(x, y, label=f'180 to {180-tf} degree') plt.grid() plt.legend() ``` Modify your code to print the following for the positive terminal angles $\{\theta_f\}$ 1. Final x, y position in corresponding trajectory: $x_f, y_f$ 2. $\frac{y_f}{x_f}$ and $\tan \frac{\theta_f}{2}$ What do you notice? What happens when $v$ is doubled? ``` dt = 0.1 thetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians # plt.figure() for tf in thetas: t = cubic_spiral(0, np.deg2rad(tf),50) x = np.cumsum(np.cos(t)*dt) y = np.cumsum(np.sin(t)*dt) print(f'tf: {tf} x_f : {x[-1]} y_f: {y[-1]} y_f/x_f : {y[-1]/x[-1]} tan (theta_f/2) : {np.tan(np.deg2rad(tf)/2)}') ``` These are called *symmetric poses*. With this spiral-fitting approach, only symmetric poses can be reached. In order to move between any 2 arbitrary poses, you will have to find an intermediate pose that is pair-wise symmetric to the start and the end pose. What should be the intermediate pose? There are infinite possibilities. We would have to formulate it as an optimization problem. As they say, that has to be left for another time! ``` ```
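A hedged note on why the printed ratios match: for these spirals the heading profile $\theta(t)$ is symmetric about the midpoint of the path, so the chord from start to goal points along the average heading $\frac{1}{2}(\theta_i + \theta_f)$. With $\theta_i = 0$ this gives

$$\frac{y_f}{x_f} = \tan\frac{\theta_f}{2},$$

which is exactly the symmetric-pose condition; doubling $v$ scales $x_f$ and $y_f$ by the same factor and leaves the ratio unchanged.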
## Add cancer analysis Analysis of results from `run_add_cancer_classification.py`. We hypothesized that adding cancers in a principled way (e.g. by similarity to the target cancer) would lead to improved performance relative to both a single-cancer model (using only the target cancer type), and a pan-cancer model using all cancer types without regard for similarity to the target cancer. Script parameters: * RESULTS_DIR: directory to read experiment results from * IDENTIFIER: {gene}\_{cancer_type} target identifier to plot results for ``` import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pancancer_evaluation.config as cfg import pancancer_evaluation.utilities.analysis_utilities as au RESULTS_DIR = os.path.join(cfg.repo_root, 'add_cancer_results', 'add_cancer') ``` ### Load data ``` add_cancer_df = au.load_add_cancer_results(RESULTS_DIR, load_cancer_types=True) print(add_cancer_df.shape) add_cancer_df.sort_values(by=['gene', 'holdout_cancer_type']).head() # load data from previous single-cancer and pan-cancer experiments # this is to put the add cancer results in the context of our previous results pancancer_dir = os.path.join(cfg.results_dir, 'pancancer') pancancer_dir2 = os.path.join(cfg.results_dir, 'vogelstein_s1_results', 'pancancer') single_cancer_dir = os.path.join(cfg.results_dir, 'single_cancer') single_cancer_dir2 = os.path.join(cfg.results_dir, 'vogelstein_s1_results', 'single_cancer') single_cancer_df1 = au.load_prediction_results(single_cancer_dir, 'single_cancer') single_cancer_df2 = au.load_prediction_results(single_cancer_dir2, 'single_cancer') single_cancer_df = pd.concat((single_cancer_df1, single_cancer_df2)) print(single_cancer_df.shape) single_cancer_df.head() pancancer_df1 = au.load_prediction_results(pancancer_dir, 'pancancer') pancancer_df2 = au.load_prediction_results(pancancer_dir2, 'pancancer') pancancer_df = pd.concat((pancancer_df1, pancancer_df2)) print(pancancer_df.shape) pancancer_df.head() single_cancer_comparison_df = au.compare_results(single_cancer_df, identifier='identifier', metric='aupr', correction=True, correction_alpha=0.001, verbose=False) pancancer_comparison_df = au.compare_results(pancancer_df, identifier='identifier', metric='aupr', correction=True, correction_alpha=0.001, verbose=False) experiment_comparison_df = au.compare_results(single_cancer_df, pancancer_df=pancancer_df, identifier='identifier', metric='aupr', correction=True, correction_alpha=0.05, verbose=False) experiment_comparison_df.sort_values(by='corr_pval').head(n=10) ``` ### Plot change in performance as cancers are added ``` IDENTIFIER = 'BRAF_COAD' # IDENTIFIER = 'EGFR_ESCA' # IDENTIFIER = 'EGFR_LGG' # IDENTIFIER = 'KRAS_CESC' # IDENTIFIER = 'PIK3CA_ESCA' # IDENTIFIER = 'PIK3CA_STAD' # IDENTIFIER = 'PTEN_COAD' # IDENTIFIER = 'PTEN_BLCA' # IDENTIFIER = 'TP53_OV' # IDENTIFIER = 'NF1_GBM' GENE = IDENTIFIER.split('_')[0] gene_df = add_cancer_df[(add_cancer_df.gene == GENE) & (add_cancer_df.data_type == 'test') & (add_cancer_df.signal == 'signal')].copy() # make seaborn treat x axis as categorical gene_df.num_train_cancer_types = gene_df.num_train_cancer_types.astype(str) gene_df.loc[(gene_df.num_train_cancer_types == '-1'), 'num_train_cancer_types'] = 'all' sns.set({'figure.figsize': (14, 6)}) sns.pointplot(data=gene_df, x='num_train_cancer_types', y='aupr', hue='identifier', order=['0', '1', '2', '4', 'all']) plt.legend(bbox_to_anchor=(1.15, 0.5), loc='center right', borderaxespad=0., title='Cancer type') 
plt.title('Adding cancer types by confusion matrix similarity, {} mutation prediction'.format(GENE), size=13) plt.xlabel('Number of added cancer types', size=13) plt.ylabel('AUPR', size=13) id_df = add_cancer_df[(add_cancer_df.identifier == IDENTIFIER) & (add_cancer_df.data_type == 'test') & (add_cancer_df.signal == 'signal')].copy() # make seaborn treat x axis as categorical id_df.num_train_cancer_types = id_df.num_train_cancer_types.astype(str) id_df.loc[(id_df.num_train_cancer_types == '-1'), 'num_train_cancer_types'] = 'all' sns.set({'figure.figsize': (14, 6)}) cat_order = ['0', '1', '2', '4', 'all'] sns.pointplot(data=id_df, x='num_train_cancer_types', y='aupr', hue='identifier', order=cat_order) plt.legend([],[], frameon=False) plt.title('Adding cancer types by confusion matrix similarity, {} mutation prediction'.format(IDENTIFIER), size=13) plt.xlabel('Number of added cancer types', size=13) plt.ylabel('AUPR', size=13) # annotate points with cancer types def label_points(x, y, cancer_types, gene, ax): a = pd.DataFrame({'x': x, 'y': y, 'cancer_types': cancer_types}) for i, point in a.iterrows(): if gene in ['TP53', 'PIK3CA'] and point['x'] == 4: ax.text(point['x']+0.05, point['y']+0.005, str(point['cancer_types'].replace(' ', '\n')), bbox=dict(facecolor='none', edgecolor='black', boxstyle='round'), ha='left', va='center') else: ax.text(point['x']+0.05, point['y']+0.005, str(point['cancer_types'].replace(' ', '\n')), bbox=dict(facecolor='none', edgecolor='black', boxstyle='round')) cat_to_loc = {c: i for i, c in enumerate(cat_order)} group_id_df = ( id_df.groupby(['num_train_cancer_types', 'train_cancer_types']) .mean() .reset_index() ) label_points([cat_to_loc[c] for c in group_id_df.num_train_cancer_types], group_id_df.aupr, group_id_df.train_cancer_types, GENE, plt.gca()) ``` ### Plot gene/cancer type "best model" performance vs. single/pan-cancer models ``` id_df = add_cancer_df[(add_cancer_df.identifier == IDENTIFIER) & (add_cancer_df.data_type == 'test')].copy() best_num = ( id_df[id_df.signal == 'signal'] .groupby('num_train_cancer_types') .mean() .reset_index() .sort_values(by='aupr', ascending=False) .iloc[0, 0] ) print(best_num) best_id_df = ( id_df.loc[id_df.num_train_cancer_types == best_num, :] .drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types']) ) best_id_df['train_set'] = 'best_add' sc_id_df = ( id_df.loc[id_df.num_train_cancer_types == 1, :] .drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types']) ) sc_id_df['train_set'] = 'single_cancer' pc_id_df = ( id_df.loc[id_df.num_train_cancer_types == -1, :] .drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types']) ) pc_id_df['train_set'] = 'pancancer' all_id_df = pd.concat((sc_id_df, best_id_df, pc_id_df), sort=False) all_id_df.head() sns.set() sns.boxplot(data=all_id_df, x='train_set', y='aupr', hue='signal', hue_order=['signal', 'shuffled']) plt.title('{}, single/best/pancancer predictors'.format(IDENTIFIER)) plt.xlabel('Training data') plt.ylabel('AUPR') plt.legend(title='Signal') print('Single cancer significance: {}'.format( single_cancer_comparison_df.loc[single_cancer_comparison_df.identifier == IDENTIFIER, 'reject_null'].values[0] )) print('Pan-cancer significance: {}'.format( pancancer_comparison_df.loc[pancancer_comparison_df.identifier == IDENTIFIER, 'reject_null'].values[0] )) # Q2: where is this example in the single vs. pan-cancer volcano plot? 
# see pancancer only experiments for an example of this sort of thing experiment_comparison_df['nlog10_p'] = -np.log(experiment_comparison_df.corr_pval) sns.set({'figure.figsize': (8, 6)}) sns.scatterplot(data=experiment_comparison_df, x='delta_mean', y='nlog10_p', hue='reject_null', alpha=0.3) plt.xlabel('AUPRC(pancancer) - AUPRC(single cancer)') plt.ylabel(r'$-\log_{10}($adjusted p-value$)$') plt.title('Highlight {} in pancancer vs. single-cancer comparison'.format(IDENTIFIER)) def highlight_id(x, y, val, ax, id_to_plot): a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1) for i, point in a.iterrows(): if point['val'] == id_to_plot: ax.scatter(point['x'], point['y'], color='red', marker='+', s=100) highlight_id(experiment_comparison_df.delta_mean, experiment_comparison_df.nlog10_p, experiment_comparison_df.identifier, plt.gca(), IDENTIFIER) ``` Overall, these results weren't quite as convincing as we were expecting. Although there are a few gene/cancer type combinations where there is a clear improvement when one or two relevant cancer types are added, overall there isn't much change in many cases (see first line plots of multiple cancer types). Biologically speaking, this isn't too surprising for a few reasons: * Some genes aren’t drivers in certain cancer types * Some genes have very cancer-specific effects * Some genes (e.g. TP53) have very well-preserved effects across all cancers We think there could be room for improvement as far as cancer type selection (some of the cancers chosen don't make a ton of sense), but overall we're a bit skeptical that this approach will lead to models that generalize better than a single-cancer model in most cases.
``` %matplotlib inline ``` 분류기(Classifier) 학습하기 ============================ 지금까지 어떻게 신경망을 정의하고, 손실을 계산하며 또 가중치를 갱신하는지에 대해서 배웠습니다. 이제 아마도 이런 생각을 하고 계실텐데요, 데이터는 어떻게 하나요? ------------------------ 일반적으로 이미지나 텍스트, 오디오나 비디오 데이터를 다룰 때는 표준 Python 패키지를 이용하여 NumPy 배열로 불러오면 됩니다. 그 후 그 배열을 ``torch.*Tensor`` 로 변환합니다. - 이미지는 Pillow나 OpenCV 같은 패키지가 유용합니다. - 오디오를 처리할 때는 SciPy와 LibROSA가 유용하고요. - 텍스트의 경우에는 그냥 Python이나 Cython을 사용해도 되고, NLTK나 SpaCy도 유용합니다. 특별히 영상 분야를 위한 ``torchvision`` 이라는 패키지가 만들어져 있는데, 여기에는 Imagenet이나 CIFAR10, MNIST 등과 같이 일반적으로 사용하는 데이터셋을 위한 데이터 로더(data loader), 즉 ``torchvision.datasets`` 과 이미지용 데이터 변환기 (data transformer), 즉 ``torch.utils.data.DataLoader`` 가 포함되어 있습니다. 이러한 기능은 엄청나게 편리하며, 매번 유사한 코드(boilerplate code)를 반복해서 작성하는 것을 피할 수 있습니다. 이 튜토리얼에서는 CIFAR10 데이터셋을 사용합니다. 여기에는 다음과 같은 분류들이 있습니다: '비행기(airplane)', '자동차(automobile)', '새(bird)', '고양이(cat)', '사슴(deer)', '개(dog)', '개구리(frog)', '말(horse)', '배(ship)', '트럭(truck)'. 그리고 CIFAR10에 포함된 이미지의 크기는 3x32x32로, 이는 32x32 픽셀 크기의 이미지가 3개 채널(channel)의 색상로 이뤄져 있다는 것을 뜻합니다. .. figure:: /_static/img/cifar10.png :alt: cifar10 cifar10 이미지 분류기 학습하기 ---------------------------- 다음과 같은 단계로 진행해보겠습니다: 1. ``torchvision`` 을 사용하여 CIFAR10의 학습용 / 시험용 데이터셋을 불러오고, 정규화(nomarlizing)합니다. 2. 합성곱 신경망(Convolution Neural Network)을 정의합니다. 3. 손실 함수를 정의합니다. 4. 학습용 데이터를 사용하여 신경망을 학습합니다. 5. 시험용 데이터를 사용하여 신경망을 검사합니다. 1. CIFAR10을 불러오고 정규화하기 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``torchvision`` 을 사용하여 매우 쉽게 CIFAR10을 불러올 수 있습니다. ``` import torch import torchvision import torchvision.transforms as transforms ``` torchvision 데이터셋의 출력(output)은 [0, 1] 범위를 갖는 PILImage 이미지입니다. 이를 [-1, 1]의 범위로 정규화된 Tensor로 변환합니다. <div class="alert alert-info"><h4>Note</h4><p>만약 Windows 환경에서 BrokenPipeError가 발생한다면, torch.utils.data.DataLoader()의 num_worker를 0으로 설정해보세요.</p></div> ``` transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) batch_size = 4 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') ``` 재미삼아 학습용 이미지 몇 개를 보겠습니다. ``` import matplotlib.pyplot as plt import numpy as np # 이미지를 보여주기 위한 함수 def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # 학습용 이미지를 무작위로 가져오기 dataiter = iter(trainloader) images, labels = dataiter.next() # 이미지 보여주기 imshow(torchvision.utils.make_grid(images)) # 정답(label) 출력 print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size))) ``` 2. 합성곱 신경망(Convolution Neural Network) 정의하기 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 이전의 신경망 섹션에서 신경망을 복사한 후, (기존에 1채널 이미지만 처리하도록 정의된 것을) 3채널 이미지를 처리할 수 있도록 수정합니다. 
``` import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) # 배치를 제외한 모든 차원을 평탄화(flatten) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() ``` 3. 손실 함수와 Optimizer 정의하기 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 교차 엔트로피 손실(Cross-Entropy loss)과 모멘텀(momentum) 값을 갖는 SGD를 사용합니다. ``` import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ``` 4. 신경망 학습하기 ^^^^^^^^^^^^^^^^^^^^ 이제 재미있는 부분이 시작됩니다. 단순히 데이터를 반복해서 신경망에 입력으로 제공하고, 최적화(Optimize)만 하면 됩니다. ``` for epoch in range(2): # 데이터셋을 수차례 반복합니다. running_loss = 0.0 for i, data in enumerate(trainloader, 0): # [inputs, labels]의 목록인 data로부터 입력을 받은 후; inputs, labels = data # 변화도(Gradient) 매개변수를 0으로 만들고 optimizer.zero_grad() # 순전파 + 역전파 + 최적화를 한 후 outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # 통계를 출력합니다. running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') ``` 학습한 모델을 저장해보겠습니다: ``` PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) ``` PyTorch 모델을 저장하는 자세한 방법은 `여기 <https://pytorch.org/docs/stable/notes/serialization.html>`_ 를 참조해주세요. 5. 시험용 데이터로 신경망 검사하기 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 지금까지 학습용 데이터셋을 2회 반복하며 신경망을 학습시켰습니다. 신경망이 전혀 배운게 없을지도 모르니 확인해봅니다. 신경망이 예측한 출력과 진짜 정답(Ground-truth)을 비교하는 방식으로 확인합니다. 만약 예측이 맞다면 샘플을 '맞은 예측값(correct predictions)' 목록에 넣겠습니다. 첫번째로 시험용 데이터를 좀 보겠습니다. ``` dataiter = iter(testloader) images, labels = dataiter.next() # 이미지를 출력합니다. imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) ``` 이제, 저장했던 모델을 불러오도록 하겠습니다 (주: 모델을 저장하고 다시 불러오는 작업은 여기에서는 불필요하지만, 어떻게 하는지 설명을 위해 해보겠습니다): ``` net = Net() net.load_state_dict(torch.load(PATH)) ``` 좋습니다, 이제 이 예제들을 신경망이 어떻게 예측했는지를 보겠습니다: ``` outputs = net(images) ``` 출력은 10개 분류 각각에 대한 값으로 나타납니다. 어떤 분류에 대해서 더 높은 값이 나타난다는 것은, 신경망이 그 이미지가 해당 분류에 더 가깝다고 생각한다는 것입니다. 따라서, 가장 높은 값을 갖는 인덱스(index)를 뽑아보겠습니다: ``` _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) ``` 결과가 괜찮아보이네요. 그럼 전체 데이터셋에 대해서는 어떻게 동작하는지 보겠습니다. ``` correct = 0 total = 0 # 학습 중이 아니므로, 출력에 대한 변화도를 계산할 필요가 없습니다 with torch.no_grad(): for data in testloader: images, labels = data # 신경망에 이미지를 통과시켜 출력을 계산합니다 outputs = net(images) # 가장 높은 값(energy)를 갖는 분류(class)를 정답으로 선택하겠습니다 _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) ``` (10가지 분류 중에 하나를 무작위로) 찍었을 때의 정확도인 10% 보다는 나아보입니다. 신경망이 뭔가 배우긴 한 것 같네요. 
그럼 어떤 것들을 더 잘 분류하고, 어떤 것들을 더 못했는지 알아보겠습니다: ``` # 각 분류(class)에 대한 예측값 계산을 위해 준비 correct_pred = {classname: 0 for classname in classes} total_pred = {classname: 0 for classname in classes} # 변화도는 여전히 필요하지 않습니다 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predictions = torch.max(outputs, 1) # 각 분류별로 올바른 예측 수를 모읍니다 for label, prediction in zip(labels, predictions): if label == prediction: correct_pred[classes[label]] += 1 total_pred[classes[label]] += 1 # 각 분류별 정확도(accuracy)를 출력합니다 for classname, correct_count in correct_pred.items(): accuracy = 100 * float(correct_count) / total_pred[classname] print("Accuracy for class {:5s} is: {:.1f} %".format(classname, accuracy)) ``` 자, 이제 다음으로 무엇을 해볼까요? 이러한 신경망들을 GPU에서 실행하려면 어떻게 해야 할까요? GPU에서 학습하기 ---------------- Tensor를 GPU로 이동했던 것처럼, 신경망 또한 GPU로 옮길 수 있습니다. 먼저 (CUDA를 사용할 수 있다면) 첫번째 CUDA 장치를 사용하도록 설정합니다: ``` device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # CUDA 기기가 존재한다면, 아래 코드가 CUDA 장치를 출력합니다: print(device) ``` 이 섹션의 나머지 부분에서는 ``device`` 를 CUDA 장치라고 가정하겠습니다. 그리고 이 메소드(Method)들은 재귀적으로 모든 모듈의 매개변수와 버퍼를 CUDA tensor로 변경합니다: .. code:: python net.to(device) 또한, 각 단계에서 입력(input)과 정답(target)도 GPU로 보내야 한다는 것도 기억해야 합니다: .. code:: python inputs, labels = data[0].to(device), data[1].to(device) CPU와 비교했을 때 어마어마한 속도 차이가 나지 않는 것은 왜 그럴까요? 그 이유는 바로 신경망이 너무 작기 때문입니다. **연습:** 신경망의 크기를 키워보고, 얼마나 빨라지는지 확인해보세요. (첫번째 ``nn.Conv2d`` 의 2번째 인자와 두번째 ``nn.Conv2d`` 의 1번째 인자는 같은 숫자여야 합니다.) **다음 목표들을 달성했습니다**: - 높은 수준에서 PyTorch의 Tensor library와 신경망을 이해합니다. - 이미지를 분류하는 작은 신경망을 학습시킵니다. 여러개의 GPU에서 학습하기 ------------------------- 모든 GPU를 활용해서 더욱 더 속도를 올리고 싶다면, :doc:`data_parallel_tutorial` 을 참고하세요. 이제 무엇을 해볼까요? ----------------------- - :doc:`비디오 게임을 할 수 있는 신경망 학습시키기 </intermediate/reinforcement_q_learning>` - `imagenet으로 최첨단(state-of-the-art) ResNet 신경망 학습시키기`_ - `적대적 생성 신경망으로 얼굴 생성기 학습시키기`_ - `순환 LSTM 네트워크를 사용해 단어 단위 언어 모델 학습시키기`_ - `다른 예제들 참고하기`_ - `더 많은 튜토리얼 보기`_ - `포럼에서 PyTorch에 대해 얘기하기`_ - `Slack에서 다른 사용자와 대화하기`_ ``` # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% del dataiter # %%%%%%INVISIBLE_CODE_BLOCK%%%%%% ```
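For completeness, a minimal hedged sketch that puts the two GPU snippets above into one training loop (assuming `device`, `net`, `criterion`, `optimizer` and `trainloader` as defined earlier; this is not part of the original tutorial):

```
# one epoch of training on the GPU, combining the .to(device) calls shown above
net.to(device)  # move the model parameters to the CUDA device (if available)
for inputs, labels in trainloader:
    # every batch must live on the same device as the model
    inputs, labels = inputs.to(device), labels.to(device)
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
```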
# Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> ``` import time from collections import namedtuple import numpy as np import tensorflow as tf ``` First we'll load the text file and convert it into integers for our network to use. ``` with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) text[:100] chars[:100] ``` Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set. ``` def split_data(chars, batch_size, num_steps, split_frac=0.9): """ Split character data into training and validation sets, inputs and targets for each set. Arguments --------- chars: character array batch_size: Size of examples in each of batch num_steps: Number of sequence steps to keep in the input and pass to the network split_frac: Fraction of batches to keep in the training set Returns train_x, train_y, val_x, val_y """ slice_size = batch_size * num_steps n_batches = int(len(chars) / slice_size) # Drop the last few characters to make only full batches x = chars[: n_batches*slice_size] y = chars[1: n_batches*slice_size + 1] # Split the data into batch_size slices, then stack them into a 2D matrix x = np.stack(np.split(x, batch_size)) y = np.stack(np.split(y, batch_size)) # Now x and y are arrays with dimensions batch_size x n_batches*num_steps # Split into training and validation sets, keep the virst split_frac batches for training split_idx = int(n_batches*split_frac) train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps] val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:] return train_x, train_y, val_x, val_y train_x, train_y, val_x, val_y = split_data(chars, 10, 200) train_x.shape train_x[:,:10] ``` I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window the next sequence of `num_steps` characters. 
In this way we can feed batches to the network and the cell states will continue through on each batch. ``` def get_batch(arrs, num_steps): batch_size, slice_size = arrs[0].shape n_batches = int(slice_size/num_steps) for b in range(n_batches): yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs] def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot') targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot') y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) keep_prob = tf.placeholder(tf.float32, name='keep_prob') # Build the RNN layers lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) initial_state = cell.zero_state(batch_size, tf.float32) # Run the data through the RNN layers rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)] outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state) final_state = tf.identity(state, name='final_state') # Reshape output so it's a bunch of rows, one row for each cell output seq_output = tf.concat(outputs, axis=1,name='seq_output') output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output') # Now connect the RNN putputs to a softmax layer and calculate the cost softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1), name='softmax_w') softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b') logits = tf.matmul(output, softmax_w) + softmax_b preds = tf.nn.softmax(logits, name='predictions') loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss') cost = tf.reduce_mean(loss, name='cost') # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) # Export the nodes export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph ``` ## Hyperparameters Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability. 
``` batch_size = 100 num_steps = 100 lstm_size = 512 num_layers = 2 learning_rate = 0.001 ``` ## Write out the graph for TensorBoard ``` model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) file_writer = tf.summary.FileWriter('./logs/1', sess.graph) ``` ## Training Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint. ``` !mkdir -p checkpoints/anna epochs = 1 save_every_n = 200 train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps) model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/anna20.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: 0.5, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batch([val_x, val_y], num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: new_state} batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed) val_loss.append(batch_loss) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) tf.train.get_checkpoint_state('checkpoints/anna') ``` ## Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. 
``` def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): prime = "Far" samples = [c for c in prime] model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt" samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) ```
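As a tiny hedged illustration of `pick_top_n` (toy numbers, not real network output): with `top_n=2` everything outside the two most likely characters is zeroed out and the rest renormalised before sampling, so only those two indices can ever be drawn:

```
# toy demonstration of pick_top_n: only the top-2 entries can be sampled
toy_preds = np.array([[0.5, 0.3, 0.1, 0.05, 0.05]])
picks = [pick_top_n(toy_preds, vocab_size=5, top_n=2) for _ in range(10)]
print(picks)  # only indices 0 and 1 appear, roughly in a 5:3 ratio
```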
# データサイエンス100本ノック(構造化データ加工編) - Python ## はじめに - 初めに以下のセルを実行してください - 必要なライブラリのインポートとデータベース(PostgreSQL)からのデータ読み込みを行います - pandas等、利用が想定されるライブラリは以下セルでインポートしています - その他利用したいライブラリがあれば適宜インストールしてください("!pip install ライブラリ名"でインストールも可能) - 処理は複数回に分けても構いません - 名前、住所等はダミーデータであり、実在するものではありません ``` import os import pandas as pd import numpy as np from datetime import datetime, date from dateutil.relativedelta import relativedelta import math import psycopg2 from sqlalchemy import create_engine from sklearn import preprocessing from sklearn.model_selection import train_test_split from imblearn.under_sampling import RandomUnderSampler pgconfig = { 'host': 'db', 'port': os.environ['PG_PORT'], 'database': os.environ['PG_DATABASE'], 'user': os.environ['PG_USER'], 'password': os.environ['PG_PASSWORD'], } # pd.read_sql用のコネクタ conn = psycopg2.connect(**pgconfig) df_customer = pd.read_sql(sql='select * from customer', con=conn) df_category = pd.read_sql(sql='select * from category', con=conn) df_product = pd.read_sql(sql='select * from product', con=conn) df_receipt = pd.read_sql(sql='select * from receipt', con=conn) df_store = pd.read_sql(sql='select * from store', con=conn) df_geocode = pd.read_sql(sql='select * from geocode', con=conn) ``` # 演習問題 --- > P-001: レシート明細のデータフレーム(df_receipt)から全項目の先頭10件を表示し、どのようなデータを保有しているか目視で確認せよ。 ``` df_receipt.head(10) ``` --- > P-002: レシート明細のデータフレーム(df_receipt)から売上日(sales_ymd)、顧客ID(customer_id)、商品コード(product_cd)、売上金額(amount)の順に列を指定し、10件表示させよ。 ``` df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']].head(10) ``` --- > P-003: レシート明細のデータフレーム(df_receipt)から売上日(sales_ymd)、顧客ID(customer_id)、商品コード(product_cd)、売上金額(amount)の順に列を指定し、10件表示させよ。ただし、sales_ymdはsales_dateに項目名を変更しながら抽出すること。 ``` df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']]. \ rename(columns={'sales_ymd': 'sales_date'}).head(10) ``` --- > P-004: レシート明細のデータフレーム(df_receipt)から売上日(sales_ymd)、顧客ID(customer_id)、商品コード(product_cd)、売上金額(amount)の順に列を指定し、以下の条件を満たすデータを抽出せよ。 > - 顧客ID(customer_id)が"CS018205000001" ``` df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']]. 
\ query('customer_id == "CS018205000001"') ``` --- > P-005: レシート明細のデータフレーム(df_receipt)から売上日(sales_ymd)、顧客ID(customer_id)、商品コード(product_cd)、売上金額(amount)の順に列を指定し、以下の条件を満たすデータを抽出せよ。 > - 顧客ID(customer_id)が"CS018205000001" > - 売上金額(amount)が1,000以上 ``` df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \ .query('customer_id == "CS018205000001" & amount >= 1000') ``` --- > P-006: レシート明細データフレーム「df_receipt」から売上日(sales_ymd)、顧客ID(customer_id)、商品コード(product_cd)、売上数量(quantity)、売上金額(amount)の順に列を指定し、以下の条件を満たすデータを抽出せよ。 > - 顧客ID(customer_id)が"CS018205000001" > - 売上金額(amount)が1,000以上または売上数量(quantity)が5以上 ``` df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'quantity', 'amount']].\ query('customer_id == "CS018205000001" & (amount >= 1000 | quantity >=5)') ``` --- > P-007: レシート明細のデータフレーム(df_receipt)から売上日(sales_ymd)、顧客ID(customer_id)、商品コード(product_cd)、売上金額(amount)の順に列を指定し、以下の条件を満たすデータを抽出せよ。 > - 顧客ID(customer_id)が"CS018205000001" > - 売上金額(amount)が1,000以上2,000以下 ``` df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \ .query('customer_id == "CS018205000001" & 1000 <= amount <= 2000') ``` --- > P-008: レシート明細のデータフレーム(df_receipt)から売上日(sales_ymd)、顧客ID(customer_id)、商品コード(product_cd)、売上金額(amount)の順に列を指定し、以下の条件を満たすデータを抽出せよ。 > - 顧客ID(customer_id)が"CS018205000001" > - 商品コード(product_cd)が"P071401019"以外 ``` df_receipt[['sales_ymd', 'customer_id', 'product_cd', 'amount']] \ .query('customer_id == "CS018205000001" & product_cd != "P071401019"') ``` --- > P-009: 以下の処理において、出力結果を変えずにORをANDに書き換えよ。 `df_store.query('not(prefecture_cd == "13" | floor_area > 900)')` ``` df_store.query('prefecture_cd != "13" & floor_area <= 900') ``` --- > P-010: 店舗データフレーム(df_store)から、店舗コード(store_cd)が"S14"で始まるものだけ全項目抽出し、10件だけ表示せよ。 ``` df_store.query("store_cd.str.startswith('S14')", engine='python').head(10) ``` --- > P-011: 顧客データフレーム(df_customer)から顧客ID(customer_id)の末尾が1のものだけ全項目抽出し、10件だけ表示せよ。 ``` df_customer.query("customer_id.str.endswith('1')", engine='python').head(10) ``` --- > P-012: 店舗データフレーム(df_store)から横浜市の店舗だけ全項目表示せよ。 ``` df_store.query("address.str.contains('横浜市')", engine='python') ``` --- > P-013: 顧客データフレーム(df_customer)から、ステータスコード(status_cd)の先頭がアルファベットのA〜Fで始まるデータを全項目抽出し、10件だけ表示せよ。 ``` df_customer.query("status_cd.str.contains('^[A-F]', regex=True)", engine='python').head(10) ``` --- > P-014: 顧客データフレーム(df_customer)から、ステータスコード(status_cd)の末尾が数字の1〜9で終わるデータを全項目抽出し、10件だけ表示せよ。 ``` df_customer.query("status_cd.str.contains('[1-9]$', regex=True)", engine='python').head(10) ``` --- > P-015: 顧客データフレーム(df_customer)から、ステータスコード(status_cd)の先頭がアルファベットのA〜Fで始まり、末尾が数字の1〜9で終わるデータを全項目抽出し、10件だけ表示せよ。 ``` df_customer.query("status_cd.str.contains('^[A-F].*[1-9]$', regex=True)", engine='python').head(10) ``` --- > P-016: 店舗データフレーム(df_store)から、電話番号(tel_no)が3桁-3桁-4桁のデータを全項目表示せよ。 ``` df_store.query("tel_no.str.contains('^[0-9]{3}-[0-9]{3}-[0-9]{4}$',regex=True)", engine='python') ``` --- > P-17: 顧客データフレーム(df_customer)を生年月日(birth_day)で高齢順にソートし、先頭10件を全項目表示せよ。 ``` df_customer.sort_values('birth_day', ascending=True).head(10) ``` --- > P-18: 顧客データフレーム(df_customer)を生年月日(birth_day)で若い順にソートし、先頭10件を全項目表示せよ。 ``` df_customer.sort_values('birth_day', ascending=False).head(10) ``` --- > P-19: レシート明細データフレーム(df_receipt)に対し、1件あたりの売上金額(amount)が高い順にランクを付与し、先頭10件を抽出せよ。項目は顧客ID(customer_id)、売上金額(amount)、付与したランクを表示させること。なお、売上金額(amount)が等しい場合は同一順位を付与するものとする。 ``` df_tmp = pd.concat([df_receipt[['customer_id', 'amount']] ,df_receipt['amount'].rank(method='min', ascending=False)], axis=1) df_tmp.columns = ['customer_id', 'amount', 'ranking'] 
df_tmp.sort_values('ranking', ascending=True).head(10) ``` --- > P-020: レシート明細データフレーム(df_receipt)に対し、1件あたりの売上金額(amount)が高い順にランクを付与し、先頭10件を抽出せよ。項目は顧客ID(customer_id)、売上金額(amount)、付与したランクを表示させること。なお、売上金額(amount)が等しい場合でも別順位を付与すること。 ``` df_tmp = pd.concat([df_receipt[['customer_id', 'amount']] ,df_receipt['amount'].rank(method='first', ascending=False)], axis=1) df_tmp.columns = ['customer_id', 'amount', 'ranking'] df_tmp.sort_values('ranking', ascending=True).head(10) ``` --- > P-021: レシート明細データフレーム(df_receipt)に対し、件数をカウントせよ。 ``` len(df_receipt) ``` --- > P-022: レシート明細データフレーム(df_receipt)の顧客ID(customer_id)に対し、ユニーク件数をカウントせよ。 ``` len(df_receipt['customer_id'].unique()) ``` --- > P-023: レシート明細データフレーム(df_receipt)に対し、店舗コード(store_cd)ごとに売上金額(amount)と売上数量(quantity)を合計せよ。 ``` df_receipt.groupby('store_cd').agg({'amount':'sum', 'quantity':'sum'}).reset_index() ``` --- > P-024: レシート明細データフレーム(df_receipt)に対し、顧客ID(customer_id)ごとに最も新しい売上日(sales_ymd)を求め、10件表示せよ。 ``` df_receipt.groupby('customer_id').sales_ymd.max().reset_index().head(10) ``` --- > P-025: レシート明細データフレーム(df_receipt)に対し、顧客ID(customer_id)ごとに最も古い売上日(sales_ymd)を求め、10件表示せよ。 ``` df_receipt.groupby('customer_id').agg({'sales_ymd':'min'}).head(10) ``` --- > P-026: レシート明細データフレーム(df_receipt)に対し、顧客ID(customer_id)ごとに最も新しい売上日(sales_ymd)と古い売上日を求め、両者が異なるデータを10件表示せよ。 ``` df_tmp = df_receipt.groupby('customer_id'). \ agg({'sales_ymd':['max','min']}).reset_index() df_tmp.columns = ["_".join(pair) for pair in df_tmp.columns] df_tmp.query('sales_ymd_max != sales_ymd_min').head(10) ``` --- > P-027: レシート明細データフレーム(df_receipt)に対し、店舗コード(store_cd)ごとに売上金額(amount)の平均を計算し、降順でTOP5を表示せよ。 ``` df_receipt.groupby('store_cd').agg({'amount':'mean'}).reset_index(). \ sort_values('amount', ascending=False).head(5) ``` --- > P-028: レシート明細データフレーム(df_receipt)に対し、店舗コード(store_cd)ごとに売上金額(amount)の中央値を計算し、降順でTOP5を表示せよ。 ``` df_receipt.groupby('store_cd').agg({'amount':'median'}).reset_index(). \ sort_values('amount', ascending=False).head(5) ``` --- > P-029: レシート明細データフレーム(df_receipt)に対し、店舗コード(store_cd)ごとに商品コード(product_cd)の最頻値を求めよ。 ``` df_receipt.groupby('store_cd').product_cd. \ apply(lambda x: x.mode()).reset_index() ``` --- > P-030: レシート明細データフレーム(df_receipt)に対し、店舗コード(store_cd)ごとに売上金額(amount)の標本分散を計算し、降順でTOP5を表示せよ。 ``` df_receipt.groupby('store_cd').amount.var(ddof=0).reset_index(). \ sort_values('amount', ascending=False).head(5) ``` --- > P-031: レシート明細データフレーム(df_receipt)に対し、店舗コード(store_cd)ごとに売上金額(amount)の標本標準偏差を計算し、降順でTOP5を表示せよ。 TIPS: PandasとNumpyでddofのデフォルト値が異なることに注意しましょう ``` Pandas: DataFrame.std(self, axis=None, skipna=None, level=None, ddof=1, numeric_only=None, **kwargs) Numpy: numpy.std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=) ``` ``` df_receipt.groupby('store_cd').amount.std(ddof=0).reset_index(). \ sort_values('amount', ascending=False).head(5) ``` --- > P-032: レシート明細データフレーム(df_receipt)の売上金額(amount)について、25%刻みでパーセンタイル値を求めよ。 ``` # コード例1 np.percentile(df_receipt['amount'], q=[25, 50, 75,100]) # コード例2 df_receipt.amount.quantile(q=np.arange(5)/4) ``` --- > P-033: レシート明細データフレーム(df_receipt)に対し、店舗コード(store_cd)ごとに売上金額(amount)の平均を計算し、330以上のものを抽出せよ。 ``` df_receipt.groupby('store_cd').amount.mean(). \ reset_index().query('amount >= 330') ``` --- > P-034: レシート明細データフレーム(df_receipt)に対し、顧客ID(customer_id)ごとに売上金額(amount)を合計して全顧客の平均を求めよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。 ``` # queryを使わない書き方 df_receipt[~df_receipt['customer_id'].str.startswith("Z")]. 
\ groupby('customer_id').amount.sum().mean() # queryを使う書き方 df_receipt.query('not customer_id.str.startswith("Z")', engine='python').groupby('customer_id').amount.sum().mean() ``` --- > P-035: レシート明細データフレーム(df_receipt)に対し、顧客ID(customer_id)ごとに売上金額(amount)を合計して全顧客の平均を求め、平均以上に買い物をしている顧客を抽出せよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。なお、データは10件だけ表示させれば良い。 ``` df_receipt_tmp = df_receipt[~df_receipt['customer_id'].str.startswith("Z")] amount_mean = df_receipt_tmp.groupby('customer_id').amount.sum().mean() df_amount_sum = df_receipt_tmp.groupby('customer_id').amount.sum().reset_index() df_amount_sum[df_amount_sum['amount'] >= amount_mean].head(10) ``` --- > P-036: レシート明細データフレーム(df_receipt)と店舗データフレーム(df_store)を内部結合し、レシート明細データフレームの全項目と店舗データフレームの店舗名(store_name)を10件表示させよ。 ``` pd.merge(df_receipt, df_store[['store_cd','store_name']], how='inner', on='store_cd').head(10) ``` --- > P-037: 商品データフレーム(df_product)とカテゴリデータフレーム(df_category)を内部結合し、商品データフレームの全項目とカテゴリデータフレームの小区分名(category_small_name)を10件表示させよ。 ``` pd.merge(df_product , df_category[['category_small_cd','category_small_name']] , how='inner', on='category_small_cd').head(10) ``` --- > P-038: 顧客データフレーム(df_customer)とレシート明細データフレーム(df_receipt)から、各顧客ごとの売上金額合計を求めよ。ただし、売上実績がない顧客については売上金額を0として表示させること。また、顧客は性別コード(gender_cd)が女性(1)であるものを対象とし、非会員(顧客IDが"Z"から始まるもの)は除外すること。なお、結果は10件だけ表示させれば良い。 ``` df_amount_sum = df_receipt.groupby('customer_id').amount.sum().reset_index() df_tmp = df_customer. \ query('gender_cd == "1" and not customer_id.str.startswith("Z")', engine='python') pd.merge(df_tmp['customer_id'], df_amount_sum, how='left', on='customer_id').fillna(0).head(10) ``` --- > P-039: レシート明細データフレーム(df_receipt)から売上日数の多い顧客の上位20件と、売上金額合計の多い顧客の上位20件を抽出し、完全外部結合せよ。ただし、非会員(顧客IDが"Z"から始まるもの)は除外すること。 ``` df_sum = df_receipt.groupby('customer_id').amount.sum().reset_index() df_sum = df_sum.query('not customer_id.str.startswith("Z")', engine='python') df_sum = df_sum.sort_values('amount', ascending=False).head(20) df_cnt = df_receipt[~df_receipt.duplicated(subset=['customer_id', 'sales_ymd'])] df_cnt = df_cnt.query('not customer_id.str.startswith("Z")', engine='python') df_cnt = df_cnt.groupby('customer_id').sales_ymd.count().reset_index() df_cnt = df_cnt.sort_values('sales_ymd', ascending=False).head(20) pd.merge(df_sum, df_cnt, how='outer', on='customer_id') ``` --- > P-040: 全ての店舗と全ての商品を組み合わせると何件のデータとなるか調査したい。店舗(df_store)と商品(df_product)を直積した件数を計算せよ。 ``` df_store_tmp = df_store.copy() df_product_tmp = df_product.copy() df_store_tmp['key'] = 0 df_product_tmp['key'] = 0 len(pd.merge(df_store_tmp, df_product_tmp, how='outer', on='key')) ``` --- > P-041: レシート明細データフレーム(df_receipt)の売上金額(amount)を日付(sales_ymd)ごとに集計し、前日からの売上金額増減を計算せよ。なお、計算結果は10件表示すればよい。 ``` df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']].\ groupby('sales_ymd').sum().reset_index() df_sales_amount_by_date = pd.concat([df_sales_amount_by_date, df_sales_amount_by_date.shift()], axis=1) df_sales_amount_by_date.columns = ['sales_ymd','amount','lag_ymd','lag_amount'] df_sales_amount_by_date['diff_amount'] = \ df_sales_amount_by_date['amount'] - df_sales_amount_by_date['lag_amount'] df_sales_amount_by_date.head(10) ``` --- > P-042: レシート明細データフレーム(df_receipt)の売上金額(amount)を日付(sales_ymd)ごとに集計し、各日付のデータに対し、1日前、2日前、3日前のデータを結合せよ。結果は10件表示すればよい。 ``` # コード例1:縦持ちケース df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']]. 
\ groupby('sales_ymd').sum().reset_index() for i in range(1, 4): if i == 1: df_lag = pd.concat([df_sales_amount_by_date, df_sales_amount_by_date.shift(i)],axis=1) else: df_lag = df_lag.append(pd.concat([df_sales_amount_by_date, df_sales_amount_by_date.shift(i)], axis=1)) df_lag.columns = ['sales_ymd', 'amount', 'lag_ymd', 'lag_amount'] df_lag.dropna().sort_values(['sales_ymd','lag_ymd']).head(10) # コード例2:横持ちケース df_sales_amount_by_date = df_receipt[['sales_ymd', 'amount']].\ groupby('sales_ymd').sum().reset_index() for i in range(1, 4): if i == 1: df_lag = pd.concat([df_sales_amount_by_date, df_sales_amount_by_date.shift(i)],axis=1) else: df_lag = pd.concat([df_lag, df_sales_amount_by_date.shift(i)],axis=1) df_lag.columns = ['sales_ymd', 'amount', 'lag_ymd_1', 'lag_amount_1', 'lag_ymd_2', 'lag_amount_2', 'lag_ymd_3', 'lag_amount_3'] df_lag.dropna().sort_values(['sales_ymd']).head(10) ``` --- > P-043: レシート明細データフレーム(df_receipt)と顧客データフレーム(df_customer)を結合し、性別(gender)と年代(ageから計算)ごとに売上金額(amount)を合計した売上サマリデータフレーム(df_sales_summary)を作成せよ。性別は0が男性、1が女性、9が不明を表すものとする。 > > ただし、項目構成は年代、女性の売上金額、男性の売上金額、性別不明の売上金額の4項目とすること(縦に年代、横に性別のクロス集計)。また、年代は10歳ごとの階級とすること。 ``` # コード例1 df_tmp = pd.merge(df_receipt, df_customer, how ='inner', on="customer_id") df_tmp['era'] = df_tmp['age'].apply(lambda x: math.floor(x / 10) * 10) df_sales_summary = pd.pivot_table(df_tmp, index='era', columns='gender_cd', values='amount', aggfunc='sum').reset_index() df_sales_summary.columns = ['era', 'male', 'female', 'unknown'] df_sales_summary # コード例2 df_tmp = pd.merge(df_receipt, df_customer, how ='inner', on="customer_id") df_tmp['era'] = np.floor(df_tmp['age'] / 10).astype(int) * 10 df_sales_summary = pd.pivot_table(df_tmp, index='era', columns='gender_cd', values='amount', aggfunc='sum').reset_index() df_sales_summary.columns = ['era', 'male', 'female', 'unknown'] df_sales_summary ``` --- > P-044: 前設問で作成した売上サマリデータフレーム(df_sales_summary)は性別の売上を横持ちさせたものであった。このデータフレームから性別を縦持ちさせ、年代、性別コード、売上金額の3項目に変換せよ。ただし、性別コードは男性を"00"、女性を"01"、不明を"99"とする。 ``` df_sales_summary = df_sales_summary.set_index('era'). \ stack().reset_index().replace({'female':'01','male':'00','unknown':'99'}). 
\ rename(columns={'level_1':'gender_cd', 0: 'amount'}) df_sales_summary ``` --- > P-045: 顧客データフレーム(df_customer)の生年月日(birth_day)は日付型でデータを保有している。これをYYYYMMDD形式の文字列に変換し、顧客ID(customer_id)とともに抽出せよ。データは10件を抽出すれば良い。 ``` pd.concat([df_customer['customer_id'], pd.to_datetime(df_customer['birth_day']).dt.strftime('%Y%m%d')], axis = 1).head(10) ``` --- > P-046: 顧客データフレーム(df_customer)の申し込み日(application_date)はYYYYMMDD形式の文字列型でデータを保有している。これを日付型に変換し、顧客ID(customer_id)とともに抽出せよ。データは10件を抽出すれば良い。 ``` pd.concat([df_customer['customer_id'], pd.to_datetime(df_customer['application_date'])], axis=1).head(10) ``` --- > P-047: レシート明細データフレーム(df_receipt)の売上日(sales_ymd)はYYYYMMDD形式の数値型でデータを保有している。これを日付型に変換し、レシート番号(receipt_no)、レシートサブ番号(receipt_sub_no)とともに抽出せよ。データは10件を抽出すれば良い。 ``` pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']], pd.to_datetime(df_receipt['sales_ymd'].astype('str'))], axis=1).head(10) ``` --- > P-048: レシート明細データフレーム(df_receipt)の売上エポック秒(sales_epoch)は数値型のUNIX秒でデータを保有している。これを日付型に変換し、レシート番号(receipt_no)、レシートサブ番号(receipt_sub_no)とともに抽出せよ。データは10件を抽出すれば良い。 ``` pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']], pd.to_datetime(df_receipt['sales_epoch'], unit='s')], axis=1).head(10) ``` --- > P-049: レシート明細データフレーム(df_receipt)の売上エポック秒(sales_epoch)を日付型に変換し、「年」だけ取り出してレシート番号(receipt_no)、レシートサブ番号(receipt_sub_no)とともに抽出せよ。データは10件を抽出すれば良い。 ``` pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']], pd.to_datetime(df_receipt['sales_epoch'], unit='s').dt.year], axis=1).head(10) ``` --- > P-050: レシート明細データフレーム(df_receipt)の売上エポック秒(sales_epoch)を日付型に変換し、「月」だけ取り出してレシート番号(receipt_no)、レシートサブ番号(receipt_sub_no)とともに抽出せよ。なお、「月」は0埋め2桁で取り出すこと。データは10件を抽出すれば良い。 ``` # dt.monthでも月を取得できるが、ここでは0埋め2桁で取り出すためstrftimeを利用している pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']], pd.to_datetime(df_receipt['sales_epoch'], unit='s'). \ dt.strftime('%m')],axis=1).head(10) ``` --- > P-051: レシート明細データフレーム(df_receipt)の売上エポック秒を日付型に変換し、「日」だけ取り出してレシート番号(receipt_no)、レシートサブ番号(receipt_sub_no)とともに抽出せよ。なお、「日」は0埋め2桁で取り出すこと。データは10件を抽出すれば良い。 ``` # dt.dayでも日を取得できるが、ここでは0埋め2桁で取り出すためstrftimeを利用している pd.concat([df_receipt[['receipt_no', 'receipt_sub_no']], pd.to_datetime(df_receipt['sales_epoch'], unit='s'). \ dt.strftime('%d')],axis=1).head(10) ``` --- > P-052: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客ID(customer_id)ごとに合計の上、売上金額合計に対して2,000円以下を0、2,000円より大きい金額を1に2値化し、顧客ID、売上金額合計とともに10件表示せよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。 ``` # コード例1 df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python') df_sales_amount = df_sales_amount[['customer_id', 'amount']]. \ groupby('customer_id').sum().reset_index() df_sales_amount['sales_flg'] = df_sales_amount['amount']. \ apply(lambda x: 1 if x > 2000 else 0) df_sales_amount.head(10) # コード例2(np.whereの活用) df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python') df_sales_amount = df_sales_amount[['customer_id', 'amount']]. \ groupby('customer_id').sum().reset_index() df_sales_amount['sales_flg'] = np.where(df_sales_amount['amount'] > 2000, 1, 0) df_sales_amount.head(10) ``` --- > P-053: 顧客データフレーム(df_customer)の郵便番号(postal_cd)に対し、東京(先頭3桁が100〜209のもの)を1、それ以外のものを0に2値化せよ。さらにレシート明細データフレーム(df_receipt)と結合し、全期間において売上実績がある顧客数を、作成した2値ごとにカウントせよ。 ``` # コード例1 df_tmp = df_customer[['customer_id', 'postal_cd']].copy() df_tmp['postal_flg'] = df_tmp['postal_cd']. \ apply(lambda x: 1 if 100 <= int(x[0:3]) <= 209 else 0) pd.merge(df_tmp, df_receipt, how='inner', on='customer_id'). 
\ groupby('postal_flg').agg({'customer_id':'nunique'}) # コード例2(np.where、betweenの活用) df_tmp = df_customer[['customer_id', 'postal_cd']].copy() df_tmp['postal_flg'] = np.where(df_tmp['postal_cd'].str[0:3].astype(int) .between(100, 209), 1, 0) pd.merge(df_tmp, df_receipt, how='inner', on='customer_id'). \ groupby('postal_flg').agg({'customer_id':'nunique'}) ``` --- > P-054: 顧客データフレーム(df_customer)の住所(address)は、埼玉県、千葉県、東京都、神奈川県のいずれかとなっている。都道府県毎にコード値を作成し、顧客ID、住所とともに抽出せよ。値は埼玉県を11、千葉県を12、東京都を13、神奈川県を14とすること。結果は10件表示させれば良い。 ``` pd.concat([df_customer[['customer_id', 'address']], df_customer['address'].str[0:3].map({'埼玉県': '11', '千葉県':'12', '東京都':'13', '神奈川':'14'})],axis=1).head(10) ``` --- > P-055: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客ID(customer_id)ごとに合計し、その合計金額の四分位点を求めよ。その上で、顧客ごとの売上金額合計に対して以下の基準でカテゴリ値を作成し、顧客ID、売上金額合計とともに表示せよ。カテゴリ値は上から順に1〜4とする。結果は10件表示させれば良い。 > > - 最小値以上第一四分位未満 > - 第一四分位以上第二四分位未満 > - 第二四分位以上第三四分位未満 > - 第三四分位以上 ``` # コード例1 df_sales_amount = df_receipt[['customer_id', 'amount']]. \ groupby('customer_id').sum().reset_index() pct25 = np.quantile(df_sales_amount['amount'], 0.25) pct50 = np.quantile(df_sales_amount['amount'], 0.5) pct75 = np.quantile(df_sales_amount['amount'], 0.75) def pct_group(x): if x < pct25: return 1 elif pct25 <= x < pct50: return 2 elif pct50 <= x < pct75: return 3 elif pct75 <= x: return 4 df_sales_amount['pct_group'] = df_sales_amount['amount'].apply(lambda x: pct_group(x)) df_sales_amount.head(10) # 確認用 print('pct25:', pct25) print('pct50:', pct50) print('pct75:', pct75) # コード例2 df_temp = df_receipt.groupby('customer_id')[['amount']].sum() df_temp['quantile'], bins = \ pd.qcut(df_receipt.groupby('customer_id')['amount'].sum(), 4, retbins=True) display(df_temp.head()) print('quantiles:', bins) ``` --- > P-056: 顧客データフレーム(df_customer)の年齢(age)をもとに10歳刻みで年代を算出し、顧客ID(customer_id)、生年月日(birth_day)とともに抽出せよ。ただし、60歳以上は全て60歳代とすること。年代を表すカテゴリ名は任意とする。先頭10件を表示させればよい。 ``` # コード例1 df_customer_era = pd.concat([df_customer[['customer_id', 'birth_day']], df_customer['age']. \ apply(lambda x: min(math.floor(x / 10) * 10, 60))], axis=1) df_customer_era.head(10) # コード例2 df_customer['age_group'] = pd.cut(df_customer['age'], bins=[0, 10, 20, 30, 40, 50, 60, np.inf], right=False) df_customer[['customer_id', 'birth_day', 'age_group']].head(10) ``` --- > P-057: 前問題の抽出結果と性別(gender)を組み合わせ、新たに性別×年代の組み合わせを表すカテゴリデータを作成せよ。組み合わせを表すカテゴリの値は任意とする。先頭10件を表示させればよい。 ``` df_customer_era['era_gender'] = \ df_customer['gender_cd'] + df_customer_era['age'].astype('str') df_customer_era.head(10) ``` --- > P-058: 顧客データフレーム(df_customer)の性別コード(gender_cd)をダミー変数化し、顧客ID(customer_id)とともに抽出せよ。結果は10件表示させれば良い。 ``` pd.get_dummies(df_customer[['customer_id', 'gender_cd']], columns=['gender_cd']).head(10) ``` --- > P-059: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客ID(customer_id)ごとに合計し、売上金額合計を平均0、標準偏差1に標準化して顧客ID、売上金額合計とともに表示せよ。標準化に使用する標準偏差は、不偏標準偏差と標本標準偏差のどちらでも良いものとする。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。結果は10件表示させれば良い。 TIPS: - query()の引数engineで'python'か'numexpr'かを選択でき、デフォルトはインストールされていればnumexprが、無ければpythonが使われます。さらに、文字列メソッドはengine='python'でないとquery()メソッドで使えません。 ``` # skleanのpreprocessing.scaleを利用するため、標本標準偏差で計算されている df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). \ groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() df_sales_amount['amount_ss'] = preprocessing.scale(df_sales_amount['amount']) df_sales_amount.head(10) # コード例2(fitを行うことで、別のデータでも同じの平均・標準偏差で標準化を行える) df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). 
\ groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() scaler = preprocessing.StandardScaler() scaler.fit(df_sales_amount[['amount']]) df_sales_amount['amount_ss'] = scaler.transform(df_sales_amount[['amount']]) df_sales_amount.head(10) ``` --- > P-060: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客ID(customer_id)ごとに合計し、売上金額合計を最小値0、最大値1に正規化して顧客ID、売上金額合計とともに表示せよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。結果は10件表示させれば良い。 ``` # コード例1 df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). \ groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() df_sales_amount['amount_mm'] = \ preprocessing.minmax_scale(df_sales_amount['amount']) df_sales_amount.head(10) # コード例2(fitを行うことで、別のデータでも同じの平均・標準偏差で標準化を行える) df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). \ groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() scaler = preprocessing.MinMaxScaler() scaler.fit(df_sales_amount[['amount']]) df_sales_amount['amount_mm'] = scaler.transform(df_sales_amount[['amount']]) df_sales_amount.head(10) ``` --- > P-061: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客ID(customer_id)ごとに合計し、売上金額合計を常用対数化(底=10)して顧客ID、売上金額合計とともに表示せよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。結果は10件表示させれば良い。 ``` # skleanのpreprocessing.scaleを利用するため、標本標準偏差で計算されている df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). \ groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() df_sales_amount['amount_log10'] = np.log10(df_sales_amount['amount'] + 0.5) df_sales_amount.head(10) ``` --- > P-062: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客ID(customer_id)ごとに合計し、売上金額合計を自然対数化(底=e)して顧客ID、売上金額合計とともに表示せよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。結果は10件表示させれば良い。 ``` df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). \ groupby('customer_id'). 
\ agg({'amount':'sum'}).reset_index() df_sales_amount['amount_loge'] = np.log(df_sales_amount['amount'] + 0.5) df_sales_amount.head(10) ``` --- > P-063: 商品データフレーム(df_product)の単価(unit_price)と原価(unit_cost)から、各商品の利益額を算出せよ。結果は10件表示させれば良い。 ``` df_tmp = df_product.copy() df_tmp['unit_profit'] = df_tmp['unit_price'] - df_tmp['unit_cost'] df_tmp.head(10) ``` --- > P-064: 商品データフレーム(df_product)の単価(unit_price)と原価(unit_cost)から、各商品の利益率の全体平均を算出せよ。 ただし、単価と原価にはNULLが存在することに注意せよ。 ``` df_tmp = df_product.copy() df_tmp['unit_profit_rate'] = \ (df_tmp['unit_price'] - df_tmp['unit_cost']) / df_tmp['unit_price'] df_tmp['unit_profit_rate'].mean(skipna=True) ``` --- > P-065: 商品データフレーム(df_product)の各商品について、利益率が30%となる新たな単価を求めよ。ただし、1円未満は切り捨てること。そして結果を10件表示させ、利益率がおよそ30%付近であることを確認せよ。ただし、単価(unit_price)と原価(unit_cost)にはNULLが存在することに注意せよ。 ``` df_tmp = df_product.copy() df_tmp['new_price'] = np.floor(df_tmp['unit_cost'] / 0.7) df_tmp['new_profit_rate'] = \ (df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price'] df_tmp.head(10) ``` --- > P-066: 商品データフレーム(df_product)の各商品について、利益率が30%となる新たな単価を求めよ。今回は、1円未満を丸めること(四捨五入または偶数への丸めで良い)。そして結果を10件表示させ、利益率がおよそ30%付近であることを確認せよ。ただし、単価(unit_price)と原価(unit_cost)にはNULLが存在することに注意せよ。 ``` df_tmp = df_product.copy() df_tmp['new_price'] = np.round(df_tmp['unit_cost'] / 0.7) df_tmp['new_profit_rate'] = \ (df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price'] df_tmp.head(10) ``` --- > P-067: 商品データフレーム(df_product)の各商品について、利益率が30%となる新たな単価を求めよ。今回は、1円未満を切り上げること。そして結果を10件表示させ、利益率がおよそ30%付近であることを確認せよ。ただし、単価(unit_price)と原価(unit_cost)にはNULLが存在することに注意せよ。 ``` df_tmp = df_product.copy() df_tmp['new_price'] = np.ceil(df_tmp['unit_cost'] / 0.7) df_tmp['new_profit_rate'] = \ (df_tmp['new_price'] - df_tmp['unit_cost']) / df_tmp['new_price'] df_tmp.head(10) ``` --- > P-068: 商品データフレーム(df_product)の各商品について、消費税率10%の税込み金額を求めよ。 1円未満の端数は切り捨てとし、結果は10件表示すれば良い。ただし、単価(unit_price)にはNULLが存在することに注意せよ。 ``` df_tmp = df_product.copy() df_tmp['price_tax'] = np.floor(df_tmp['unit_price'] * 1.1) df_tmp.head(10) ``` --- > P-069: レシート明細データフレーム(df_receipt)と商品データフレーム(df_product)を結合し、顧客毎に全商品の売上金額合計と、カテゴリ大区分(category_major_cd)が"07"(瓶詰缶詰)の売上金額合計を計算の上、両者の比率を求めよ。抽出対象はカテゴリ大区分"07"(瓶詰缶詰)の売上実績がある顧客のみとし、結果は10件表示させればよい。 ``` # コード例1 df_tmp_1 = pd.merge(df_receipt, df_product, how='inner', on='product_cd').groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() df_tmp_2 = pd.merge(df_receipt, df_product.query('category_major_cd == "07"'), how='inner', on='product_cd').groupby('customer_id').\ agg({'amount':'sum'}).reset_index() df_tmp_3 = pd.merge(df_tmp_1, df_tmp_2, how='inner', on='customer_id') df_tmp_3['rate_07'] = df_tmp_3['amount_y'] / df_tmp_3['amount_x'] df_tmp_3.head(10) # コード例2 df_temp = df_receipt.merge(df_product, how='left', on='product_cd'). 
\ groupby(['customer_id', 'category_major_cd'])['amount'].sum().unstack() df_temp = df_temp[df_temp['07'] > 0] df_temp['sum'] = df_temp.sum(axis=1) df_temp['07_rate'] = df_temp['07'] / df_temp['sum'] df_temp.head(10) ``` --- > P-070: レシート明細データフレーム(df_receipt)の売上日(sales_ymd)に対し、顧客データフレーム(df_customer)の会員申込日(application_date)からの経過日数を計算し、顧客ID(customer_id)、売上日、会員申込日とともに表示せよ。結果は10件表示させれば良い(なお、sales_ymdは数値、application_dateは文字列でデータを保持している点に注意)。 ``` df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']], df_customer[['customer_id', 'application_date']], how='inner', on='customer_id') df_tmp = df_tmp.drop_duplicates() df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str')) df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date']) df_tmp['elapsed_date'] = df_tmp['sales_ymd'] - df_tmp['application_date'] df_tmp.head(10) ``` --- > P-071: レシート明細データフレーム(df_receipt)の売上日(sales_ymd)に対し、顧客データフレーム(df_customer)の会員申込日(application_date)からの経過月数を計算し、顧客ID(customer_id)、売上日、会員申込日とともに表示せよ。結果は10件表示させれば良い(なお、sales_ymdは数値、application_dateは文字列でデータを保持している点に注意)。1ヶ月未満は切り捨てること。 ``` df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']], df_customer[['customer_id', 'application_date']], how='inner', on='customer_id') df_tmp = df_tmp.drop_duplicates() df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str')) df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date']) df_tmp['elapsed_date'] = df_tmp[['sales_ymd', 'application_date']]. \ apply(lambda x: relativedelta(x[0], x[1]).years * 12 + \ relativedelta(x[0], x[1]).months, axis=1) df_tmp.sort_values('customer_id').head(10) ``` --- > P-072: レシート明細データフレーム(df_receipt)の売上日(sales_ymd)に対し、顧客データフレーム(df_customer)の会員申込日(application_date)からの経過年数を計算し、顧客ID(customer_id)、売上日、会員申込日とともに表示せよ。結果は10件表示させれば良い(なお、sales_ymdは数値、application_dateは文字列でデータを保持している点に注意)。1年未満は切り捨てること。 ``` df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']], df_customer[['customer_id', 'application_date']], how='inner', on='customer_id') df_tmp = df_tmp.drop_duplicates() df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str')) df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date']) df_tmp['elapsed_date'] = df_tmp[['sales_ymd', 'application_date']]. \ apply(lambda x: relativedelta(x[0], x[1]).years, axis=1) df_tmp.head(10) ``` --- > P-073: レシート明細データフレーム(df_receipt)の売上日(sales_ymd)に対し、顧客データフレーム(df_customer)の会員申込日(application_date)からのエポック秒による経過時間を計算し、顧客ID(customer_id)、売上日、会員申込日とともに表示せよ。結果は10件表示させれば良い(なお、sales_ymdは数値、application_dateは文字列でデータを保持している点に注意)。なお、時間情報は保有していないため各日付は0時0分0秒を表すものとする。 ``` df_tmp = pd.merge(df_receipt[['customer_id', 'sales_ymd']], df_customer[['customer_id', 'application_date']], how='inner', on='customer_id') df_tmp = df_tmp.drop_duplicates() df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str')) df_tmp['application_date'] = pd.to_datetime(df_tmp['application_date']) df_tmp['elapsed_date'] = \ (df_tmp['sales_ymd'].view(np.int64) / 10**9) - (df_tmp['application_date'].\ view(np.int64) / 10**9) df_tmp.head(10) ``` --- > P-074: レシート明細データフレーム(df_receipt)の売上日(sales_ymd)に対し、当該週の月曜日からの経過日数を計算し、売上日、当該週の月曜日付とともに表示せよ。結果は10件表示させれば良い(なお、sales_ymdは数値でデータを保持している点に注意)。 ``` df_tmp = df_receipt[['customer_id', 'sales_ymd']] df_tmp = df_tmp.drop_duplicates() df_tmp['sales_ymd'] = pd.to_datetime(df_tmp['sales_ymd'].astype('str')) df_tmp['monday'] = df_tmp['sales_ymd']. 
\ apply(lambda x: x - relativedelta(days=x.weekday())) df_tmp['elapsed_weekday'] = df_tmp['sales_ymd'] - df_tmp['monday'] df_tmp.head(10) ``` --- > P-075: 顧客データフレーム(df_customer)からランダムに1%のデータを抽出し、先頭から10件データを抽出せよ。 ``` df_customer.sample(frac=0.01).head(10) ``` --- > P-076: 顧客データフレーム(df_customer)から性別(gender_cd)の割合に基づきランダムに10%のデータを層化抽出し、性別ごとに件数を集計せよ。 ``` # sklearn.model_selection.train_test_splitを使用した例 _, df_tmp = train_test_split(df_customer, test_size=0.1, stratify=df_customer['gender_cd']) df_tmp.groupby('gender_cd').agg({'customer_id' : 'count'}) df_tmp.head(10) ``` --- > P-077: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客単位に合計し、合計した売上金額の外れ値を抽出せよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。なお、ここでは外れ値を平均から3σ以上離れたものとする。結果は10件表示させれば良い。 ``` # skleanのpreprocessing.scaleを利用するため、標本標準偏差で計算されている df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). \ groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() df_sales_amount['amount_ss'] = preprocessing.scale(df_sales_amount['amount']) df_sales_amount.query('abs(amount_ss) >= 3').head(10) ``` --- > P-078: レシート明細データフレーム(df_receipt)の売上金額(amount)を顧客単位に合計し、合計した売上金額の外れ値を抽出せよ。ただし、顧客IDが"Z"から始まるのものは非会員を表すため、除外して計算すること。なお、ここでは外れ値を第一四分位と第三四分位の差であるIQRを用いて、「第一四分位数-1.5×IQR」よりも下回るもの、または「第三四分位数+1.5×IQR」を超えるものとする。結果は10件表示させれば良い。 ``` df_sales_amount = df_receipt.query('not customer_id.str.startswith("Z")', engine='python'). \ groupby('customer_id'). \ agg({'amount':'sum'}).reset_index() pct75 = np.percentile(df_sales_amount['amount'], q=75) pct25 = np.percentile(df_sales_amount['amount'], q=25) iqr = pct75 - pct25 amount_low = pct25 - (iqr * 1.5) amount_hight = pct75 + (iqr * 1.5) df_sales_amount.query('amount < @amount_low or @amount_hight < amount').head(10) ``` --- > P-079: 商品データフレーム(df_product)の各項目に対し、欠損数を確認せよ。 ``` df_product.isnull().sum() ``` --- > P-080: 商品データフレーム(df_product)のいずれかの項目に欠損が発生しているレコードを全て削除した新たなdf_product_1を作成せよ。なお、削除前後の件数を表示させ、前設問で確認した件数だけ減少していることも確認すること。 ``` df_product_1 = df_product.copy() print('削除前:', len(df_product_1)) df_product_1.dropna(inplace=True) print('削除後:', len(df_product_1)) ``` --- > P-081: 単価(unit_price)と原価(unit_cost)の欠損値について、それぞれの平均値で補完した新たなdf_product_2を作成せよ。なお、平均値については1円未満を丸めること(四捨五入または偶数への丸めで良い)。補完実施後、各項目について欠損が生じていないことも確認すること。 ``` df_product_2 = df_product.fillna({ 'unit_price':np.round(np.nanmean(df_product['unit_price'])), 'unit_cost':np.round(np.nanmean(df_product['unit_cost']))}) df_product_2.isnull().sum() ``` --- > P-082: 単価(unit_price)と原価(unit_cost)の欠損値について、それぞれの中央値で補完した新たなdf_product_3を作成せよ。なお、中央値については1円未満を丸めること(四捨五入または偶数への丸めで良い)。補完実施後、各項目について欠損が生じていないことも確認すること。 ``` df_product_3 = df_product.fillna({ 'unit_price':np.round(np.nanmedian(df_product['unit_price'])), 'unit_cost':np.round(np.nanmedian(df_product['unit_cost']))}) df_product_3.isnull().sum() ``` --- > P-083: 単価(unit_price)と原価(unit_cost)の欠損値について、各商品の小区分(category_small_cd)ごとに算出した中央値で補完した新たなdf_product_4を作成せよ。なお、中央値については1円未満を丸めること(四捨五入または偶数への丸めで良い)。補完実施後、各項目について欠損が生じていないことも確認すること。 ``` # コード例1 df_tmp = df_product.groupby('category_small_cd'). \ agg({'unit_price':'median', 'unit_cost':'median'}).reset_index() df_tmp.columns = ['category_small_cd', 'median_price', 'median_cost'] df_product_4 = pd.merge(df_product, df_tmp, how='inner', on='category_small_cd') df_product_4['unit_price'] = df_product_4[['unit_price', 'median_price']]. \ apply(lambda x: np.round(x[1]) if np.isnan(x[0]) else x[0], axis=1) df_product_4['unit_cost'] = df_product_4[['unit_cost', 'median_cost']]. 
\ apply(lambda x: np.round(x[1]) if np.isnan(x[0]) else x[0], axis=1) df_product_4.isnull().sum() # コード例2(maskの活用) df_tmp = (df_product .groupby('category_small_cd') .agg(median_price=('unit_price', 'median'), median_cost=('unit_cost', 'median')) .reset_index()) df_product_4 = df_product.merge(df_tmp, how='inner', on='category_small_cd') df_product_4['unit_price'] = (df_product_4['unit_price'] .mask(df_product_4['unit_price'].isnull(), df_product_4['median_price'].round())) df_product_4['unit_cost'] = (df_product_4['unit_cost'] .mask(df_product_4['unit_cost'].isnull(), df_product_4['median_cost'].round())) df_product_4.isnull().sum() # コード例3(fillna、transformの活用) df_product_4 = df_product.copy() for x in ['unit_price', 'unit_cost']: df_product_4[x] = (df_product_4[x] .fillna(df_product_4.groupby('category_small_cd')[x] .transform('median') .round())) df_product_4.isnull().sum() ``` --- > P-084: 顧客データフレーム(df_customer)の全顧客に対し、全期間の売上金額に占める2019年売上金額の割合を計算せよ。ただし、売上実績がない場合は0として扱うこと。そして計算した割合が0超のものを抽出せよ。 結果は10件表示させれば良い。また、作成したデータにNAやNANが存在しないことを確認せよ。 ``` df_tmp_1 = df_receipt.query('20190101 <= sales_ymd <= 20191231') df_tmp_1 = pd.merge(df_customer['customer_id'], df_tmp_1[['customer_id', 'amount']], how='left', on='customer_id'). \ groupby('customer_id').sum().reset_index(). \ rename(columns={'amount':'amount_2019'}) df_tmp_2 = pd.merge(df_customer['customer_id'], df_receipt[['customer_id', 'amount']], how='left', on='customer_id'). \ groupby('customer_id').sum().reset_index() df_tmp = pd.merge(df_tmp_1, df_tmp_2, how='inner', on='customer_id') df_tmp['amount_2019'] = df_tmp['amount_2019'].fillna(0) df_tmp['amount'] = df_tmp['amount'].fillna(0) df_tmp['amount_rate'] = df_tmp['amount_2019'] / df_tmp['amount'] df_tmp['amount_rate'] = df_tmp['amount_rate'].fillna(0) df_tmp.query('amount_rate > 0').head(10) df_tmp.isnull().sum() ``` --- > P-085: 顧客データフレーム(df_customer)の全顧客に対し、郵便番号(postal_cd)を用いて経度緯度変換用データフレーム(df_geocode)を紐付け、新たなdf_customer_1を作成せよ。ただし、複数紐づく場合は経度(longitude)、緯度(latitude)それぞれ平均を算出すること。 ``` df_customer_1 = pd.merge(df_customer[['customer_id', 'postal_cd']], df_geocode[['postal_cd', 'longitude' ,'latitude']], how='inner', on='postal_cd') df_customer_1 = df_customer_1.groupby('customer_id'). \ agg({'longitude':'mean', 'latitude':'mean'}).reset_index(). \ rename(columns={'longitude':'m_longitude', 'latitude':'m_latitude'}) df_customer_1 = pd.merge(df_customer, df_customer_1, how='inner', on='customer_id') df_customer_1.head(3) ``` --- > P-086: 前設問で作成した緯度経度つき顧客データフレーム(df_customer_1)に対し、申込み店舗コード(application_store_cd)をキーに店舗データフレーム(df_store)と結合せよ。そして申込み店舗の緯度(latitude)・経度情報(longitude)と顧客の緯度・経度を用いて距離(km)を求め、顧客ID(customer_id)、顧客住所(address)、店舗住所(address)とともに表示せよ。計算式は簡易式で良いものとするが、その他精度の高い方式を利用したライブラリを利用してもかまわない。結果は10件表示すれば良い。 $$ 緯度(ラジアン):\phi \\ 経度(ラジアン):\lambda \\ 距離L = 6371 * arccos(sin \phi_1 * sin \phi_2 + cos \phi_1 * cos \phi_2 * cos(\lambda_1 − \lambda_2)) $$ ``` # コード例1 def calc_distance(x1, y1, x2, y2): distance = 6371 * math.acos(math.sin(math.radians(y1)) * math.sin(math.radians(y2)) + math.cos(math.radians(y1)) * math.cos(math.radians(y2)) * math.cos(math.radians(x1) - math.radians(x2))) return distance df_tmp = pd.merge(df_customer_1, df_store, how='inner', left_on='application_store_cd', right_on='store_cd') df_tmp['distance'] = df_tmp[['m_longitude', 'm_latitude','longitude', 'latitude']]. 
\ apply(lambda x: calc_distance(x[0], x[1], x[2], x[3]), axis=1) df_tmp[['customer_id', 'address_x', 'address_y', 'distance']].head(10) # コード例2 def calc_distance_numpy(x1, y1, x2, y2): x1_r = np.radians(x1) x2_r = np.radians(x2) y1_r = np.radians(y1) y2_r = np.radians(y2) return 6371 * np.arccos(np.sin(y1_r) * np.sin(y2_r) + np.cos(y1_r) * np.cos(y2_r) * np.cos(x1_r - x2_r)) df_tmp = df_customer_1.merge(df_store, how='inner', left_on='application_store_cd', right_on='store_cd') df_tmp['distance'] = calc_distance_numpy(df_tmp['m_longitude'], df_tmp['m_latitude'], df_tmp['longitude'], df_tmp['latitude']) df_tmp[['customer_id', 'address_x', 'address_y', 'distance']].head(10) ``` --- > P-087: 顧客データフレーム(df_customer)では、異なる店舗での申込みなどにより同一顧客が複数登録されている。名前(customer_name)と郵便番号(postal_cd)が同じ顧客は同一顧客とみなし、1顧客1レコードとなるように名寄せした名寄顧客データフレーム(df_customer_u)を作成せよ。ただし、同一顧客に対しては売上金額合計が最も高いものを残すものとし、売上金額合計が同一もしくは売上実績がない顧客については顧客ID(customer_id)の番号が小さいものを残すこととする。 ``` df_receipt_tmp = df_receipt.groupby('customer_id') \ .agg(sum_amount=('amount','sum')).reset_index() df_customer_u = pd.merge(df_customer, df_receipt_tmp, how='left', on='customer_id') df_customer_u['sum_amount'] = df_customer_u['sum_amount'].fillna(0) df_customer_u = df_customer_u.sort_values(['sum_amount', 'customer_id'], ascending=[False, True]) df_customer_u.drop_duplicates(subset=['customer_name', 'postal_cd'], keep='first', inplace=True) print('df_customer:', len(df_customer), 'df_customer_u:', len(df_customer_u), 'diff:', len(df_customer) - len(df_customer_u)) ``` --- > P-088: 前設問で作成したデータを元に、顧客データフレームに統合名寄IDを付与したデータフレーム(df_customer_n)を作成せよ。ただし、統合名寄IDは以下の仕様で付与するものとする。 > > - 重複していない顧客:顧客ID(customer_id)を設定 > - 重複している顧客:前設問で抽出したレコードの顧客IDを設定 ``` df_customer_n = pd.merge(df_customer, df_customer_u[['customer_name', 'postal_cd', 'customer_id']], how='inner', on =['customer_name', 'postal_cd']) df_customer_n.rename(columns={'customer_id_x':'customer_id', 'customer_id_y':'integration_id'}, inplace=True) print('ID数の差', len(df_customer_n['customer_id'].unique()) - len(df_customer_n['integration_id'].unique())) ``` --- > P-閑話: df_customer_1, df_customer_nは使わないので削除する。 ``` del df_customer_1 del df_customer_n ``` --- > P-089: 売上実績がある顧客に対し、予測モデル構築のため学習用データとテスト用データに分割したい。それぞれ8:2の割合でランダムにデータを分割せよ。 ``` df_sales= df_receipt.groupby('customer_id').agg({'amount':sum}).reset_index() df_tmp = pd.merge(df_customer, df_sales['customer_id'], how='inner', on='customer_id') df_train, df_test = train_test_split(df_tmp, test_size=0.2, random_state=71) print('学習データ割合: ', len(df_train) / len(df_tmp)) print('テストデータ割合: ', len(df_test) / len(df_tmp)) ``` --- > P-090: レシート明細データフレーム(df_receipt)は2017年1月1日〜2019年10月31日までのデータを有している。売上金額(amount)を月次で集計し、学習用に12ヶ月、テスト用に6ヶ月のモデル構築用データを3セット作成せよ。 ``` df_tmp = df_receipt[['sales_ymd', 'amount']].copy() df_tmp['sales_ym'] = df_tmp['sales_ymd'].astype('str').str[0:6] df_tmp = df_tmp.groupby('sales_ym').agg({'amount':'sum'}).reset_index() # 関数化することで長期間データに対する多数のデータセットもループなどで処理できるようにする def split_data(df, train_size, test_size, slide_window, start_point): train_start = start_point * slide_window test_start = train_start + train_size return df[train_start : test_start], df[test_start : test_start + test_size] df_train_1, df_test_1 = split_data(df_tmp, train_size=12, test_size=6, slide_window=6, start_point=0) df_train_2, df_test_2 = split_data(df_tmp, train_size=12, test_size=6, slide_window=6, start_point=1) df_train_3, df_test_3 = split_data(df_tmp, train_size=12, test_size=6, slide_window=6, start_point=2) df_train_1 df_test_1 ``` --- > P-091: 
顧客データフレーム(df_customer)の各顧客に対し、売上実績がある顧客数と売上実績がない顧客数が1:1となるようにアンダーサンプリングで抽出せよ。 ``` # コード例1 #unbalancedのubUnderを使った例 df_tmp = df_receipt.groupby('customer_id').agg({'amount':'sum'}).reset_index() df_tmp = pd.merge(df_customer, df_tmp, how='left', on='customer_id') df_tmp['buy_flg'] = df_tmp['amount'].apply(lambda x: 0 if np.isnan(x) else 1) print('0の件数', len(df_tmp.query('buy_flg == 0'))) print('1の件数', len(df_tmp.query('buy_flg == 1'))) positive_count = len(df_tmp.query('buy_flg == 1')) rs = RandomUnderSampler(random_state=71) df_sample, _ = rs.fit_resample(df_tmp, df_tmp.buy_flg) print('0の件数', len(df_sample.query('buy_flg == 0'))) print('1の件数', len(df_sample.query('buy_flg == 1'))) # コード例2 #unbalancedのubUnderを使った例 df_tmp = df_customer.merge(df_receipt .groupby('customer_id')['amount'].sum() .reset_index(), how='left', on='customer_id') df_tmp['buy_flg'] = np.where(df_tmp['amount'].isnull(), 0, 1) print("サンプリング前のbuy_flgの件数") print(df_tmp['buy_flg'].value_counts(), "\n") positive_count = (df_tmp['buy_flg'] == 1).sum() rs = RandomUnderSampler(random_state=71) df_sample, _ = rs.fit_resample(df_tmp, df_tmp.buy_flg) print("サンプリング後のbuy_flgの件数") print(df_sample['buy_flg'].value_counts()) ``` --- > P-092: 顧客データフレーム(df_customer)では、性別に関する情報が非正規化の状態で保持されている。これを第三正規化せよ。 ``` df_gender = df_customer[['gender_cd', 'gender']].drop_duplicates() df_customer_s = df_customer.drop(columns='gender') ``` --- > P-093: 商品データフレーム(df_product)では各カテゴリのコード値だけを保有し、カテゴリ名は保有していない。カテゴリデータフレーム(df_category)と組み合わせて非正規化し、カテゴリ名を保有した新たな商品データフレームを作成せよ。 ``` df_product_full = pd.merge(df_product, df_category[['category_small_cd', 'category_major_name', 'category_medium_name', 'category_small_name']], how = 'inner', on = 'category_small_cd') ``` --- > P-094: 先に作成したカテゴリ名付き商品データを以下の仕様でファイル出力せよ。なお、出力先のパスはdata配下とする。 > > - ファイル形式はCSV(カンマ区切り) > - ヘッダ有り > - 文字コードはUTF-8 ``` # コード例1 df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.csv', encoding='UTF-8', index=False) # コード例2(BOM付きでExcelの文字化けを防ぐ) df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.csv', encoding='utf_8_sig', index=False) ``` --- > P-095: 先に作成したカテゴリ名付き商品データを以下の仕様でファイル出力せよ。なお、出力先のパスはdata配下とする。 > > - ファイル形式はCSV(カンマ区切り) > - ヘッダ有り > - 文字コードはCP932 ``` df_product_full.to_csv('../data/P_df_product_full_CP932_header.csv', encoding='CP932', index=False) ``` --- > P-096: 先に作成したカテゴリ名付き商品データを以下の仕様でファイル出力せよ。なお、出力先のパスはdata配下とする。 > > - ファイル形式はCSV(カンマ区切り) > - ヘッダ無し > - 文字コードはUTF-8 ``` df_product_full.to_csv('../data/P_df_product_full_UTF-8_noh.csv', header=False ,encoding='UTF-8', index=False) ``` --- > P-097: 先に作成した以下形式のファイルを読み込み、データフレームを作成せよ。また、先頭3件を表示させ、正しくとりまれていることを確認せよ。 > > - ファイル形式はCSV(カンマ区切り) > - ヘッダ有り > - 文字コードはUTF-8 ``` df_tmp = pd.read_csv('../data/P_df_product_full_UTF-8_header.csv', dtype={'category_major_cd':str, 'category_medium_cd':str, 'category_small_cd':str}, encoding='UTF-8') df_tmp.head(3) ``` --- > P-098: 先に作成した以下形式のファイルを読み込み、データフレームを作成せよ。また、先頭3件を表示させ、正しくとりまれていることを確認せよ。 > > - ファイル形式はCSV(カンマ区切り) > - ヘッダ無し > - 文字コードはUTF-8 ``` df_tmp = pd.read_csv('../data/P_df_product_full_UTF-8_noh.csv', dtype={1:str, 2:str, 3:str}, encoding='UTF-8', header=None) df_tmp.head(3) ``` --- > P-099: 先に作成したカテゴリ名付き商品データを以下の仕様でファイル出力せよ。なお、出力先のパスはdata配下とする。 > > - ファイル形式はTSV(タブ区切り) > - ヘッダ有り > - 文字コードはUTF-8 ``` df_product_full.to_csv('../data/P_df_product_full_UTF-8_header.tsv', sep='\t', encoding='UTF-8', index=False) ``` --- > P-100: 先に作成した以下形式のファイルを読み込み、データフレームを作成せよ。また、先頭3件を表示させ、正しくとりまれていることを確認せよ。 > > - ファイル形式はTSV(タブ区切り) > - ヘッダ有り > - 文字コードはUTF-8 ``` df_tmp = 
pd.read_table('../data/P_df_product_full_UTF-8_header.tsv', dtype={'category_major_cd':str, 'category_medium_cd':str, 'category_small_cd':str}, encoding='UTF-8') df_tmp.head(3) ``` # これで100本終わりです。おつかれさまでした!